I don't think I ever learned something because it would make me a better worker or provide me with more economic resources. When I was a child and needed tutoring, I got lucky to have a somewhat curious mind, and I tried to satiate it.
Of course, as an adult I choose to do things that are useful, overall, and that normally translates into being a person with skills that other people pay for. But the explicit Bayesian calculation about knowledge and money is not one I tend to do; what interests me interests me. Still, when trying to learn something as an adult, the friction of the subject determines how likely I am to try to acquire the information. For example, I have read about laser gyros, but the mountain of prerequisite knowledge was too insurmountable to actually learn how they really work.
If LLMs can lower that friction, I guess everybody will be more likely to learn things. Also, there is no “big mystery” in most fields; you just need a structured idea of a certain set of concepts, some more palatable than others. (I know what a sigmoid activation is, but the high-school knowledge about the function is not very fresh in my mind.) These tools could help with that.
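(As an aside, the sigmoid activation mentioned above is just the logistic function from high-school math; a minimal sketch, assuming nothing beyond the standard library:)

```python
import math

def sigmoid(x: float) -> float:
    """Logistic sigmoid: squashes any real number into the interval (0, 1)."""
    return 1.0 / (1.0 + math.exp(-x))

# Centered at 0.5 for x = 0, saturating toward 0 and 1 at the tails.
print(sigmoid(0.0))   # 0.5
print(sigmoid(4.0))   # ~0.982
print(sigmoid(-4.0))  # ~0.018
```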
(I tried running this comment through the Bing writing tool to get a better output, and it just came out more corporate.) English is not my first language, but this just feels worse:
I have always been curious about learning new things, regardless of their economic value or usefulness for my career. When I was a child and needed tutoring, I did not choose subjects based on how they would make me a better worker or provide me with more resources. Instead, I followed my interests and tried to satisfy them. Of course, as an adult, I also consider the practical aspects of learning something new, such as how it can benefit me professionally or personally.
But I do not usually make explicit Bayesian calculations about knowledge and money; rather, I learn what interests me. However, sometimes the difficulty of learning something can discourage me from pursuing it further. For example, I have read about laser gyros, but the amount of knowledge required to understand how they work was too overwhelming for me.
If there were tools that could lower the friction of learning new things, such as language models that could explain concepts in simple terms or provide structured overviews of different fields, I think everyone would be more likely to learn more things.
After all, most fields do not have “big mysteries” that are impossible to grasp; they just require familiarity with certain concepts and their relationships. Some of these concepts may be more intuitive than others (for instance, I know what a sigmoid activation is in neural networks, but I do not remember much about the function itself from high school math). These tools could help bridge these gaps and make learning easier and more enjoyable.
If LLMs can lower that friction, I guess everybody will be more likely to learn things. Also, there is no “big mystery” in most fields; you just need a structured idea of a certain set of concepts.
This is not the case. Maybe your memory capacity is naturally on the upper side of the human range (e.g., 9 chunks rather than 4), as is your IQ, which makes learning seem doable to you. The fact is, most people, no matter how hard they try, are probably incapable of learning calculus, let alone tensor algebra or something even more abstract or complex, such as the math needed even to begin to approach string theory, or anything that requires keeping track of many moving pieces simultaneously. For example, it is often said that no human can properly understand how the brain works: doing so would require simultaneously holding dozens or hundreds of moving pieces in one's head. An AI can do this, but a human can't.
I saw David Deutsch make this mistake: because he himself is a genius and can relatively easily understand anything that any other human can write, he conjectured that “human understanding has universal reach”. Stephen Wolfram, another genius, concurred.
This is why I don’t place much confidence in projections from people like Sam Altman about how the population will be affected by TAI, either. Consider that they are very likely completely out of touch with the average person, and so have absolutely terrible intuitions about how people respond to anything, let alone about forecasting the long-term implications of TAI for them. If you got some normal people together and made sure they took the proposition of TAI and everything it entails (such as widespread joblessness) seriously, I suspect you would encounter a lot more fear and apprehension about the kinds of behaviours and ways of living it is going to produce.