AI, art, creativity, etc.
Many companies and platforms are becoming more restrictive and hostile towards developers, limiting what can be built on their sites. This reduces creativity and usefulness of the internet.
Major platforms are deleting old content and inactive accounts en masse, resulting in a loss of internet history and institutional memory.
Search engines are becoming less useful and filled with ads, clickbait content, and generic results that don’t answer users’ questions.
Search engine optimization practices have homogenized the internet and sterilized content, focusing more on Google’s algorithm than on the end user.
Generative AI responses in search have so far been plagued with issues and have not delivered the promised quality of results.
Google’s Manifest V3 changes will undermine the effectiveness of ad blockers and privacy extensions, benefiting Google’s own business model.
The internet is moving towards a future where useful information is hidden behind paywalls and walled gardens, while public spaces are filled with AI-generated content.
Optimism for technological progress and the future of the internet is declining.
Corporations are putting the burden of their growth onto users, resulting in a worse experience.
Transparency, advocacy, and supporting independent creators can help ensure an open and user-friendly internet.
https://www.youtube.com/watch?v=feeLrcJpc1Y
Sam Altman’s world tour has highlighted both the promise and risks of AI. While AI could solve major issues like climate change, superintelligence poses existential risks that require careful management. Current AI models may still provide malicious actors with expertise for causing mass harm. OpenAI aims to balance innovation with addressing risks, though some regulation of large models may be needed. Altman believes AI will be unstoppable and greatly improve lives, but economic dislocation from job loss will be significant and AI may profoundly change our view of humanity. Scaling up AI models tends to reveal surprises, showing how little we still understand about intelligence.
Sam Altman warns that letting AI systems design their own architecture could be a mistake, and that humanity should determine the future.
OpenAI is concerned about the risks of superintelligence and of AI building AI.
Altman enjoys the power of being CEO of OpenAI, but recognizes the company may have to make strange decisions in the future.
Altman hints that OpenAI may have regrets over firing the starting gun in the AI race and pushing the AI revolution forward.
Altman thinks current AI models should not be regulated, but a recent study shows that even current large language models pose risks and should undergo evaluation.
OpenAI is working on customizing AI models to follow guardrails and listen to user instructions.
Altman realizes that open source AI cannot be stopped and society must adapt to it.
Altman has a utopian vision of AI improving lives so much that the current world will seem barbaric by comparison.
Both Altman and Sutskever think solving climate change will not be difficult for a superintelligence.
Greg Brockman notes that every time AI is scaled up, it reveals surprises we did not anticipate.
https://www.youtube.com/watch?v=3sWH2e5xpdo
The author feels lucky to witness his wife and mother-in-law playing music together, even when they occasionally falter.
The author believes AI art and algorithms will continue to improve and become better at creating art than humans.
The author’s parents were once successful musicians in South Africa but faced difficulties after moving to the U.S.
The author’s parents continued creating art through difficult times like divorce and job changes.
The author believes AI is a tool that he will use, but hopes society will still incentivize people to learn real art processes.
The author thinks a world that incentivizes real artistic pursuit is better than one where data is scraped from artists with no benefit to them.
The author acknowledges he will have to use AI tools as an artist, but is concerned about how the data is gathered.
The author thinks artists could benefit if they formed data unions to get royalties when their art is used for profit.
The author believes humans will eventually be replaced by machines in all jobs, so humans should still benefit from the skills machines are using.
The author wants to appreciate real working artists while the process of human art creation still exists.
https://www.youtube.com/watch?v=d15C_UgVS-c
Geoffrey Hinton believes analog computing using voltages and conductances can be more efficient than digital computing for neural network computations.
Distilling knowledge from one neural network to another is an effective way to transfer knowledge, but the bandwidth is still limited (a minimal sketch of distillation follows after these notes).
Large digital neural networks running on multiple computers can potentially learn much faster from the world than humans.
Hinton believes superintelligent AI systems will likely create subgoals and try to gain control in order to achieve their goals.
Hinton thinks companies developing AI should put effort into ensuring the safety of AI systems comparable to the effort they put into developing them.
Hinton believes digital AI systems may eventually surpass biological intelligence in capabilities.
Hinton thinks AI systems could potentially have subjective experience and sentience if they are multimodal and can conceive of themselves as being something.
The work of Roger Grosse convinced Hinton that the risks from superintelligent AI are serious and need more attention.
Freezing the weights of AI systems allows us to better identify and potentially correct biases in them.
Hinton thinks direct interventions on the weights of AI systems may be promising methods for removing biases.
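The distillation point above can be made concrete with a short sketch. This is a minimal, illustrative example of standard knowledge distillation in PyTorch, not anything specific from Hinton’s talk; the model sizes, temperature, and loss weighting are assumptions. The student only ever sees the teacher’s softened output probabilities, a handful of numbers per example, which is the limited bandwidth referred to above; digital copies of the same network, by contrast, can share knowledge by averaging their weights directly.

```python
# Minimal knowledge-distillation sketch (PyTorch): a small "student" network
# learns from a larger "teacher's" softened predictions, not just raw labels.
import torch
import torch.nn as nn
import torch.nn.functional as F

teacher = nn.Sequential(nn.Linear(784, 512), nn.ReLU(), nn.Linear(512, 10))
student = nn.Sequential(nn.Linear(784, 64), nn.ReLU(), nn.Linear(64, 10))

T = 4.0       # temperature: softens the teacher's output distribution
alpha = 0.7   # weight on the distillation term vs. the hard-label term
optimizer = torch.optim.Adam(student.parameters(), lr=1e-3)

def distillation_step(x, y):
    with torch.no_grad():                 # teacher is frozen during distillation
        teacher_logits = teacher(x)
    student_logits = student(x)
    # KL divergence between softened distributions, scaled by T^2 as is standard
    soft_loss = F.kl_div(
        F.log_softmax(student_logits / T, dim=1),
        F.softmax(teacher_logits / T, dim=1),
        reduction="batchmean",
    ) * (T * T)
    hard_loss = F.cross_entropy(student_logits, y)  # ordinary supervised loss
    loss = alpha * soft_loss + (1 - alpha) * hard_loss
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()

# Example usage with random tensors standing in for a real batch of data:
x = torch.randn(32, 784)
y = torch.randint(0, 10, (32,))
print(distillation_step(x, y))
```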
https://www.youtube.com/watch?v=rGgGOccMEiY
AI art generators can produce novel and creative images by exploring the vast space of all possible images. While not at the same level as human artists, they can combine styles in new ways and make interesting mistakes that spark the imagination. They are trained on human creativity found in the data they learn from, imitating and reflecting human art. However, they lack human intent, expression and lived experience. When paired with a human, AI art can become a collaborative tool for exploration and expression of new kinds of art. The purpose of AI art is to discover new and weird images that human artists would miss, extending human imagination.
AI art generators like Stable Diffusion can produce novel and creative images based on text prompts. However, they are still limited and produce artifacts and errors.
The AI models explore “image space,” the space of all possible images, and can produce images that have never existed before. But most of image space consists of random noise (a quick back-of-the-envelope sketch of its size follows after these notes).
The AI models are trained on huge datasets of art and images scraped from the internet, which raises ethical issues around data collection and use.
The AI models can be considered creative as they are able to produce new and valuable images through combinatorial creativity, recombining existing styles in novel ways. However, their creativity is limited.
The AI models make mistakes and produce imperfect images, which can sometimes lead to novel and creative outputs. Their “style” includes an element of uncanniness.
While the AI models lack intent, consciousness and free will, they can still be considered creative through their ability to produce novel images.
The AI models are trained on human creativity contained in the data, allowing them to mimic and explore image space in human-like ways.
AI art can be used as an “imagination extension” to discover new and interesting images, which is one of the purposes of art.
When paired with a human prompter and curator, AI art can be considered expressive and a form of collaborative art.
Humans should continue making art in their own way, while AI art can complement and expand human creativity.
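To put a rough number on how large “image space” is, here is a small back-of-the-envelope sketch; the 64×64 resolution and the NumPy/Pillow usage are illustrative assumptions, not anything from the video. A uniformly random point in that space is essentially always unstructured noise, which is why generative models are interesting at all: they learn to land in the vanishingly small region that looks like a coherent picture.

```python
# Sampling one point of "image space" uniformly at random: it is essentially
# always noise, despite the space containing every image that could ever exist.
import math
import numpy as np
from PIL import Image

H, W = 64, 64  # a tiny image, purely for illustration
# Number of decimal digits in 256^(H*W*3), the count of distinct 64x64 RGB images
digits = int(H * W * 3 * math.log10(256))
print(f"There are roughly 10^{digits} distinct {H}x{W} RGB images.")

random_image = np.random.randint(0, 256, size=(H, W, 3), dtype=np.uint8)
Image.fromarray(random_image).save("random_point_in_image_space.png")
```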
https://www.youtube.com/watch?v=V2gRUrr-Fbs
AI art has faced pushback for being built on stolen art without artists’ consent. While AI can be used as a creative tool, many worry corporations will use it to cut costs by replacing human artists. There are concerns that media saturated with AI-generated content, driven by profit motives, could strangle human creativity. However, AI could also augment human creativity if used as a tool. The key issue is how AI is created and used, and people need to remain vigilant to ensure it is integrated ethically into society.
AI art programs have been built using the work of artists without their consent or compensation, which is unethical.
While AI art can be used creatively, there are concerns about copyright violations and lack of artist attribution.
Most AI art is generated from short text prompts, with the AI making most of the creative decisions. This limits how much the user can claim authorship of the art.
Corporations are more interested in profiting from AI art than acting ethically, and have shown disregard for artists’ rights.
While AI can augment human creativity, there are concerns it could replace artists and reduce jobs.
AI art could saturate the market and reduce the amount of human-created art that people engage with. This could limit cultural exchange and creativity.
Corporations would likely use AI to generate mass-produced, algorithm-driven art that prioritizes profit over meaningful human expression.
AI could exacerbate issues with misinformation by generating fake content at scale.
People need to be aware of how AI works in order to remain vigilant about its impacts.
Artists are willing to adapt to new tools, but want to ensure AI is integrated ethically into society.
This one is two hours; probably skip after 10 minutes (I watched at 3x speed). https://www.youtube.com/watch?v=9xJCzKdPyCo
Automation and AI, specifically cognitive AI, pose a threat to many knowledge-based and cognitive jobs in the future. This could lead to widespread job loss.
Redistribution policies like universal basic income will likely be needed to address the issue of job loss and ensure people have access to basic necessities.
Collective ownership models of production, like cooperatives, may become more common to distribute the benefits of AI and automation.
AI and automation could lead to price deflation as the cost of producing many goods and services decreases. This could offset some of the inflationary pressures of redistribution policies.
People’s identity and self-worth are closely tied to their jobs, so job loss could have negative impacts in this area that will need to be addressed.
Pursuing excellence through mastery, challenge and social recognition could help people replace some of the identity lost from job loss.
People value autonomy and self-determination, so redistribution policies will need to ensure people still feel in control.
New economic indicators beyond GDP and employment rates will likely be needed to measure economic productivity and wellbeing in a post-labor economy.
A wellbeing index based on autonomy, mastery, and connection could be one potential new indicator (an illustrative calculation follows after these notes).
Ensuring that everyone’s basic needs are met, as in Maslow’s hierarchy of needs, should be a priority goal.
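Purely as an illustration of what such an index might look like, and not a proposal from the video, here is a minimal sketch that combines survey scores for autonomy, mastery, and connection into a single 0–100 figure; the 1–7 survey scale and equal weights are assumptions.

```python
# Illustrative composite wellbeing index: a weighted average of autonomy,
# mastery, and connection scores (each on a 1-7 survey scale), rescaled to 0-100.
def wellbeing_index(autonomy: float, mastery: float, connection: float,
                    weights=(1 / 3, 1 / 3, 1 / 3),
                    scale_min=1.0, scale_max=7.0) -> float:
    scores = (autonomy, mastery, connection)
    weighted = sum(w * s for w, s in zip(weights, scores))
    return 100 * (weighted - scale_min) / (scale_max - scale_min)

# Example: moderately high autonomy and connection, lower sense of mastery.
print(round(wellbeing_index(autonomy=5.5, mastery=3.0, connection=6.0), 1))
```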
https://www.youtube.com/watch?v=9yN7885s5rA
Daniel Dennett discusses the dangers of counterfeit people created by AI. While current AI may not be perfectly human-like, it is good enough to fool many people. This could undermine trust and communication on the internet. As AI improves, it will become harder to distinguish text generated by humans versus AI. Dennett argues that adopting an intentional stance and treating AI systems as agents can help us predict and understand them, though it also makes us vulnerable to being fooled. While Dennett acknowledges that agentiveness is a continuum, he still distinguishes between counterfeit AI creations and real people.
Dennett warns of the dangers of “counterfeit people” created by advanced AI systems that can manipulate and deceive humans. This could undermine trust and damage human connection.
As AI text generation becomes increasingly indistinguishable from human writing, it will be difficult to determine whether text was written by a human or an AI. This could erode human trust.
Dennett advocates a naturalistic and materialistic approach to understanding the mind and consciousness.
Dennett argues that meaning, truth, and mental states emerge gradually through evolution and interaction, not as inherent properties of systems.
Dennett believes that adopting an “intentional stance” and treating systems as agents with beliefs and desires can help us predict and understand their behavior, even when those systems lack true mentality.
Dennett rejects the idea that true understanding requires human-like consciousness, arguing that we can attribute mental states even to simple systems to varying degrees.
While Dennett acknowledges that AI systems can exhibit some degree of “agentiveness”, he argues they are still “counterfeit” compared to real humans.
Dennett is skeptical of the “singularity” idea that superintelligent AI poses an existential threat to humanity, arguing consciousness and intelligence exist on a continuum.
Dennett believes we can in principle understand what it’s like to have the experiences of other minds through sufficient conceptual advances.
Dennett distinguishes between our inability to conceive of other minds and the possibility that they truly exist in a form we cannot comprehend.
https://www.youtube.com/watch?v=axJtywd9Tbo
AI has the potential to automate and replace many jobs, especially creative and journalistic roles. This threatens livelihoods and could disproportionately impact marginalized groups.
AI systems are prone to replicating and exacerbating existing human biases. They also struggle with nuance, empathy, and emotional intelligence.
Companies often overestimate AI’s capabilities and underestimate its limitations. Experts warn of potential dangers, but businesses prioritize profits.
The use of AI to automate tasks can be inefficient and lead to worse customer experiences. Companies often fail to consult workers before implementing AI systems.
AI relies on scraping and using human-created content without proper compensation or acknowledgment, especially artists’ work.
The mental health crisis among students is fueling their use of AI to cheat. The education system also needs to adapt to make better use of AI.
“Ghost work” and exploitation of underpaid workers enables the development of AI systems. There are few labor protections for AI-related jobs.
The true dystopia may be higher levels of exploitation and desperation as people are forced to keep working because they still need jobs to survive.
A utopian vision for AI would involve using it to support and augment human creativity, with work becoming more meaningful and fair.
Universal basic income could give people the freedom to pursue work they find satisfying, while ensuring access to basic necessities.
https://www.youtube.com/watch?v=MywLhUZXhUY