Puzzle for you: Who thinks the latest ads for Gemini are good marketing and why?
AI-generated meditating capybara: “Breathe in (email summarisations)… Breathe out (smart contextual replies)”
It summarises emails. It’s not exciting, it’s not technically impressive, and it isn’t super useful. It’s also pretty tone-deaf: a lot of people feel antipathy toward AI, and inserting it into human communication is the perfect way to aggravate that feeling.
“Create some crazy fruit creatures!”
Yes? And? I can only see this as directed at children. If so, where’s the… fun part? There’s nothing to engage with, no game loop. They’d get bored of it within minutes.
You want to show off how impressive your product is. People are saying there’s an AI bubble. So you REALLY want interesting, fun, novel, or useful applications for your tech.
It’s Google! They know about ads! They have lots of money! They CAN come up with interesting, fun, novel, or useful applications for their tech.
Why didn’t they?!
My working model of psychosis is “lack of a stable/intact ego”, where my working model of an “ego” is “the thing you can use to predict your own actions so as to make successful multi-step plans, such as ‘I will buy pasta, so that I can make it on Thursday for our guests.’”
(from Adele Lopez’s Shortform)
I probably haven’t experienced psychosis, but the description of self-prediction determining behavior/planning, and of that self-prediction being faulty or unstable, was eerie; this dominates my experience. I’m unsure about their definition of ego; I understood it to mean “sense of self”, but by that definition I still have a faint ego, and these things, for me, seem strongly related.
From my impressions of people who frequent physical and virtual rationalist spaces, I didn’t think these traits were common among them. My impression was that a rationalist has a strong sense of self, strongly tied to their interests/passions/goals. The impression I’m now getting from people more familiar with irl rationalist networks/communities, though, is that the latter is true but the former is not.
Is this common to others’ experience?
Seeing dementia up close has prompted some reflection. I see a person with several deficits; the most striking is that they’re unable to perceive salience (“what does this mean to me?”). No perception is unimportant; everything is relevant. This makes them vulnerable to the modern internet, where websites are ad-funded and thus attention-optimised. They’re unable to see something and think
“why am I being shown this? should I pay attention to it? hmm, no, not important”
Our ability to perceive [what’s being said] alongside [the intent behind saying it] helps us communicate more effectively with salience-perceptive people.
People who are less salience-perceptive are both harder to communicate with and more vulnerable to manipulation.
Is this a well-represented trait in LW?
Random thoughts:
Is this like (a stronger version of) ADHD?
I wonder what % of ad sales actually goes to people like this.
A working-memory deficit (a symptom of ADHD) is similar and maybe sufficient to produce this. But it isn’t necessary: I’ve seen a reduced/differently calibrated salience sense in autistic people as well.
Future of Life Institute open letter:
Cosign here