Hi all, new here.
I recently came across LessWrong (through ChatGPT—sorry...) while looking for places to have interesting and deeply intellectual conversations. I’ve been reading through some of the posts here and the guides to get a sense of how things work, and it seems like this might be the place I was looking for.
To be honest I’m more psychologically minded than anything else; interested in how people form beliefs, the breakdown of reasoning, how biases form and stick, etc. I’m fortunate in that I’ve had a lot of exposure to academia from a pretty early age so I’ve kind of grown up with behavioral economics and skepticism and whatnot, but I’m hoping to get the opportunity to have discussions in more varied contexts.
I’m also curious to discuss where people see the link between psychology and AI. Intuitively it feels to me like there should be a lot of overlap between understanding human reasoning and building and interpreting AI systems (by AI I generally mean LLMs, but not exclusively).
I’m still new to the site so mostly trying to get a better feel, but wanted to say hi.
If there are any posts anyone thinks are especially good about these topics, I’d love to read them.
I haven’t had much exposure to this discussion so I might be missing something basic, but I am somewhat confused as to what would actually count as evidence here.
It seems that if someone exhibits Allais- or Ellsberg-type behaviour, we can say either: “they’re violating independence” or “the outcomes weren’t specified richly enough.”
With this in mind, are there any possible patterns of choices that would clearly count as a real violation, rather than just something that can be explained away by redefining outcomes?
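To make the question concrete, here’s the standard Allais pattern as I understand it. People tend to choose A over B and D over C:

$$
\begin{aligned}
A&: \$1\text{M with certainty} \\
B&: 10\%\ \$5\text{M},\quad 89\%\ \$1\text{M},\quad 1\%\ \$0 \\
C&: 11\%\ \$1\text{M},\quad 89\%\ \$0 \\
D&: 10\%\ \$5\text{M},\quad 90\%\ \$0
\end{aligned}
$$

Under expected utility, $A \succ B$ implies $0.11\,u(\$1\text{M}) > 0.10\,u(\$5\text{M}) + 0.01\,u(\$0)$, while $D \succ C$ implies exactly the reverse inequality, so the pair is inconsistent. But if you enrich the outcome space, e.g. treat “got \$0 after turning down a sure \$1M” as a distinct (worse) outcome than plain \$0, the pattern becomes consistent again, which is the escape route I’m asking about.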