[Question] Research ideas (AI Interpretability & Neuroscience) for a 2-month project

My university (EPFL) organizes a “Summer in the Lab” internship for interested Bachelor students. The idea is to send them to a lab for a ~2-month period, so they can begin engaging with the research community and developing research skills. I have been selected for the internship, and the Alexander Mathis Lab accepted me. They use a mix of neuroscience and AI/​ML in their research (e.g. their most famous paper: DeepLabCut: markerless pose estimation of user-defined body parts with deep learning).

As I am interested in studying AI interpretability later in my career, I am now looking for project ideas that combine it with neuroscience. I have read a bit of research in this area, such as the Intro to Brain-Like AGI Safety post series and shard theory.

My question is: which ideas (from the theories above or any others) do you think are worth diving into for this 2-month research internship? What could help advance towards a safer future (even if it is at best a very small step)?
