Please ask any questions! We are more than happy to clarify our work and explore potential avenues to improve it.
We believe that the lack of actionable ways not only to understand model behavior but to effectively steer it toward alignment is one of the most overlooked open problems in safety research today.
Arch223
A Rational Proposal
Alignment may be localized: a short (albeit limited) experiment
Interpretability is the best path to alignment
Steering Vectors Can Help LLM Judges Detect Subtle Dishonesty
Arch223's Shortform
A new version of rationalism is required as a counterweight to both the traditional doomers and the accelerationists.
Public perception of technology, culture, and ideas can no longer be split neatly between revolutionaries and conservatives; those lines have blurred in recent years.
The clearest example is the development of AI: the optimal path forward is neither one in which unrestricted development leaves us ruled by a superior race of AI overlords, nor one in which the technology becomes concentrated in the hands of the powers that be.
Extremely late, but I actually agree.
I wonder to what extent alignment faking is accounted for in current preparedness frameworks. I believe that better interpretability can help us understand why models engage in such behavior, but yes, it probably does not get us to a solution (so far).