I think I can enrich LessWrong with critical views on the singularity. I have some strong arguments, and even empirical evidence, that there may be inherent complexity limits to technology and cognition which would render superintelligence infeasible (I still see UFAI as a risk nonetheless).
Do go on...