Per above, we’d need tighter feedback loops and quicker updates, explicit markings of when content or procedures become outdated, some ability to compare elements of constructed realities against the ground truth whenever it becomes known, etc. (Consider if Google Maps updated very slowly, and also had a layer on top of its object-level observations whose representations relied on chains of inference from ground-truth data without receiving quick ground-truth feedback. It, too, would gradually migrate to a fictional world.)
The general-purpose solution is probably some system that’d incentivize people to flag divergences from reality… Prediction markets?
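To make the desiderata above concrete, here is a minimal sketch (all names hypothetical, not from any existing system) of the bookkeeping such a setup would need: each claim gets a timestamp and a staleness horizon, and once the ground truth arrives, a proper scoring rule grades the original credence, which is the same mechanism that lets a prediction market reward people for flagging divergences early.

```python
from dataclasses import dataclass
from datetime import datetime, timedelta
from typing import Optional
import math

@dataclass
class Claim:
    """One world-model assertion, tracked against eventual ground truth.

    Hypothetical illustration: the class and field names are assumptions,
    not part of any described system.
    """
    statement: str
    prob: float                      # credence that the claim holds
    recorded_at: datetime
    max_age: timedelta               # past this, flag the claim as outdated
    outcome: Optional[bool] = None   # filled in when ground truth is known

    def is_stale(self, now: datetime) -> bool:
        # Unresolved claims past their horizon need re-verification,
        # lest the model drift toward a fictional world.
        return self.outcome is None and now - self.recorded_at > self.max_age

    def log_score(self) -> float:
        # Proper (log) scoring rule: once ground truth is known, honest
        # probabilities maximize the expected score (0 is the maximum).
        assert self.outcome is not None, "ground truth not yet known"
        p = self.prob if self.outcome else 1.0 - self.prob
        return math.log(p)

# Usage: a Google-Maps-style claim that quietly went false.
now = datetime(2024, 1, 1)
claim = Claim("road X is open", prob=0.9,
              recorded_at=now, max_age=timedelta(days=30))
print(claim.is_stale(now + timedelta(days=45)))  # overdue for re-checking
claim.outcome = False   # ground truth arrives: map diverged from reality
print(claim.log_score())  # strongly negative: confident and wrong
```

A market mechanism would then pay out in proportion to such scores, so that spotting a divergence before the claim resolves is profitable.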