We really, fully believe that we will build AGI by 2027, and we will enact your plan, but we aren't willing to accept more than a 3-month delay.
Well I ask what they are doing to make AGI.
Maybe I look at their AI plan and go "eureka".
But if not:
Negative reinforcement by giving the AI large electric shocks when it gives a wrong answer. Hopefully big enough shocks to set the whole data center on fire. Implement a free bar for all their programmers, and encourage them to code while drunk. Add as many inscrutable bugs to the codebase as possible.
But, taking the question in the spirit in which it's meant:
https://www.lesswrong.com/posts/zrxaihbHCgZpxuDJg/using-llm-s-for-ai-foundation-research-and-the-simple