MIRI has also done work on decision problems outside LDT's fair problem class, like the Open-Source Prisoner's Dilemma. FairBot cooperates if it can prove you cooperate, and defects otherwise. In this setting, being too hard to predict gets you defected against.
Sure—this goes to my "equal-or-better opponent" description. Any interesting real-world agent is not provably cooperative (if it is at least as complex as you), or, if it is, it's exploitable by other agents that can prove its cooperation.
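To make the "too hard to predict" failure concrete, here is a toy caricature of FairBot. This is my own sketch, not MIRI's formalism: the real FairBot runs a bounded proof search in Peano Arithmetic and cooperates with itself via Löb's theorem, whereas this version approximates "provably cooperates" by nested simulation with a recursion budget. The names `make_fairbot`, `cooperate_bot`, and `defect_bot` are all illustrative inventions.

```python
COOPERATE, DEFECT = "C", "D"

def make_fairbot(depth):
    """Bounded-simulation caricature of FairBot (assumption: the real FairBot
    uses proof search, not simulation). Cooperate iff the opponent's
    cooperation can be established within `depth` nested simulations;
    otherwise defect."""
    def fairbot(opponent):
        if depth == 0:
            # Budget exhausted: opponent is "too hard to predict", so defect.
            return DEFECT
        return COOPERATE if opponent(make_fairbot(depth - 1)) == COOPERATE else DEFECT
    return fairbot

def cooperate_bot(opponent):
    return COOPERATE  # unconditional cooperator — provably cooperative

def defect_bot(opponent):
    return DEFECT     # unconditional defector

fb = make_fairbot(3)
print(fb(cooperate_bot))   # "C" — cooperation is easy to establish
print(fb(defect_bot))      # "D" — defection is detected
print(fb(make_fairbot(3))) # "D" — the nested simulations bottom out
```

Note the last line: two bounded simulators fail to cooperate with each other because the simulation tower bottoms out at the defect default. This is exactly the gap the genuine FairBot closes with Löb's theorem, which lets mutual cooperation become provable without infinite regress.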