I didn’t say it was the only effective funding mechanism. I didn’t say it was the best. Please respond to the argument I actually made.
“Modern-day best-practices industrial engineering works pretty well at its stated goals, and motivates theoretical progress as a result of subgoals” is not a particularly controversial claim. If you think there’s a way to do more with less, or somehow immunize the market for pure research against adverse selection due to frauds and crackpots, feel free to prove it.
if you want research, buy research
Focusing money too closely on the research itself runs the risk that you’ll end up paying for a lot of hot air dressed up to look like research. Cool-but-useless real-world applications are the costly signalling mechanism which demonstrates an underlying theory’s validity to nonspecialists. You can’t fly to the moon by tacking more and more epicycles onto the crystalline-sphere theory of celestial mechanics.
why did evolution build us to place fundamental emotional and normative value on conforming to what any rational selfish agent will figure out?
If you’re running some calculation involving a lot of logarithms, and portable electronics haven’t been invented yet, would you rather take a week to derive the exact answer with an abacus, plus another three weeks hunting down a boneheaded sign error, or spend ten seconds getting the first two or three decimal places on a slide rule?
Rational selfishness is expensive to set up, expensive to run, and can break down catastrophically at the worst possible times. Evolution tends to prefer error-tolerant systems.
problems other than math and science problems
No such thing.
For any given problem, once a possible solution is reached, do you expect to be able to check that solution against reality with further observations? If so, you have constructed a theory with experimental implications, and are doing Science. If not, you have derived the truth, falsehood, or invalidity of a particular statement from a core set of axioms, and are doing Math.
Would it have fit into less space than the set of possible programs for the Z80?
What about Honda?
For the play money iterations, that assumption would not hold.
Why not? People can get pretty competitive even when there’s nothing really at stake, and current-iteration play money is a proxy for future-iteration real money.
Thank you! If I were the other clone and heard that I was about to play a game of PD which would have no consequences for anyone except the other player, who was also me, that would distort my incentives.
fighting the hypothetical
It’s established in the problem statement that the experimenter is going to destroy or falsify all records of what transpired during the game, including the fact that a game even took place, presumably to rule out cooperation motivated by reputational effects. If you want a perfectly honest and trustworthy experimenter, establish that axiomatically, or at least don’t establish anything that directly contradicts it.
Assuming that the other party is a clone with identical starting mind-state makes it a much more tractable problem. I don’t have much idea how perfect reasoners behave; I’ve never met one.
Would it be a valid instructional technique to give someone (particularly someone congenitally incapable of learning any other way) the opportunity to try out a few iterations of the ‘game’ Omega is offering, with clearly denominated but strategically worthless play money in place of the actual rewards?
You find yourself in a PD against a perfect copy of yourself. At the end of the game, I will remove the money your clone wins, destroy all records of what you did, re-merge you with your clone, erase both our memories of the process, and let you keep the money that you won (you will think it is just a gift to recompense you for sleeping in my lab for a few hours). You had not previously considered this situation possible, and had made no precommitments about what to do in such a scenario. What do you think you should do?
Given that you’re going to erase my memory of this conversation and burn a lot of other records afterward, it’s entirely possible that you’re lying about whether it’s me or the other me whose payout ‘actually counts.’ Makes no difference to you either way, right? We all look the same, and telling us different stories about the upcoming game would break the assumption of symmetry. Effectively, I’m playing a game of PD followed by a special step in which you flip a fair coin and, on heads, swap my reward with that of the other player.
So I’d optimize for the combined reward to myself and my clone, which, for the usual PD payoff matrix, means cooperating. If the reward for defecting when the other player cooperates is going to be worth drastically more to my postgame gestalt, enough that I’d accept a 25%-or-less chance of that payout in trade for virtual certainty of the mutual-cooperation payout, I would instead behave randomly.
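To make that threshold concrete, here’s a quick back-of-the-envelope script. The payoff numbers are made up for illustration, and I’m assuming “behave randomly” means two independent fair coins: since the experimenter’s coin flip swaps rewards half the time, my expected take is half the combined pot, so randomizing only beats mirrored cooperation when the temptation payoff is several times the mutual-cooperation payoff.

```python
# Hypothetical PD payoff matrix (placeholder numbers, not from the setup):
# T = temptation, R = mutual cooperation, P = mutual defection, S = sucker.
T, R, P, S = 10.0, 2.0, 1.0, 0.0

def my_expected_take(my_payoff, clone_payoff):
    # The coin flip swaps rewards half the time, so my expected take is
    # the average of the two payoffs, i.e. half the combined pot.
    return 0.5 * (my_payoff + clone_payoff)

# If I cooperate, my identical clone cooperates too: certain mutual cooperation.
ev_cooperate = my_expected_take(R, R)

# Assuming each of us flips an independent fair coin, the four outcomes
# (CC, CD, DC, DD) each happen with probability 0.25.
ev_randomize = 0.25 * (my_expected_take(R, R)
                       + my_expected_take(T, S)
                       + my_expected_take(S, T)
                       + my_expected_take(P, P))

print(f"cooperate: {ev_cooperate:.2f}  randomize: {ev_randomize:.2f}")
# With T=10 the randomizer wins (3.25 vs 2.00); with a standard matrix
# like T=5, R=3, P=1, S=0 it loses (2.25 vs 3.00), so cooperation stands.
```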
Kelp and fish can be farmed.
A useful concept here (which I picked up from a pro player of Magic: The Gathering, but exists in many other environments) is “board state.” A lot of the research I’ve seen in game theory deals with very simple games, only a handful of decision-points followed by a payout. How much research has there been about games where there are variables (like capital investments, or troop positions, or land which can be sown with different plants or left fallow), which can be manipulated by the players and whose values affect the relative payoffs of different strategies?
Altruism can be more than just directly aiding someone you personally like; there’s also manipulating the environment to favor your preferred strategy in the long term, which costs you resources in the short term but benefits everyone who uses the same strategy as you, including your natural allies.
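To illustrate what I mean by board state, here’s a toy sketch. Everything in it is invented for the example (it’s a single-player plan comparison rather than a full game): a persistent capital variable that one action builds up at a short-term cost and another action cashes in, so the relative value of the two strategies depends on the state rather than on a fixed payoff matrix.

```python
# Toy illustration of "board state": a persistent variable (capital) that
# actions change, and which changes what later actions are worth.
from dataclasses import dataclass

@dataclass
class BoardState:
    capital: float = 0.0  # manipulable state carried between turns

def payoff(action: str, state: BoardState) -> float:
    """One turn's payout, depending on the current board state."""
    if action == "invest":
        state.capital += 2.0          # short-term cost, raises future harvests
        return -1.0
    if action == "harvest":
        return 2.0 + state.capital    # worth more the more has been invested
    raise ValueError(action)

# Compare two fixed three-turn plans: which one is better depends on how
# the state evolves, not on a single static payoff matrix.
for plan in (["harvest", "harvest", "harvest"],
             ["invest", "harvest", "harvest"]):
    state, total = BoardState(), 0.0
    for action in plan:
        total += payoff(action, state)
    print(plan, "->", total)
# harvest-only scores 6.0; invest-first scores 7.0 under these numbers.
```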
The king did, however, count on the Jester’s assumption that the content of the boxes could be deduced from the inscriptions.
What’s your point? I’ve already acknowledged that this metric doesn’t return equally low values for all inanimate objects, and it seems a bit more common (in new-agey circles at least) to ascribe intelligence to crystals or rivers than to puffs of hot gas, so in that regard it’s better calibrated to human intuition than Integrated Information Theory.
Again, I think you’re misunderstanding. The metric I’m proposing doesn’t measure how well those self-maintenance systems work, only how many of them there are.
Yes, of course we’re only really interested in some aspects of self-maintenance. Let’s start by counting how many aspects there are, and start categorizing once that first step has produced some hard numbers.
Even if it’s useless for philosophy of consciousness, some generalized scale of “how self-maintaining is this thing” might be a handy tool for engineers. That’s the difference between a safe, mostly passive expert system and a world-devouring paperclip maximizer, isn’t it? Google Maps doesn’t try to reach out and eliminate potential threats on its own initiative.
What is it that makes consciousness, or the thing that it points to (if such a thing is not ephemeral), important?
I am not in a position to speculate as to why consciousness, or the underlying referent thereto, is so widely considered important; I simply observe that it is. Similarly, I wouldn’t feel qualified to say why a human life has value, but for policy purposes, somebody out there needs to figure out how many million dollars of value a statistical human life is equivalent to. Might as well poke at the math of that, maybe make it a little more rigorous and generalized.
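For example, the usual willingness-to-pay arithmetic behind the value of a statistical life, with hypothetical numbers:

```python
# Standard willingness-to-pay arithmetic for a statistical life, with
# made-up numbers: if each person will pay $100 for a measure that cuts
# their annual risk of death by 1 in 100,000, the implied value of one
# statistical life is the payment divided by the risk reduction.
willingness_to_pay = 100.0        # dollars per person (hypothetical)
risk_reduction = 1.0 / 100_000    # deaths averted per person (hypothetical)

vsl = willingness_to_pay / risk_reduction
print(f"implied value of a statistical life: ${vsl:,.0f}")  # $10,000,000
```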
‘Breakthroughs and basic science’ seem to be running into diminishing returns lately. As a policy matter, I think we (human civilization) should focus more on applying what we already know about the basics to do what we’re already doing more efficiently.