I am Andrew Hyer, currently living in New Jersey and working in New York (in the finance industry).
You can correct it in the dataset going forward, but you shouldn’t go back and correct it historically. To see why, imagine this simplified world:
In 2000, GM had revenue of $1M, and its stock was worth in total $10M. Ford had revenue of $2M, and its stock was worth in total $20M. And Enron reported fake revenue of $3M, and its stock was worth in total $30M.
In 2001, the news of Enron’s fraud came out, and Enron’s stock dropped to zero. Also, our data vendor went back and corrected its 2000 revenue down to 0.
In 2002, I propose a trading strategy based on looking at a company’s revenue. I point to our historical data, where we see GM as having been worth 10x revenue, Ford as having been worth 10x revenue, and Enron as having been worth $30M on zero revenue. I suggest that I can perform better than the market average by just basing my investing on a company’s revenue data. This would have let me invest in Ford and GM, but avoid Enron! Hooray!
Of course, this is ridiculous. Investing based on revenue data would not have let me avoid losing money on Enron. Back in 2000, I would have seen the faked revenue data and invested...and in 2001, when the fraud came out, I would have lost money like everyone else.
But, by basing my backtest on historical data that has been corrected, I am smuggling the 2001 knowledge of Enron’s fraud back into 2000 and pretending that I could have used it to avoid investing in Enron in the first place.
If you care about having accurate tracking of the corrected ‘what was Enron’s real revenue back in 2000’ number, you can store that number somewhere. But by putting it in your historical data, you’re making it look like you had access to that number in 2000. Ideally you would want to distinguish between:
2000 revenue as we knew it in 2000.
2000 revenue as we knew it in 2001.
2001 revenue as we knew it in 2001.
but this requires a more complicated (point-in-time, or ‘bitemporal’) database; a rough sketch of one is below.
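To make that concrete, here is a minimal sketch (my own illustration, not any particular vendor’s schema) of what the more complicated database looks like: every figure is stored with both the fiscal year it describes and the date we learned it, and a backtest only ever asks ‘what did we believe as of date X?’

```python
from dataclasses import dataclass
from datetime import date
from typing import Optional

@dataclass(frozen=True)
class RevenueRecord:
    company: str
    fiscal_year: int   # the year the figure describes
    revenue: int       # reported revenue, in dollars
    known_as_of: date  # the date this figure entered our database

# Toy figures from the example above; the specific dates are my own invention.
RECORDS = [
    RevenueRecord("GM",    2000, 1_000_000, date(2000, 12, 31)),
    RevenueRecord("Ford",  2000, 2_000_000, date(2000, 12, 31)),
    RevenueRecord("Enron", 2000, 3_000_000, date(2000, 12, 31)),  # the faked number
    RevenueRecord("Enron", 2000,         0, date(2001, 11, 30)),  # the later correction
]

def revenue_as_known(company: str, fiscal_year: int, as_of: date) -> Optional[int]:
    """Return the latest figure for (company, fiscal_year) that had actually
    reached us on or before `as_of` -- never a correction that arrived later."""
    visible = [r for r in RECORDS
               if r.company == company
               and r.fiscal_year == fiscal_year
               and r.known_as_of <= as_of]
    if not visible:
        return None
    return max(visible, key=lambda r: r.known_as_of).revenue

# A backtest pinned to 2000 sees the faked number, just as we did at the time:
print(revenue_as_known("Enron", 2000, date(2000, 12, 31)))  # 3000000

# A query made after the correction recovers the corrected history:
print(revenue_as_known("Enron", 2000, date(2002, 1, 1)))    # 0
```

The correction still lives in the database; it just can’t leak backwards into a backtest that is pinned to a date before we received it.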
One particularly perfidious example of this problem comes when incorrect data is ‘corrected’ to be more accurate.
A fictionalized conversation:
Data Vendor: We’ve heard that Enron falsified their revenue data. They claimed to make eleven trillion dollars last year, and we put that in our data at the time, but on closer examination their total revenue was six dollars and one Angolan Kwanza, worth one-tenth of a penny.
Me: Oh my! Thank you for letting us know.
DV: We’ve corrected Enron’s historical data in our database to reflect this upd-
Me: You what??
DV: W-we assumed that you would want corrected data...
Me: We absolutely do not want that! Do not correct it! Go back and...incorrect...the historical data immediately!
Success indeed, young Data Scientist! Archmage Anachronos thanks you for your aid, which will surely redound to the benefit of all humanity!
(hehehe)
LIES! (Edit: post did arrive, just late, accusation downgraded from LIES to EXCESSIVE OPTIMISM REGARDING TIMELINES)
MUWAHAHAHAHA! YOU FOOL!
ahem
That is to say, I’m glad to have you playing; I enjoy seeing solutions even after scenarios are finished. (And I think you’re being a bit hard on yourself; I think simon is the only one who actually independently noticed the trick.)
Petrov Day Tracker:
2019: Site did not go down
2020: Site went down deliberately
2021: Site did not go down
2022: Site went down both accidentally and deliberately
2023: Site did not go down[1]
2024: Site went down accidentally...EDIT: but not deliberately! Score is now tied at 2-2!
[1] This scenario had no take-the-site-down option.
I assume that this is primarily directed at me for this comment, but if so, I strongly disagree.
Security by obscurity does not in fact work well. I do not think it is realistic to hope that none of the ten generals look at the incentives they’ve been given and notice that their reward for nuking is 3x their penalty for being nuked. I do think it’s realistic to make sure it is common knowledge that the generals’ incentives are drastically misaligned with the citizens’ incentives, and to try to do something about that.
(Honestly I think that I disagree with almost all uses of the word ‘infohazard’ on LW. I enjoy SCP stories as much as the next LW-er, but I think that the real-world prevalence of infohazards is orders of magnitude lower).
Eeeesh. I know I’ve been calling for a reign of terror with heads on spikes and all that, but I think that seems like going a bit too far.
Yes, we’re working on aligning incentives upthread, but for some silly reason the admins don’t want us starting a reign of terror.
I have. I think that overall Les Mis is rather more favorable to revolutionaries than I am. For one thing, it wants us to ignore the fact that we know what will happen when Enjolras’s ideological successors eventually succeed, and that it will not be good.
(The fact that you’re using the word ‘watched’ makes me suspect that you may have seen the movie, which is honestly a large downgrade from the musical.)
During WWII, the OSS (the CIA’s wartime predecessor) produced and distributed an entire manual (well worth reading) about how workers could conduct deniable sabotage in the German-occupied territories.
(11) General Interference with Organizations and Production
(a) Organizations and Conferences
Insist on doing everything through “channels.” Never permit short-cuts to be taken in order to expedite decisions.
Make speeches, talk as frequently as possible and at great length. Illustrate your points by long anecdotes and accounts of personal experiences. Never hesitate to make a few appropriate patriotic comments.
When possible, refer all matters to committees, for “further study and consideration.” Attempt to make the committees as large as possible—never less than five.
Bring up irrelevant issues as frequently as possible.
Haggle over precise wordings of communications, minutes, resolutions.
Accepting a governmental monopoly on violence for the sake of avoiding anarchy is valuable to the extent that the government is performing better than anarchy. This is usually true, but stops being true when the government starts trying to start a nuclear war.
If the designers of Petrov Day are allowed to offer arbitrary 1k-karma incentives to generals to nuke people, but the citizens are not allowed to impose their own incentives, that creates an obvious power issue. Surely ‘you randomly get +1k karma for nuking people’ is a larger moderation problem than ‘you get −1k karma for angering large numbers of other users’.
No, wait, that was the wrong way to put it...
Do you hear the people sing, singing the song of angry men
It is the music of a people who will not be nuked again
The next time some generals decide to blow us off the site
They will remember what we’ve done and will fear our might!
CITIZENS! YOU ARE BETRAYED!
Your foolish ‘leaders’ have given your generals an incentive scheme that encourages them to risk you being nuked for their glory.
I call on all citizens of EastWrong and WestWrong to commit to pursuing vengeance against their generals[1] if and only if your side ends up being nuked. Only thus can we align incentives among those who bear the power of life and death!
For freedom! For prosperity! And for not being nuked!
[1] By mass-downvoting all their posts once their identities are revealed.
The best LW Petrov Day morals are the inadvertent ones. My favorite was 2022, when we learned that there is more to fear from poorly written code launching nukes by accident than from villains launching nukes deliberately. Perhaps this year we will learn something about the importance of designing reasonable prosocial incentives.
Why is the benefit of nuking to generals larger than the cost of nuking to the other side’s generals?
It is possible with precommitments under the current scheme for the two sides’ generals to agree to flip a coin, have the winning side nuke the losing side, and have the losing side not retaliate. In expectation, this gives the generals each (1000-300)/2 = +350 karma.
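As a quick sanity check on that arithmetic (the +1000 and −300 payoffs are the ones quoted above; everything else is just illustration):

```python
# Per-general payoffs as described above: +1000 karma if your side nukes,
# -300 karma if your side gets nuked; the coin flip makes each outcome
# equally likely under the proposed precommitment.
NUKE_REWARD = 1000
NUKED_PENALTY = -300

expected_karma = 0.5 * NUKE_REWARD + 0.5 * NUKED_PENALTY
print(expected_karma)  # 350.0 -- positive for the generals on both sides
```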
I don’t think that’s a realistic payoff matrix.
Your ‘accidents still happen’ link shows:
One airship accident worldwide in the past 5 years, in Brazil.
The last airship accident in the US was in 2017.
The last airship accident fatality anywhere in the world was in 2011 in Germany.
The last airship accident fatality in the US was in 1986.
I think that this compares favorably with very nearly everything.