I am Andrew Hyer, currently living in New Jersey and working in New York (in the finance industry).
I am not much of an economist, but the two thoughts that spring to mind:
The change you want to see, of people not needing to do as much work, is in fact happening (even if not as fast as you might like). The first clean chart I could find for US data was here, showing a gradual fall since 1950 from ~2k hours/year to ~1760 hours/year worked. This may actually understate the amount of reduction in poverty-in-the-sense-of-needing-to-work-hard-at-an-unpleasant-job:
I think there has also been a trend towards these jobs being much nicer. The fact that what you’re referring to as a ‘miserable condition’ is working a retail job where customers sometimes yell at you, rather than working in the coal mines and getting black lung, is a substantial improvement!
I think there has also been a trend towards the longest-hours-worked being for wealthier people rather than poorer people. “Banker’s hours” used to be an unusually short workday, which the wealthy bankers could get away with—while bankers still have a lot more money than poor people, I think there’s been a substantial shift in who works longer hours.
The change you want to see, viewed through the right lens, is actually somewhat depressing. I would phrase what you are looking for as a world where society has nothing to offer people that is nice enough they are willing to work an unpleasant job to produce it.
If you have the choice between ‘work long hours to get enough food to live’ or ‘work short hours and starve’, it makes sense to call that ‘poverty’. If you have the choice between ‘work long hours to be able to have a smartphone, internet, and cable TV’ or ‘work short hours, still have shelter, clothing and food, but not have as much nice stuff’, I would call that ‘work is producing nice enough stuff that people are willing to do the work to produce it’.
On your definition of ‘poverty’, Disneyland makes the world poorer. Every time someone takes on extra hours at work so they can take their kids to Disneyland, you account the unpleasant overtime work as an increase in poverty, and do not account the Disneyland trip on the other side of the ledger. This seems wrong.
D&D.Sci Scenario Index
Yeah, I have no idea. It would be much clearer if the contracts themselves were available. Obviously the incentive of the plaintiffs is to make this sound as serious as possible, and obviously the incentive of OpenAI is to make it sound as innocuous as possible. I don’t feel highly confident without more information, my gut is leaning towards ‘opportunistic plaintiffs hoping for a cut of one of the standard SEC settlements’ but I could easily be wrong.
EDITED TO ADD: On re-reading the letter, I’m not clear where the word ‘criminal’ even came from. The WaPo article claims
These agreements threatened employees with criminal prosecutions if they reported violations of law to federal authorities under trade secret laws, Kohn said.
but the letter does not contain the word ‘criminal’, its allegations are:
Non-disparagement clauses that failed to exempt disclosures of securities violations to the SEC;
Requiring prior consent from the company to disclose confidential information to federal authorities;
Confidentiality requirements with respect to agreements, that themselves contain securities violations;
Requiring employees to waive compensation that was intended by Congress to incentivize reporting and provide financial relief to whistleblowers.
‘Securities and Exchange Commission’ is like ‘Food and Drug Administration’: the FDA has authority over both food and drugs, not the intersection, and the SEC has authority over off-exchange securities.[1]
This authority tends to de facto extend to a fair level of general authority over the conduct of any company that issues a security (i.e. almost all of them). Matt Levine[2] calls this the ‘everything is securities fraud’ theory, since the theory “your company did Bad Thing X, it didn’t disclose Bad Thing X, some people invested in your company not knowing about Bad Thing X, then Bad Thing X came out and now your company is less valuable, victimizing the poor people who invested in you” has been applied in a rather large number of cases to penalize companies for a wide variety of conduct.
- ^
Some caveats may apply e.g. commodities exchanges are regulated by the CFTC. The SEC also probably cares a lot more about fraud in publicly traded companies, since they are less likely to be owned by sophisticated investors who can defend themselves against fraud and more likely to be owned by a large number of random people who can’t. I am not a lawyer, though. Get a real lawyer before making SEC jurisdictional arguments.
- ^
Levine is very, very much worth reading for sensible and often-amusing coverage of a wide variety of finance-adjacent topics.
- ^
‘A security’ is a much broader concept than ‘a publicly traded security’.
For example, if you’ve been following the SEC’s attempts to crack down on crypto, they are based on the SEC’s view that almost all cryptocurrencies, NFTs, stablecoins, etc. are ‘securities’. Whether you agree with them on that or not, the law broadly tends to back them in this.
This is no longer a question of ‘the SEC goes around fining firms whose confidentiality clauses fail to explicitly exempt statements to the SEC,’ which is totally a thing the SEC does, Matt Levine describes the trade as getting your employment contract, circling the confidentiality clause in red with the annotation “$” and sending it in as a whistleblower complaint. And yes, you get fined for that, but it’s more than a little ticky-tacky.
This is different. This is explicitly saying no to whistleblowing. That is not legal.
Are you sure about this interpretation?
(DISCLAIMER: I am not a lawyer at all, etc, etc.)
This came up on LW recently, and I didn’t find the letter convincing as being the second rather than the first situation.
Remember, the whole legal theory behind the SEC’s cases is that a clause like this:
I, EMPLOYEE, will not disclose EMPLOYER’S confidential information to anyone without EMPLOYER’S permission. If I do, I understand that EMPLOYER may pursue civil and criminal penalties, and if I receive compensation for disclosing such information I understand that EMPLOYER may recover said compensation from me in addition to any legal penalties.
if it doesn’t contain an explicit carveout for the SEC, is itself ‘a threat of criminal prosecution for whistleblowing’ and ‘a requirement to give up whistleblowing payments’.
If OpenAI’s NDAs actually contain a clause like ‘I will not tell the SEC anything, and if I do I may not receive whistleblower payments’, I agree that would be very bad, much worse than a failure-to-exempt-the-SEC problem.
But I think the letter sounds much more like the usual failure-to-exempt-the-SEC. Is there something I’m missing here?
Not a lawyer, but I think those are the same thing.
The SEC’s legal theory is that “non-disparagement clauses that failed to exempt disclosures of securities violations to the SEC” and “threats of prosecution if you report violations of law to federal authorities” are the same thing, and on reading the letter I can’t find any wrongdoing alleged or any investigation requested outside of issues with “OpenAI’s employment, severance, non-disparagement and non-disclosure agreements”.
Matt Levine is worth reading on this subject (also on many others).
The SEC has a history of taking aggressive positions on what an NDA can say (if your NDA does not explicitly have a carveout for ‘you can still say anything you want to the SEC’, they will argue that you’re trying to stop whistleblowers from talking to the SEC) and a reliable tendency to extract large fines and give a chunk of them to the whistleblowers.
This news might be better modeled as ‘OpenAI thought it was a Silicon Valley company, and tried to implement a Silicon Valley NDA, without consulting the kind of lawyers a finance company would have used for the past few years.’
(To be clear, this news might also be OpenAI having been doing something sinister. I have no evidence against that, and certainly they’ve done shady stuff before. But I don’t think this news is strong evidence of shadiness on its own).
(And I wonder which ghost your great-uncle is...perhaps we can get away with sending no exorcist at all to that one?)
Things about the dataset:
Each ghost statistic has a bimodal distribution, with one peak ~70 for ‘high’ stats and one ~30 for ‘low’ stats.
High stats correlate with other high stats: many ghosts have either all stats high or all stats low. This suggests a distinction between e.g. ‘Major’ spirits (which tend to have all stats high, but sometimes have a few low) and ‘Minor’ spirits (vice versa).
Sliminess seems to be the stat most correlated with major/minor-ness: almost all Major spirits have high Sliminess, and almost all Minor spirits have low Sliminess. Hostility is the least correlated: Hostile Minor spirits, or non-Hostile Major spirits, both happen relatively often.
However, I haven’t yet been able to come up with anything clever to do with this, and ended up mostly just using a linear regression.
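The regression approach above can be sketched roughly as follows. This is purely illustrative: the stat names match the scenario, but the data is synthetic (stats drawn from the observed ~30/~70 bimodal peaks) and the pricing rule is made up, since the real dataset isn’t reproduced here.

```python
# Hypothetical sketch: for one exorcist, regress price on each ghost stat
# separately to see which stat drives their fee. All numbers are made up.
import random

random.seed(0)

def simple_slope(xs, ys):
    """Least-squares slope of ys on xs: cov(x, y) / var(x)."""
    n = len(xs)
    mx = sum(xs) / n
    my = sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    var = sum((x - mx) ** 2 for x in xs)
    return cov / var

def fake_ghost():
    """A ghost is Major or Minor; each stat matches its type ~80% of the time,
    drawn from a 'high' peak near 70 or a 'low' peak near 30."""
    major = random.random() < 0.5
    def stat():
        high = major if random.random() < 0.8 else not major
        return random.gauss(70 if high else 30, 5)
    return {"SLIMINESS": stat(), "INTELLECT": stat(), "HOSTILITY": stat()}

ghosts = [fake_ghost() for _ in range(500)]
# Pretend this exorcist charges mostly based on Sliminess, plus noise.
prices = [10 * g["SLIMINESS"] + 1 * g["INTELLECT"] + random.gauss(0, 20)
          for g in ghosts]

for stat_name in ("SLIMINESS", "INTELLECT", "HOSTILITY"):
    xs = [g[stat_name] for g in ghosts]
    print(stat_name, round(simple_slope(xs, prices), 1))
```

Note that because high stats correlate with each other, even the stats this fake exorcist doesn’t care about will show a nonzero marginal slope; a multivariate regression would disentangle that.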
Results of my analysis:

Most exorcists have one particular ghost stat that seems to primarily govern the difficulty they face:
The Phantom Pummelers really do not like Sliminess.
The Spectre Slayers really do not like Intellect.
The Wraith Wranglers really do not like Hostility.
The Demon Destroyers really do not like Grotesqueness (and also do better with low Hostility).
while some behave differently:
The Entity Eliminators seem to dislike all stats, especially Sliminess: perhaps they have a hard time with Major spirits and a relatively easy time with Minor ones?
The Mundanifying Mystics have a very high base rate, but actually charge slightly less as each stat goes up—they are expensive in general, and get extra annoyed when you waste their time with Minor spirits?
We handle the idiosyncrasies of hiring the various exorcists:

Paying the Demon Destroyers to come seems worth it: they might actually save us 400sp in expectation just on spirit W alone.
The Spectre Slayers seem more valuable than the Entity Eliminators. While the Eliminators are all-around okay at minor spirits, our knowledge of who is good against which stats means we can always pick out a better exorcist to use; the Spectre Slayers, meanwhile, are a uniquely good bet for spirits like S that have very low INT but high other stats.
There are exactly three spirits where I think the Pummelers save us money (N, U, and a little bit on H), so we don’t need to fret about that constraint.
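The final assignment step can be sketched as picking the cheapest exorcist per spirit from a table of predicted fees, subject to a cap on how many jobs one firm can take. Everything here is illustrative: the firm names are abbreviated, the fee numbers and the cap are invented, and the real constraints differ.

```python
# Hedged sketch: greedy assignment of spirits to exorcists from a
# (hypothetical) predicted-cost table. All numbers are made up.
predicted_cost = {  # spirit -> {exorcist: predicted fee in sp}
    "A": {"Slayers": 310, "Wranglers": 420, "Mystics": 380},
    "B": {"Slayers": 500, "Wranglers": 290, "Mystics": 380},
    "C": {"Slayers": 450, "Wranglers": 440, "Mystics": 360},
}
MAX_JOBS = {"Slayers": 2, "Wranglers": 2, "Mystics": 2}  # invented cap

def urgency(spirit):
    """Gap between best and second-best fee: big gap = assign this one first."""
    fees = sorted(predicted_cost[spirit].values())
    return fees[1] - fees[0]

assignment = {}
jobs_taken = {e: 0 for e in MAX_JOBS}
for spirit in sorted(predicted_cost, key=urgency, reverse=True):
    # Cheapest available exorcist that still has capacity.
    for exorcist, fee in sorted(predicted_cost[spirit].items(),
                                key=lambda kv: kv[1]):
        if jobs_taken[exorcist] < MAX_JOBS[exorcist]:
            assignment[spirit] = exorcist
            jobs_taken[exorcist] += 1
            break

print(assignment)
```

A greedy pass like this is only a heuristic; with binding capacity constraints a proper min-cost assignment would do better, but when the constraints don’t bind (as with the Pummelers here) simple per-spirit argmin suffices.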
And we end up assigning (unless I find something else to do and change this):

A: Spectre Slayers
B: Wraith Wranglers
C: Mundanifying Mystics
D: Demon Destroyers
E: Wraith Wranglers
F: Mundanifying Mystics
G: Demon Destroyers
H: Phantom Pummelers
I: Wraith Wranglers
J: Demon Destroyers
K: Mundanifying Mystics
L: Mundanifying Mystics
M: Spectre Slayers
N: Phantom Pummelers
O: Wraith Wranglers
P: Mundanifying Mystics
Q: Wraith Wranglers
R: Mundanifying Mystics
S: Spectre Slayers
T: Mundanifying Mystics
U: Phantom Pummelers
V: Demon Destroyers
W: Demon Destroyers
Edit after seeing simon’s answer:
We appear to have done pretty much the exact same things—identified the major/minor spirit distinction, not found anything to do with it, just fed the stats into a linear regression—and gotten the exact same answer.
I think FTX is somewhat relevant to my #4 (de facto cost of loaning money to prediction markets may be quite high), and to the comment thread around here about how accessible/usable various ways of working around US law to bet on prediction markets are. I don’t think it changes my opinion very substantially.
While I think these arguments are sufficient on their own (and am unimpressed with Michael’s arguments), there is one I think is missing:
‘Guilty’ can encapsulate a wide range of verdicts for a wide variety of crimes, and even a client who is guilty of something is not necessarily guilty of everything or deserving of maximal punishment.
A client who is unambiguously guilty of manslaughter can still deserve representation to defend them against a charge of murder.
This is also an area that requires expert support: most non-lawyers treat the phrases ‘robbing a house’ and ‘burgling a house’ interchangeably, but the first is a much more serious crime. Even ‘how should I address the judge to not offend him unnecessarily and worsen my sentence’ is something that a lawyer can legitimately help even a totally guilty client with.
I’m glad you liked it, thank you!
This entry was billed as “relatively simple”, but I think it was about median difficulty by the standards of D&D.Sci; pretty sure it was harder than (for example) The Sorceror’s Personal Shopper.
I guess that’s fair. There’s a complication here in that...uh...almost all of my scenarios have been above-median complexity and almost all of yours have been below-median. (I should probably write down my thoughts on this at some point). I agree that this one wasn’t simpler than most of yours, but I think that it was still a much more approachable entry point than e.g. Duels & D.Sci, or League of Defenders.
(It’s possible we should try to standardize a 1-10 complexity scale or some such so that we can stick a difficulty rating on the top of each scenario.)
“STORY (skippable)” was kind of misleading this time
Fair enough, I can tweak that for anyone who finds the scenario in future.
I intended that the story should not provide much help: the intent was not for players to notice that Anachronos was suspicious in-story, but for them to notice from the data, with the hints in the story serving as quiet confirmation for a player who realized the twist from the data and then went back to reread the story.
On the other hand, I was expecting more players to get the twist, and thought that I’d only really catch players who ignored the ingredient names entirely and just fed the data into an ML algorithm, so I’m clearly not very well calibrated on this. I was really quite surprised by how many players analyzed the data well enough to say “Barkskin potion requires Crushed Onyx and Ground Bone, Necromantic Power Potion requires Beech Bark and Oaken Twigs” and then went on to say “this sounds reasonable, I have no further questions.” (Maybe the onyx-necromancy connection is more D&D lore than most players knew? But I thought that the bone-necromancy and bark-barkskin connections would be obvious even without that).
“Archmage Anachronos is trying to brew Barkskin Potion” was A) the GM saying something false directly to the players
I...think I’m in general allowed to say false things directly to the players as a D&D GM? If the Big Bad is disguised as your innkeeper while the real innkeeper is tied up in the cellar, I think I can say ‘The innkeeper tells you it’ll be six silver for a room’; I don’t think I need to say ‘The man who introduced himself to you as the innkeeper.’
(Also, you are a Data Scientist. Sense Motive is not a class skill for you. Clearly you failed a Sense Motive check and so believed him!)
...I’ll think about whether I want to tweak that line for potential future players.
D&D.Sci Alchemy: Archmage Anachronos and the Supply Chain Issues Evaluation & Ruleset
Are you sure you are reading the dataset correctly? In particular, which row number do you think shows him brewing together Crushed Onyx, Redwood Sap, and Vampire Fang to yield a Barkskin Potion? I suspect you may be looking at row 118078 (or 118079 if you include the header row), in which he did brew a Barkskin Potion and did use those three ingredients—but he also used a Giant’s Toe, Ground Bone, and Oaken Twigs. Are you seeing something different in the dataset?
EDIT: Never mind, looks like you caught this and edited in your comment. Sorry for the bother, just wanted to make sure I hadn’t screwed up the upload in some way.
D&D.Sci Alchemy: Archmage Anachronos and the Supply Chain Issues
I would accept the position ‘this question is not well-defined’. However, I don’t think I accept the position ‘actually an electron is bigger once we define things this way’.
(For one thing, I think that definition may imply that an electron is bigger than me?)
Also, I think this overall argument is a nitpick that is not particularly relevant to Scott’s article, unless you think that a large percentage of the respondents to that survey were quantum physicists.
The question is not comparing electrons to protons or neutrons, or even to atomic nuclei (in which case the electron has less mass but spread over a wider area, and you’re right that the answer seems likely to be that the electron is bigger).
It is comparing electrons to atoms, which contain multiple electrons as well as protons and neutrons.
Seconded. I feel like much more of what I’ve seen before has taken the form of “no, we’re not trying to target AI with ad-hoc changes to liability law/copyright, we’re just trying to consistently apply the rules that already apply to people,” which is rather in tension with this section.
The relevant figure wouldn’t be the current value so much as its derivative: I don’t know how that situation has changed over time, and haven’t put in the effort to dig up information on what that data looked like in 1950.