bhauth
we’re chugging along gaining victory points at a fairly slow but low-variance rate
...we are? What percentage full is the victory point meter? What happens when it fills up?
maybe this will help: https://www.google.com/search?site=&source=hp&q=nylon+6+crystal+structure&udm=2
For some limits of enzymes, see these posts of mine:
this has implications for the inter-strand hydrogen bonding that gives Kevlar its strength
What? It’s the same strength. It’s not like nylon 6 being weaker than nylon 66; the aromatic links are symmetric. Why would it be weaker?
The strongest spider silk is...weaker than Kevlar by density, but it’s comparable, and much more elastic. Not bad at all. And it can be spun from aqueous solutions, while Kevlar is spun in pure sulfuric acid. You also have to consider UV resistance here.
hack it enzymatically with Diels-Alder cycloaddition
Yeah, no.
If you’re interested in progress in aramid fibers, you could look up “5-(6)-amino-2-(4-aminobenzene)benzimidazole”.
Mostly because it doesn’t work? Because the analogies you’re assuming between big stocks and small decisions don’t apply? Big stocks have billions of dollars traded and reporting/auditing requirements backed by courts and banks and governments. Try drawing a line from how Dow stocks behave to how penny stocks behave, and then extrapolate way past there.
This post describes some of a problem—a system, a self-sustaining pattern in society—that I think is of central importance in America today and that I’ve thought about for a while.
When I was younger, I saw the entire pipeline ahead of me, and thought:
Much of this seems silly, and unnecessarily cruel. Surely there’s an alternative, a better system for people like me. I exist, so statistically speaking there should be (and should have been) enough people like me that they could work together and create such alternatives. I don’t know what they are yet, but it seems like they should exist.
As it turned out, that line of thinking was wrong.
While I stand by the overall point of this post, I wrote it quickly in response to ongoing events, and maybe it isn’t up to my usual standards for writing quality.
Should I take it down and try to write a better version?
Is it better to focus more on specific details of OpenAI, or on engineering ethics more generally and how past conclusions people made about that fit the current AI situation?
Did someone else already write a better version of this, or is someone going to?
I downvoted because:

- It starts with an irrelevant AI image. Why? And then the “how hard is AI safety” image is embedded in the middle of unrelated text.
- It conflates different things under a single word “safety”, eg:

  > We quickly learned that labs that prioritized speed captured the market as users actively revolted against overly preachy models. Safety shifted from an idealized differentiator to an impediment to market dominance.

  While I could be wrong, that also seems like AI-edited text.
- It has “12 months” in the title and then has no justification for that particular number.
I’m sure it’s possible to write a better version of this post. I hope someone does. Believe it or not, my specialty is engineering, not rhetoric.
My assessment of Sam Altman is that he’s a very good actor, very untrustworthy, and a nihilistic power-seeker who cares very little about benefit or harm to humanity as a whole. I agree that this post alone is only weak support for that assessment. A proper “compendium of reasons not to trust Sam Altman” would probably end up being a considerably longer post.
Because that’s a good way to lose every other agreement I’m part of. If that gets out once, everyone I’m dealing with needs to worry about whether they’ll be the second time. Even if they’ve seen me be trustworthy for years, if I decided they were evil, I might not just leave them in the lurch, but exploit every iota of the trust I’d earned to make them suffer and pay. Who wants to risk that?
Lots of people in leadership positions, from what I’ve seen! Altman. Trump. Bush and WMD in Iraq. Ronald Reagan and Iran-Contra. Lyndon Johnson and the Gulf of Tonkin. American CEOs do that pretty often!
From my point of view, you’re playing an iterated Prisoner’s Dilemma and committing to always cooperate even after the other person defects. OpenAI had a charter and a mission and a board with pro-humanity goals. Altman and his backers broke all of them. He therefore deserves the same in return.
OpenAI leadership broke that implicit contract first. It was originally supposed to be a philanthropic thing for the benefit of humanity. It was supposed to be “open”. Then it became for-profit, now it’s going to work on killer robots for the military. To whatever extent there’s an implicit contract like you describe, it would also apply to the work that people previously did for it under false pretenses!
I disagree entirely. You’re not considering the implications of his argument. There’s a reason why some dunks on him got over 10k likes.
He argued that AI energy usage is fine because all-in it’s less than the energy usage of equivalent humans. That comparison implies that any value of humans apart from their usefulness as workers that could be replaced by AI is negligible.
Your ethical framework here doesn’t seem consistent to me, but maybe you can explain how it works.
Uh, because I’m not a person who screws people over whenever it’s convenient to me.
Doing a job that harms people because you get paid is...also screwing over people because it’s convenient to you.
If it didn’t seem like a valuable option to them, presumably they wouldn’t spend a bunch of money and personal involvement (and accept a bit of bad optics) on having a bunker.
What I’m saying is, if everything goes wrong somehow, Sam Altman has the option of going to his bunker and playing video games or whatever indefinitely, and I think he wouldn’t feel particularly bad about what’s happening to people outside.
I don’t have the impression that Altman:

- is the biological father
- sees that kid as his
- has been doing parenting
- really cares at all
Unlike peptides or other complex organic molecules, lithium is very non-specific. If it’s present at levels high enough to noticeably inhibit a specific enzyme, it also inhibits lots of other stuff a similar amount.
I’d recommend being less credulous about papers on Alzheimer’s, considering how the incentives in that field have been.
Partly because it’s gotten total aid of ~2x its annual GDP in ~4 years. (Which has led to eg a Europe-wide transformer shortage from replacing Ukrainian transformers destroyed by Shaheds.)