Universities and colleges in the United States have a philosophy of education different from what I gather is common in Europe. Almost all schools require not just mastery of a chosen subject, but also breadth requirements: some number of credits in each of math, science, writing, and the humanities, or some other distributional system.
Meanwhile, here at 5,300 feet above sea level, it consistently takes one to three minutes longer than the number on the packaging.
I mostly use LLMs for coding. Here’s the system prompt I have:
General programming principles:
- Put all configuration in global variables that I can edit, or in a single config file.
- Use functions instead of objects wherever possible.
- Prioritize low amounts of comments and whitespace; only include comments if they are necessary to understand code that is genuinely complicated.
- Prefer simple, straight-line code to complex abstractions.
- Use libraries instead of reimplementing things from scratch.
- Look up documentation for APIs on the web instead of trying to remember things from scratch.
- Write the program; reflect on its quality, simplicity, correctness, and ease of modification; then go back and write a second version.
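For concreteness, here's a hypothetical sketch of the kind of code these principles push toward (the filename, threshold, and CSV format are all made up for illustration): configuration as editable globals at the top, plain functions instead of classes, and almost no comments.

```python
# Config lives up top in globals, per the system prompt above.
# INPUT_PATH and THRESHOLD are hypothetical example settings.
INPUT_PATH = "data.csv"
THRESHOLD = 0.5

def load_rows(path):
    with open(path) as f:
        return [line.strip().split(",") for line in f]

def filter_rows(rows, threshold):
    return [r for r in rows if float(r[1]) > threshold]
```

Nothing here is object-oriented, and there's no abstraction beyond two small functions, which is the point.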
I don’t post on LessWrong much but I would much rather be explicitly rate-limited than shadow-banned, if content I was posting needed to be moderated.
Perhaps there is a different scheme for dividing gains from coöperation which satisfies some of the things we want but not superadditivity, but I’m unfamiliar with one. Please let me know if you find anything in that vein, I’d love to read about some alternatives to Shapley Value.
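For readers who haven't seen it computed, here's a minimal sketch of the Shapley value on a toy game of my own invention (the characteristic function `v` below is not from anything above): each player gets their marginal contribution, averaged over every order in which players could join the coalition.

```python
from itertools import permutations
from math import factorial

def shapley(players, v):
    # v maps a frozenset of players to the gains that coalition produces.
    phi = {p: 0.0 for p in players}
    for order in permutations(players):
        coalition = frozenset()
        for p in order:
            # Credit p with their marginal contribution in this join order.
            phi[p] += v(coalition | {p}) - v(coalition)
            coalition |= {p}
    n = factorial(len(players))
    return {p: phi[p] / n for p in players}

# Toy game: A alone earns 1, B alone earns 2, together they earn 4.
v = lambda s: {frozenset(): 0, frozenset("A"): 1,
               frozenset("B"): 2, frozenset("AB"): 4}[frozenset(s)]
```

On this toy game, A gets 1.5 and B gets 2.5, which together exhaust the grand coalition's 4; that exactness about dividing the full surplus is part of what makes alternatives hard to find.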
I had a weird one today; I asked it to write a program for me, and it wrote one about the Golden Gate Bridge, and when I asked it why, it used the Russian word for “program” instead of the English word “program”, despite the rest of the response being entirely in English.
I don’t think the Elimination approach gives P(Heads|Awake) = 1⁄3 or P(Monday|Awake) = 2⁄3 in the Single Awakening problem. In that problem, there are 6 possibilities:
P(Heads&Monday) = 0.25
P(Heads&Tuesday) = 0.25
P(Tails&Monday&Woken) = 0.125
P(Tails&Monday&Sleeping) = 0.125
P(Tails&Tuesday&Woken) = 0.125
P(Tails&Tuesday&Sleeping) = 0.125
Therefore:
P(Heads|Awake)
= P(Heads&Monday) / (P(Heads&Monday) + P(Tails&Monday&Woken) + P(Tails&Tuesday&Woken))
= 0.5
And:
P(Monday|Awake)
= (P(Heads&Monday) + P(Tails&Monday&Woken)) / (P(Heads&Monday) + P(Tails&Monday&Woken) + P(Tails&Tuesday&Woken))
= 0.75
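The arithmetic above is easy to machine-check. A small sketch enumerating the six possibilities, where the boolean marks whether Beauty is awake in that outcome (taking Heads&Tuesday as an asleep outcome, as the conditioning above implies):

```python
# Each outcome: (coin, day, awake) -> probability, as listed above.
outcomes = {
    ("Heads", "Monday", True): 0.25,
    ("Heads", "Tuesday", False): 0.25,
    ("Tails", "Monday", True): 0.125,
    ("Tails", "Monday", False): 0.125,
    ("Tails", "Tuesday", True): 0.125,
    ("Tails", "Tuesday", False): 0.125,
}
p_awake = sum(p for (c, d, a), p in outcomes.items() if a)
p_heads_given_awake = sum(
    p for (c, d, a), p in outcomes.items() if a and c == "Heads") / p_awake
p_monday_given_awake = sum(
    p for (c, d, a), p in outcomes.items() if a and d == "Monday") / p_awake
```

This reproduces P(Heads|Awake) = 0.5 and P(Monday|Awake) = 0.75.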
I also consider myself someone who had (and still has) high hopes for rationality, and so I think it’s sad that we disagree, not on the object level, but on whether we can trust the community to faithfully report their beliefs. Sure, some of it may be political maneuvering, but I mostly think it’s political maneuvering of the form of tailoring the words, metaphors, and style to a particular audience, and choosing to engage on particular issues, rather than outright lying about beliefs.
I don’t think I’m using “semantics” in a non-standard sense, but I may be using it in a more technical sense? I’m aware of certain terms which have different meanings inside of and outside of linguistics (such as “denotation”) and this may be one.
I owe you an apology; you’re right that you did not accuse me of violating norms, and I’m sorry for saying that you did. I only intended to draw parallels between your focus on the meta level and Zack’s focus on the meta level, and in my hurry I erred in painting you and him with the same brush.
I additionally want to clarify that I didn’t think you were accusing me of lying, but merely wanted to preemptively close off some of the possible directions this conversation could go.
Thank you for providing those links! I did see some of them on his blog and skipped over them because I thought, based on the first paragraph or title, they were more intracommunity discourse. I have now read them all.
I found them mostly uninteresting. They focus a lot on semantics and on whether something is a lie or not, and neither of those is particularly motivating to me. The rest focus on issues which I don’t find particularly relevant to my own personal journey, and while I wish that Zack felt able to discuss these issues openly, I don’t really think people in the community disagreeing with him is some bizarre anti-truth political maneuvering.
I haven’t read everything Zack has written, so feel free to link me something, but almost everything I’ve read, including this post, includes far more intra-rationalist politicking than discussion of object level matters.
I know other people are interested in those things. I specifically phrased my previous post in an attempt to avoid arguing about what other people care about. I can neither defend nor explain their positions. Neither do I intend to dismiss or malign those preferences by labeling them semantics. That previous sentence is not to be read as a denial of ever labeling them semantics, but rather as a denial of thinking that semantics is anything to dismiss or malign. Semantics is a long and storied discipline in philosophy and linguistics. I took an entire college course on semantics. Nevertheless, I don’t find it particularly interesting.
I’ve read A Human’s Guide to Words. I understand you cannot redefine reality by redefining words. I am trying to step past any disagreement you and I might have regarding the definitions of words and figure out if we have disagreements about reality.
I think you are doing the same thing I have seen Zack do repeatedly, which is to avoid engaging in actual disagreement and discussion, but instead repeatedly accuse your interlocutor of violating norms of rational debate. So far nothing you have said is something I disagree with, except the implication that I disagree with it. If you think I’m lying to you, feel free to say so and we can stop talking. If our disagreement is merely “you think semantics is incredibly important and I find it mostly boring and stale”, let me know and you can go argue with someone who cares more than me.
But the way that Zack phrases things makes it sound, to me, like he and I have some actual disagreement about reality which he thinks is deeply important for people considering transition to know. And as someone considering transition, if you or he or someone else can say something, or link to something, that isn’t full of semantics or call-outs about intracommunity norms of discourse, I would like to see it!
Yeah, what factual question about empirical categories is/was Zack interested in resolving? Tabooing the words “man” and “woman”, since what I mean by semantics is “which categories get which label”. I’m not super interested in discussing which empirical category should be associated with the phonemes /mæn/, and I’m not super interested in the linguistic investigation of the way different groups of English speakers assign meaning to that sequence of phonemes, both of which I lump under the umbrella of semantics.
What factual question is/was Zack trying to figure out? “Is a woman” or “is a man” are pure semantics, and if that’s all there is then… okay… but presumably there’s something else?
I think this post could be really good, and perhaps there should be an effort to make this post as good as it can be. Right now I think it has a number of issues.
- It’s too short. It moves very quickly past the important technical details, trusting the reader to pick them up. I think it would be better if it were a bit longer and luxuriated in the important technical bits.
- It is very physics-brained. Ideally we could get some math-literate non-physicists to go over this, with help from a physicist, to do a better job phrasing the parts that are unfamiliar to non-physicists.
- It should be published somewhere without Part 2. Part 2 is intracommunity discourse, Part 1 is a great explainer, and I’d love to be able to link to Part 1 without Part 2 as a consideration.
- There are distributions which won’t approach a normal: Lévy distributions and Cauchy distributions are the most commonly known.
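A quick way to see the Cauchy case, as a rough sketch (standard Cauchy sampled via the inverse CDF; the sample sizes are arbitrary): the mean of n standard Cauchy draws is itself standard Cauchy, so the spread of sample means never shrinks the way the central limit theorem would predict.

```python
import math
import random
import statistics

random.seed(0)

def cauchy():
    # Standard Cauchy via inverse CDF of a uniform draw.
    return math.tan(math.pi * (random.random() - 0.5))

def iqr(xs):
    # Interquartile range; more robust than variance, which is
    # undefined for Cauchy.
    xs = sorted(xs)
    n = len(xs)
    return xs[3 * n // 4] - xs[n // 4]

singles = [cauchy() for _ in range(2000)]
means = [statistics.fmean(cauchy() for _ in range(1000))
         for _ in range(2000)]
# For a normal distribution, the IQR of 1000-sample means would be
# ~sqrt(1000) times smaller than the IQR of single draws. For Cauchy,
# the two IQRs come out about the same (the true value is 2 for both).
```

Running this, `iqr(singles)` and `iqr(means)` both land near 2, instead of the means' spread collapsing by a factor of roughly 32.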
Yeah, to be clear I don’t have any information to suggest that the above is happening—I don’t work in EA circles—except for the fact that Ben said the EA ecosystem doesn’t have defenses against this happening, and that is one of the defenses I expect to exist.
Yeah, this post makes me wonder if there are non-abusive employers in EA who are nevertheless enabling abusers by normalizing behavior that makes abuse possible. Employers who pay their employees months late, without clarity on why or on what the plan is to get people paid eventually. Employers who employ people without writing things down, like how much people will get paid and when. Employers who try to enforce non-disclosure of work culture and pay.
None of the things above are necessarily dealbreakers in the right context or environment, but when an employer does those things they are making it difficult to distinguish themself from an abusive employer, and also enabling abusive employers because they’re not obviously doing something nonstandard. This is highlighted by:
I relatedly think that the EA ecosystem doesn’t have reliable defenses against such predators.
If EAs want to have defenses against these predators, they have to act in such a way that the early red flags here (not paid on time, no contracts, just verbal agreements) are actually serious red flags, by having non-abusive employers categorically not engage in them, and having more established EA employees react in horror if they hear about this happening.
Find an area of the thing you want to do where quality matters to you less. Instead of trying to write the next great American novel, write fanfic[1]. Instead of trying to paint a masterpiece, buy a sketchbook and trace a bunch of stuff. Instead of trying to replace your dish-ware with handmade ceramics, see how many mugs you can make in an hour. Instead of trying to invent a new beautiful operating system in a new programming language, hack together a program for a one-off use case and then throw it away.
[1] Not a diss at fanfic; it’s just that, for me at least, it’s easier not to worry about my writing quality when writing it.
I think an important point missing from the discussion on compute is training vs inference: you can totally get a state-of-the-art language model performing inference on a laptop.
This is a slight point in favor of Yudkowsky: thinking is cheap, finding the right algorithm (including weights) is expensive. Right now we’re brute-forcing the discovery of this algorithm using a LOT of data, and maybe it’s impossible to do any better than brute-forcing. (Well, the human brain can do it, but I’ll ignore that.)
Could you run an LLM on a desktop from 2008? No. But once the algorithm is “discovered” by a large computer, it can be run on consumer hardware instead of supercomputers, and I think that points towards Yudkowsky’s gesture at running AI on consumer hardware rather than Hanson’s gesture at Watson and other programs run on supercomputers.
If there really is no better way to find AI minds than brute-forcing the training of billions of parameters on a trillion tokens, then that points in the direction of Hanson, but I don’t really think that this would have been an important crux for either of them. (And I don’t really think that there aren’t more efficient ways of training.)
On the whole, I think this is more of a wash than a point for Hanson.
General principles of OSes and networks are invaluable to basically everyone.
Understanding how programming languages, compilers, and interpreters work will help you master specific programming languages.
From this Twitter thread by Jonathan Gorard, lightly edited by me: https://x.com/getjonwithit/status/2009602836997505255?s=20
He also describes one such system a bit in https://x.com/getjonwithit/status/2010422931583860860?s=20
I think this is evidence that humans have a tendency to invent mathematical axioms that generate useful mathematics for the natural sciences, somehow.