This can be modeled as a conversation with readers, where the reader prompts the writer to take the next step on the list.
Claims ought to be supported with reasons. Reasons ought to be based on evidence. Arguments are recursive: a part of an argument is an acknowledgment of an anticipated response, and another argument addresses that response. Finally, when the distance between a claim and a reason grows large, we draw connections with something called warrants.
The logic of warrants proceeds in generalities and instances. A general circumstance predictably leads to a general consequence, and if you have an instance of the circumstance you can infer an instance of the consequence.
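If it helps, here's that schema in logical notation (my gloss, not the book's):

```latex
% Warrant: a general rule licensing the step from reason to claim.
% Given the general rule and an instance of its circumstance,
% infer an instance of its consequence.
\[
\frac{\forall x\,\big(\mathrm{Circumstance}(x) \to \mathrm{Consequence}(x)\big) \qquad \mathrm{Circumstance}(a)}{\mathrm{Consequence}(a)}
\]
```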
Arguing in real-life papers is more complex than the five steps, because
Claims should be supported by two or more reasons
A writer can anticipate and address numerous responses.
As I mentioned, arguments are recursive, especially at the anticipated-response stage, but each reason and warrant can also necessitate a subargument.
You might embrace a claim too early, perhaps even before you have done much research, because you “know” you can prove it. But falling back on that kind of certainty will just keep you from doing your best thinking.
Getting the easy things right shows respect for your readers and is the best training for dealing with the hard things.
If they don’t believe the evidence, they’ll reject the reasons and, with them, your claim.
We saw previously that claims ought to be supported with reasons, and reasons ought to be based on evidence. Now we will look closer at reasons and evidence.
Reasons must be in a clear, logical order. Atomically, readers need to buy each of your reasons, but compositionally they need to buy your logic. Storyboarding is a useful technique for arranging reasons into a logical order: physical arrangements of index cards, or some DAG-like syntax. Here, you can list evidence you have for each reason or, if you’re speculating, list the kind of evidence you would need.
When storyboarding, you want to read out the top-level reasons as a composite entity without looking at the details (evidence), because you want to make sure the high-level logic makes sense.
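Here's a minimal sketch of what a DAG-like syntax could look like (my own construction, not the book's; the data borrows the tuition example from later in these notes, plus a made-up second reason):

```python
# A storyboard as a claim -> reasons -> evidence structure.
# Hypothetical example data; the shape is the point.
storyboard = {
    "claim": "american higher education must curb escalating tuition costs",
    "reasons": [
        {
            "reason": "the price of college is becoming an impediment to the american dream",
            "evidence": ["a majority of students leave college with a crushing debt burden"],
        },
        {
            "reason": "tuition has grown much faster than household income",  # made up
            "evidence": ["(evidence needed: tuition vs. household income time series)"],
        },
    ],
}

# Read the top-level logic as a composite entity, hiding the evidence,
# to check that the high-level argument makes sense on its own.
print(storyboard["claim"])
for r in storyboard["reasons"]:
    print("  because", r["reason"])
```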
Readers will not accept a reason until they see it anchored in what they consider to be a bedrock of established fact. … To count as evidence, a statement must report something that readers agree not to question, at least for the purposes of the argument. But if they do question it, what you think is hard factual evidence is for them only a reason, and you have not yet reached that bedrock of evidence on which your argument must rest.
I think there is a contract between you and the reader. You must agree to cite sources that are plausibly truthful, and your reader must agree to accept that these sources are reliable. A diligent and well-meaning reader can always second-guess whether, for instance, the bureau of subject matter statistics is collecting and reporting data correctly, but at a certain point this violates the social contract. If they’re genuinely curious or concerned, it may fall on them to investigate the source, not on you. The bar you need to meet is that your sources are plausibly trustworthy. The book doesn’t talk much about this contract, so there’s little I can say about what “plausible” means.
Sometimes you have to be extra careful to distinguish reasons from evidence: a (<claim>, <reason>, <evidence>) tuple is subject to regress in the latter two components; (A, B, C) may need to be justified by (B, C, D), and so on. The example given of this regress: suppose I told you (american higher education must curb escalating tuition costs, because the price of college is becoming an impediment to the american dream, today a majority of students leave college with a crushing debt burden). In the context of this sentence, “a majority of students...” is evidence, but it would be reasonable to ask for more specifics. In principle, any time information is compressed it may be reasonable to ask for more specifics. A new tuple might look like (the price of college is becoming an impediment to the american dream, because today a majority of students leave college with a crushing debt burden, in 2013 nearly 70% of students borrowed money for college with loans averaging $30000...). The third component is still compressing information, but it’s not in the contract between you and the reader for the reader to demand the raw spreadsheet, so this second tuple might be a reasonable stopping point of the regress.
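The regress is mechanical enough to sketch as data (same example, nested so you can see B and C shift roles):

```python
# Each tuple is (claim, reason, evidence). When readers ask "how do you
# know that?", the old reason becomes a claim and the old evidence
# becomes a reason needing evidence of its own.
tuple_1 = (
    "american higher education must curb escalating tuition costs",            # A
    "the price of college is becoming an impediment to the american dream",    # B
    "today a majority of students leave college with a crushing debt burden",  # C
)
tuple_2 = (
    tuple_1[1],  # B, promoted from reason to claim
    tuple_1[2],  # C, demoted from evidence to reason
    "in 2013 nearly 70% of students borrowed for college, loans averaging $30000",  # D
)
```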
If you can imagine readers plausibly asking, not once but many times, how do you know that? What facts make it true?, you have not yet reached what readers want—a bedrock of uncontested evidence.
Sometimes you have to be careful to distinguish evidence from reports of it. Again, because we are necessarily dealing with compressed information, we can’t often point directly to evidence. Even a spreadsheet, rather than summary statistics of it, is a compression of the phenomena in base reality that it tracks.
data you take from a source have invariably been shaped by that source, not to misrepresent them, but to put them in a form that serves that source’s ends. … when you in turn report those data as your own evidence, you cannot avoid manipulating them once again, at least by putting them in a new context.
There are criteria you want to screen your evidence against.
sufficient
representative
accurate
precise
authoritative
Being honest about the reliability and prospective accuracy of evidence is always a positive signal. Evidence can be either too precise or not precise enough. The women in one or two of Shakespeare’s plays do not represent all his women; they are not representative. Figure out what sorts of authority signals are considered credible in your community, and seek to emulate them.
Primary sources provide you with the “raw data” or evidence you will use to develop, test, and ultimately justify your hypothesis or claim.
Secondary sources are books, articles, or reports that are based on primary sources and are intended for scholarly or professional audiences.
Tertiary sources are books and articles that synthesize and report on secondary sources for general readers, such as textbooks, articles in encyclopedias, and articles in mass-circulation publications.
The distinction between primary and secondary sources comes from 19th century historians, and the idea of tertiary sources came later. The boundaries can be fuzzy, and are certainly dependent on the task at hand.
I want to reason about what these distinctions look like in the alignment community, and whether or not they’re important.
The rest of chapter five is about how to use libraries and information technologies, and evaluating sources for relevance and reliability.
Chapter 6 starts off with the kind of thing you should be looking for while you read
Look for creative agreement
Offer additional support. You can offer new evidence to support a source’s claim.
Confirm unsupported claims. You can prove something that a source only assumes or speculates about.
Apply a claim more widely. You can extend a position.
Look for creative disagreement
Contradictions of kind. A source says something is one kind of thing, but it’s another.
Part-whole contradictions. You can show that a source mistakes how the parts of something are related.
Developmental or historical contradictions. You can show that a source mistakes the origin or development of a topic.
External cause-effect contradictions. You can show that a source mistakes a causal relationship.
Contradictions of perspective. Most contradictions don’t change a conceptual framework, but when you contradict a “standard” view of things, you urge others to think in a new way.
The rest of chapter 6 is a few more notes about what you’re looking for while reading (evidence, reasons), how to take notes, and how to stay organized while doing this.
The alignment community
I think I see the creative agreement modes and the creative disagreement modes floating around in posts. Would it be more helpful if writers decided on one or two of these modes before sitting down to write?
Moreover, what is a primary source in the alignment community? Surely if one is writing about inner alignment, a primary source is the Risks from Learned Optimization paper. But what are Risks’ primary, secondary, tertiary sources? Does it matter?
Now look at Arbital. Arbital started off as a tertiary source, but articles that seemed more like primary sources started appearing there. I remember distinctly thinking “what’s up with that?” It struck me as awkward for Arbital to change its identity like that, but I end up thinking about and citing the articles that seem more like primary sources.
There’s also the problem that unwritten material in the memeplex is the real “primary” source: the first person who happens to write it down looks like they’re producing a primary source, when what they’re doing is really more like writing a secondary or even tertiary source.
Yesterday I quit my job for direct work on epistemic public goods! Day one of the direct work trial offer is April 4th, and it’ll take 6 weeks after that to know if I’m a full-time hire.
I’m turning down
a raise to 200k/yr USD
building lots of skills and career capital that would give me immense job security in worlds where investment into one particular blockchain doesn’t go entirely to zero
having fun on the technical challenges
for
a confluence of my skillset and a theory of change that could pay huge dividends in the epistemic public goods space
a 0.35x pay cut relative to my upcoming raise
the uncertainty of it being a trial offer.
having fun on the technical challenges
I’m flagging this in such detail to give you strength if you’re ever reasoning about your risk tolerance and your goals: just remember, “look at what quinn did!”
I think a property of my theory of change is that academic and commercial speed is a bottleneck. I recently realized that my mass assignment for timelines is synchronized with my mass assignment for the prosaic/nonprosaic axis. The basic idea: say a radical new paper that blows up and supplants the entire optimization literature gets pushed to the arxiv tomorrow, signaling the start of some paradigm that we would call nonprosaic. The lag time for academics and industry to figure out what’s going on, to build on that result, and for developer ecosystems to form would all compound to take us outside of what we would call “short timelines”.
The reasoning assumes that ideas are first generated in academia and don’t arise inside of companies. With DeepMind outperforming the academic protein folding community when protein folding isn’t even DeepMind’s main focus, I consider it plausible that new approaches arise within a company and only get released publicly when they are strong enough to have an effect.
Even if there’s a paper, most radical new papers get ignored by most people, and it might be that in the beginning only one company takes the idea seriously and doesn’t talk about it publicly to keep a competitive edge.
That’s totally fair, but I have a wild guess that the pipeline from google brain to google products is pretty nontrivial to traverse, and not wholly unlike the pipeline from arxiv to product.
Like, AlexNet was 2012, DeepMind patented deep Q learning in 2014, the first TensorFlow release was 2015, the first PyTorch release was 2016, the first TPU was 2016, and by 2019 we had billion-parameter GPT-2 …
So if you say “Short is ≤2 years”, then yeah, I agree. If you say “Short is ≤8 years”, I think I’d disagree, I think 8 years might be plenty for a non-prosaic approach. (I think there are a lot of people for whom AGI in 15-20 years still counts as “short timelines”. Depends on who you’re talking to, I guess.)
I should’ve mentioned in OP but I was lowkey thinking upper bound on “short” would be 10 years.
I think developer ecosystems are incredibly slow (longer than ten years for a new PL to gain penetration, for instance). I guess under a singleton “one company drives TAI on its own” scenario this doesn’t matter, because tooling tailored for a few teams internal to the same company is enough which can move faster than a proper developer ecosystem. But under a CAIS-like scenario there would need to be a mature developer ecosystem, so that there could be competition.
I feel like 7 years from AlexNet to the world of PyTorch, TPUs, tons of ML MOOCs, billion-parameter models, etc. is strong evidence against what you’re saying, right? Or were deep neural nets already a big and hot and active ecosystem even before AlexNet, more than I realize? (I wasn’t paying attention at the time.)
Moreover, even if not all the infrastructure of deep neural nets transfers to a new family of ML algorithms, much of it will. For example, the building up of people and money in ML, the building up of GPU / ASIC servers and the tools to use them, the normalization of the idea that it’s reasonable to invest millions of dollars to train one model and to fab ASICs tailored to a particular ML algorithm, the proliferation of expertise related to parallelization and hardware-acceleration, etc. So if it took 7 years from AlexNet to smooth turnkey industrial-scale deep neural nets and billion-parameter models and zillions of people trained to use them, then I think we can guess <7 years to get from a different family of learning algorithms to the analogous situation. Right? Or where do you disagree?
No, you’re right. I think I’m updating toward thinking there’s a region of nonprosaic short-timelines universes. Overall it still seems like that region is much smaller than prosaic short-timelines and nonprosaic long-timelines, though.
I asked a friend whether I should TA for a codeschool called ${{codeschool}}.
You shouldn’t hang around ${{codeschool}}. People at ${{codeschool}} are not pursuing excellence.
A hidden claim there that I would soak up the pursuit of non-excellence by proximity or osmosis isn’t what’s interesting (though I could see that turning out either way). What’s interesting is the value of non-excellence, which I’ll call adequacy.
${{codeschool}} in this case is effective and impactful at putting butts in seats at companies, and is thereby responsible for some negligible slice of economic growth. Its students and instructors are plentiful with the virtue of getting things done; do they really need the virtue of high craftsmanship? The student who reads SICP and TAPL because they’re pursuing mastery over the very nature of computation is strictly less valuable to the economy than the student who reads react tutorials because they’re pursuing some cash.
Obviously, my friend who was telling me this was of the SICP/TAPL type. In software, this is problematic: lisp and type theory will increase your thinking about the nature of computation, but will they increase your thinking about the social problem of steering a team? From an employer’s perspective, it is naive to prefer excellence over adequacy; it is much wiser to saddle the excellent person with the burden of proving that they won’t get bored easily.
Hufflepuffs can go far, and the fuel is adequacy. Enough competence to get it done, any more is egotistical, a sunk cost.
But what if it’s not about industry/markets, what if it’s about the world’s biggest problems? Don’t we want people who are more competent than strictly necessary to be working on them? Maybe, maybe not.
For a long time I’ve operated in the excellence mindset: more energy for struggling with textbooks than for exploiting the skills I already have to ship projects and participate in the real world. Thinking it might be good to shift gears and flex my hufflepuff virtues more.
The student who reads SICP and TAPL because they’re pursuing mastery over the very nature of computation is strictly less valuable to the economy than the student who reads react tutorials because they’re pursuing some cash.
Seems to me that on the market there are very few jobs for the SICP types.
The more meta something is, the less of that is needed. If you can design an interactive website, there are thousands of job opportunities for you, because thousands of companies want an interactive website, and somehow they are willing to pay for reinventing the wheel. If you can design a new programming language and write a compiler for it… well, it seems the world already has too many different programming languages, but sure, there is a place for maybe a dozen more. The probability of success is very small even if you are a genius.
The best opportunity for developers who think too meta is probably to design a new library for an already popular programming language, and hope it becomes popular. The question is how exactly you plan to get paid for that.
Probably another problem is that it requires intelligence to recognize intelligence, and expertise to recognize expertise. The SICP-type developer seems to most potential employers and most potential colleagues like… just another developer. The company does not see individual output, only team output; it does not matter that your part of the code does not contain bugs if the project as a whole does. You cannot use solutions that are too abstract for your colleagues, or for your managers. Companies value replaceability, because it is less fragile and helps to keep developer salaries lower than they might be otherwise. (In theory, you could have a team full of SICP-type developers, which would allow them to work smarter, and yet the company would feel safe. In practice, companies can’t recognize this type and don’t appreciate it, so this is not going to happen.)
Again, probably the best position for a SICP type developer in a company would be to develop some library that the rest of the company would use. That is, a subproject of a limited size that the developer can do alone, so they are not limited in the techniques they use, as long as the API is comprehensible. Ah, but before you are given such opportunity, you usually have to prove yourself in the opposite type of work.
Sometimes I feel like having a university for software developers just makes them overqualified for the market. A vocational school focusing on the current IT hype would probably make most companies more happy. Also the developers, though probably only in short term, before a new hype comes and they face the competition of a new batch of vocational school graduates trained for the new hype. A possible solution for the vocational school would be to also offer retraining courses for their former students, like three or six months to become familiar with the new hype.
Rats and EAs should help with the sanity levels in other communities
Consider politics. You should take your political preferences/aesthetics, go to the tribes that are based on them, and help them be more sane. In the politics example, everyone’s favorite tribe has failure modes, and it is sort of the responsibility of the clearest-headed members of that tribe to make sure that those failure modes don’t become the dominant force of that tribe.
Speaking for myself, having been deeply in an activist tribe before I was a rat/EA, I regret I wasn’t there to help the value-aligned and clear-headed over the last few years while some of that tribe’s worst pathologies made gains. Now it seems almost too late for them.
Actionably, I want you to
Write for journals, forums, blogospheres, zines outside of rat and EA.
Dump time into tribes that might not be the state of the art in sanity, find the most sane people there, and find ways to support them.
I speak not (well, not entirely) from my cognitive dissonance at having abandoned an aesthetic I still have feelings for. I think
Tribes besides ours are what make up the overall sanity waterline
It’s ok to set aside humility and imposter syndrome and say “I can actionably be a resource of sanity for someone else”, even tho you personally think you have a lot of work to do at getting less wrong yourself. I would say the opposite of the “affix your mask before helping others” comic strip: find synergies between mentoring others in the art and continuing to master the art yourself.
We basically want every tribe to believe true things and think clearly about their values. Yes, I’m obviously concerned that this will lead to some of my fellow rats taking my advice, applying it to a political aesthetic I find barbaric, and helping that political aesthetic win—I think this concern is basically fine because on net I expect more true beliefs and clear thinking about values to make the meaning of winning for each tribe converge on something that isn’t zero-sum.
I should also mention that I expect an externality from this effort to be an increase in the intrarat / intraEA intellectual diversity.
Broadly, the two kinds of claims are conceptual and practical.
Conceptual claims ask readers not to act, but to understand. The flavors of conceptual claim are as follows:
Claims of fact or existence
Claims of definition and classification
Claims of cause and consequence
Claims of evaluation or appraisal
There’s essentially one flavor of practical claim
Claims of action or policy.
If you read between the lines, you might notice that a kind of claim of fact or cause/consequence is that a policy works or doesn’t work to bring about some end. In this case, we see that practical claims deal in ought or should. There is a difference, perhaps subtle perhaps not, between “X brings about Y” and “to get Y we ought to X”.
Readers expect a claim to be specific and significant. You can evaluate your claim along these two axes.
To make a claim specific, you can use precise language and explicit logic. Usually, precision comes at the cost of a higher word count. To gain explicitness, use words like “although” and “because”. Note some fields might differ in norms.
You can think of the significance of a claim as how much it asks readers to change their minds, or I suppose even their behavior.
While we can’t quantify significance, we can roughly estimate it: if readers accept a claim, how many other beliefs must they change?
Avoid arrogance.
As paradoxical as it seems, you make your argument stronger and more credible by modestly acknowledging its limits.
Two ways of avoiding arrogance are acknowledging limiting conditions and using hedges to limit certainty.
Don’t run aground: there are innumerable caveats you could think of, so it’s important to limit yourself to only the most relevant ones, or the ones readers would most plausibly think of. Limiting certainty with hedging is illustrated by the example of Watson and Crick, publishing what would become a high-impact result: “We wish to suggest … in our opinion … we believe … Some … appear”
without the hedges, Crick and Watson would be more concise but more aggressive.
In most fields, readers distrust flatfooted certainty
It is not obvious how to walk the line between hedging too little and hedging too much.
It is not obvious how to walk the line between hedging too little and hedging too much.
This may be context-dependent. Different countries probably have different cultural norms. Norms may differ for higher-status and lower-status speakers. Humble speech may impress some people, but others may perceive it as a sign of weakness. Also, is your audience fellow scientists or are you writing a popular science book? (More hedging for the former, less hedging for the latter.)
notes (from a very jr researcher) on alignment training pipeline
Training for alignment research is one part competence (at math, cs, philosophy) and another part having an inside view / gears-level model of the actual problem. Competence can be outsourced to universities and independent study, but inside view / gears-level model of the actual problem requires community support.
A background assumption I’m working with is that training as a longtermist is not always synchronized with legible-to-academia training. It might be the case that jr researchers ought to publication-maximize for a period of time even if it’s at the expense of their training. This does not mean that training as a longtermist is always or even often orthogonal to legible-to-academia training, it can be highly synchronized, but it depends on the occasion.
It’s common to query what relative ratio should be assigned to competence building (textbooks, exercises) vs. understanding the literature (reading papers and alignment forum), but perhaps there is a third category: honing your threat model and theory of change.
I spoke with a sr researcher recently who roughly said that a threat model with a theory of change is almost sufficient for an inside view / gears-level model. I’m working from the theory that a honed threat model and theory of change are important for calculating interventions. See Alice and Bob in Rohin’s faq.
I’ve been trying to hone my inside view / gears-level model of the actual problem by doing weekly exercises with a group of peers. But the sr researcher I spoke to said mentorship trees of 1:1 time, not exercises that jrs can just do independently or in groups, are the only way it can happen. This is troublesome to me, as the bottleneck becomes mentors’ time. I’m not so much worried about the hopefully merit-based process of mentors figuring out who’s worth their time as I am about the overall throughput. It gets worse though: what if the process is credentialist?
Take a look at the Critch quote from the top of Rohin’s faq:
I get a lot of emails from folks with strong math backgrounds (mostly, PhD students in math at top schools) who are looking to transition to working on AI alignment / AI x-risk.
Is he implicitly saying that he offloads some of the filtering work to admissions people at top schools? Presumably people from non-top schools are also emailing him, but he doesn’t mention them.
I’d like to see a claim that admissions people at top schools are trustworthy; no one has argued this to my knowledge. I think sometimes the movement falls back on status games, unless there is some intrinsic benefit to “top schools” (besides building social power/capital) that everyone is aware of. (Indeed, if someone’s argument is that they identified a lever that requires a lot of social power/capital, then maybe they can put that top school on their resume to use; but if the lever is strictly high-quality useful research, instead of, say, steering a federal government, this doesn’t seem to apply.)
Is he implicitly saying that he offloads some of the filtering work to admissions people at top schools?
I don’t think Critch’s saying that the best way to get his attention is through cold emails backed up by credentials. The whole post is about him not using that as a filter to decide who’s worth his time; rather, people should create good technical writing to get attention.
Critch’s written somewhere that if you can get into UC Berkeley, he’ll automatically allow you to become his student, because getting into UC Berkeley is a good enough filter.
Where did he say that? Given that he’s working at UC Berkeley I would expect him to treat UC Berkeley students preferentially for reasons that aren’t just about UC Berkeley being able to filter.
It’s natural that you can sign up for one of the classes he teaches at UC Berkeley by being a student of UC Berkeley.
Being enrolled at MIT might be just as hard as being enrolled at UC Berkeley, but it doesn’t give you the same access to courses taught at UC Berkeley by its faculty.
If you get into one of the following programs at Berkeley:
a PhD program in computer science, mathematics, logic, or statistics, or
a postdoc specializing in cognitive science, cybersecurity, economics, evolutionary biology, mechanism design, neuroscience, or moral philosophy,
… then I will personally help you find an advisor who is supportive of you researching AI alignment, and introduce you to other researchers in Berkeley with related interests.
and also
While my time is fairly limited, I care a lot about this field, and you getting into Berkeley is a reasonable filter for taking time away from my own research to help you kickstart yours.
Methods, famously, includes the line “I am a descendant of the line of Bacon”, tracing empiricism to either Roger (13th century) or Francis (16th century) (unclear which).
Though a cursory wikiing shows an 11th-century figure providing precedents for empiricism! Alhazen, or Ibn al-Haytham, worked mostly on optics apparently, but had some meta-level writings about the scientific method itself. I found this shockingly excellent quote
The duty of the man who investigates the writings of scientists, if learning the truth is his goal, is to make himself an enemy of all that he reads, and … attack it from every side. He should also suspect himself as he performs his critical examination of it, so that he may avoid falling into either prejudice or leniency.
Should we do more to celebrate Alhazen as an early rationalist?
New discord server dedicated to multi-multi delegation research
DM me for invite if you’re at all interested in multipolar scenarios, cooperative AI, ARCHES, social applications & governance, computational social choice, heterogeneous takeoff, etc.
(side note: I’m also working on figuring out what unipolar worlds and/or homogeneous takeoff worlds imply for MMD research.)
Last time we discussed the difference between information and a question or a problem, and I suggested that the novelty-satisfying mode of information presentation isn’t as good as addressing actual questions or problems. In chapter 3, which I have not typed up thoughts about, a three-step procedure is introduced
Topic: “I am studying …”
Question: ”… because I want to find out what/why/how …”
Significance: ”… to help my reader understand …”
As we elaborate on the different kinds of problems, we will vary this framework and launch exercises from it.
Some questions raise problems, others do not. A question raises a problem if not answering it keeps us from knowing something more important than its answer.
The basic feedback loop introduced in this chapter relates practical with conceptual problems and relates research questions with research answers.
Practical problem -> motivates -> research question -> defines -> conceptual/research problem -> leads to -> research answer -> helps to solve -> practical problem (loop)
What should we do vs. what do we know—practical vs conceptual problems
Opposite eachother in the loop are practical problems and conceptual problems. Practical problems are simply those which imply uncertainty over decisions or actions, while conceptual problems are those which only imply uncertainty over understanding. Concretely, your bike chain breaking is a practical problem because you don’t know where to get it fixed, implying that the research task of finding bike shops will reduce your uncertainty about how to fix the bike chain.
Conditions and consequences
The structure of a problem is that it has a condition (or situation) and the (undesirable) consequences of that condition. The consequences-costs model of problems holds both for practical problems and conceptual problems, but comes in slightly different flavors. In the practical problem case, the condition and costs are immediate and observed. However, a chain of “so what?” must be walked.
Readers judge the significance of your problem not by the cost you pay but by the cost they pay if you don’t solve it… To make your problem their problem, you must frame it from their point of view, so that they see its cost to them.
One person’s cost may be another person’s condition, so when stating the cost you ought to imagine a socratic “so what?” voice, forcing you to articulate more immediate costs until the socratic voice has to really reach in order to say that it’s not a real cost.
The conceptual problem case is where intangibles play in. The condition in that case is always the simple lack of knowledge or understanding of something. The cost in that case is simple ignorance.
Modus tollens
A helpful exercise: if you find yourself saying “we want to understand x so that we can y”, try flipping it to “we can’t y if we don’t understand x”. This shifts the burden onto the reader to provide ways in which we can y without understanding x. You can do this iteratively: come up with _z_s which you can’t do without y, and so on.
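In propositional form, the flip is just contraposition (my gloss):

```latex
% "We want to understand x so that we can y" posits a dependence:
% doing y requires understanding x, written y => u(x).
\[
\big(y \Rightarrow u(x)\big) \;\equiv\; \big(\lnot u(x) \Rightarrow \lnot y\big)
\]
% Iterating: find a z with z \Rightarrow y, giving \lnot y \Rightarrow \lnot z,
% and so on up the chain of applications.
```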
Pure vs. applied research
Research is pure when the significance stage of the topic-question-significance frame refers only to knowing, not to doing. Research is applied when the significance step refers to doing. Notice that the question step, even in applied research, refers to knowing or understanding.
Connecting research to practical consequences
You might find that the significance stage is stretching a bit to connect with the conceptual understanding gained from the question stage. Sometimes you can add a fourth step, turning the topic-question-significance frame into topic-conceptual question-conceptual significance-possible practical application. Splitting significance in two helps you draw reasonable, plausible applications. A claimed application is a stretch when it is not plausible. Note: the authors suggest that there is a class of conceptual papers in which you want to save practical implications entirely for the conclusion; for a certain kind of paper, practical applications do not belong in the introduction.
AI safety
One characteristic of AI safety that makes it difficult both to do and to interface with is that the chains of “so what?” are often very long. The path from deconfusion research to everyone dying or not dying feels like a stretch if not done carefully, and has a lot of steps when done carefully. As I mentioned in my last post, it’s easy to get sucked into the “novel information for its own sake” regime, at least as a reader. More practically oriented approaches are perhaps those that seek new regimes for how to even train models, where the “so what?” is answered “so we have dramatically fewer OODR-failures” or something. The condition-costs framework seems really beneficial for articulating alignment agendas and directions.
Misc
“Researchers often begin a project without a clear idea of what the problem even is.”
Look for problems as you read. When you see contradictions, inconsistencies, incomplete explanations tentatively assume that readers would or should feel the same.
Ask not “Can I solve it?” but “will my readers think it ought to be solved?”
“Try to formulate a question you think is worth answering, so that down the road, you’ll know how to find a problem others think is worth solving.”
I’m not aware of a literature or a dialogue on what I think is a very crucial divide in longtermism.
In this shortform, I’m going to take a polarity approach. I’m going to bring each pole to its extreme, probably beyond positions that are actually held, because I think median longtermism, or the longtermism described in The Precipice, is a kind of average of the two.
Negative longtermism is saying “let’s not let some bad stuff happen”, namely extinction. It wants to preserve. If nothing gets better for the poor or the animals or the astronauts, but we dodge extinction and revolution-erasing subextinction events, that’s a win for negative longtermism.
In positive longtermism, such a scenario is considered a loss. From an opportunity cost perspective, the failure to erase suffering or to bring agency and prosperity to 1e1000 comets and planets hurts literally as much as extinction.
Negative longtermism is a vision of what shouldn’t happen. Positive longtermism is a vision of what should happen.
My model of Ord says we should lean at least 75% toward positive longtermism, but I don’t think he’s an extremist. I’m uncertain if my model of Ord would even subscribe to the formation of this positive and negative axis.
What does this axis mean? I wrote a little about this earlier this year. I think figuring out what projects you’re working on and who you’re teaming up with strongly depends on how you feel about negative vs. positive longtermism. The two dispositions toward myopic coalitions are “do” and “don’t”. I won’t attempt to claim which disposition is more rational or desirable, but will explore each branch.
When Alice wants future X and Bob wants future Y, but if they don’t defeat the adversary Adam they will be stuck with future 0 (containing great disvalue), Alice and Bob may set aside their differences and choose to form a myopic coalition to defeat Adam, or not.
Form myopic coalitions. A trivial case where you would expect Alice and Bob to tend toward this disposition is if X and Y are similar. However, if X and Y are very different, Alice and Bob must each believe that defeating Adam completely hinges on their teamwork in order to tend toward this disposition, unless they’re in a high-trust situation where they each can credibly signal that they won’t try to get a head start on the X vs. Y battle until 0 is completely ruled out.
Don’t form myopic coalitions. A low-trust environment where Alice and Bob each fully expect the other to try to get a head start on X vs. Y during the fight against 0 would tend toward the disposition of not forming myopic coalitions. This could lead to great disvalue if a project against Adam can only work via a team of Alice and Bob.
An example of such a low-trust environment is, if you’ll excuse political compass jargon, bottom-lefts online debating internally the merits of working with top-lefts on projects against capitalism. The argument for coalition is that capitalism is a formidable foe and they could use as much teamwork as possible; the argument against coalition is historical backstabbing and pogroms when top-lefts take power and betray the bottom-lefts.
For a silly example, consider an insurrection against broccoli. The ice cream faction can coalition with the pizzatarians if they do some sort of value trade that builds trust, like the ice cream faction eating some pizza and the pizzatarians eating some ice cream. Indeed, the viciousness of the fight after broccoli is abolished may have nothing to do with the solidarity between the two groups under broccoli’s rule. It may or may not be the case that the ice cream faction and the pizzatarians can come to an agreement about how best to increase value in a post-broccoli world. Civil war may follow revolution, or not.
Now, while I don’t support long reflection (TLDR: I think a collapse of diversity sufficient to permit a long reflection would be a tremendous failure), I think elements of positive longtermism are crucial for things to improve for the poor or the animals or the astronauts. I think positive longtermism could outperform negative longtermism when it comes to finding synergies between the extinction prevention community and the suffering-focused ethics community. However, I would be very upset if I turned around in a couple years and positive longtermists were, like, the premier face of longtermism. The reason for this is that once you admit positive goals, you have to deal with everybody’s political aesthetics, like a philosophy professor’s preference for a long reflection or an engineer’s preference for moar spaaaace or a conservative’s preference for retvrn to pastorality or a liberal’s preference for intercultural averaging. A negative goal like “don’t kill literally everyone” greatly lacks this problem. Yes, I would change my mind about this if 20% of global defense expenditure were targeted at defending against extinction-level or revolution-erasing events; then the neglectedness calculus would lead us to focus the by-comparison-smaller EA community on positive longtermism.
The takeaway from this shortform should be that quinn thinks negative longtermism is better for forming projects and teams.
Writers can’t avoid creating some role for themselves and their readers, planned or not
Before considering the role you’re creating for your reader, consider the role you’re creating for yourself. Your broad options are the following
I’ve found some new and interesting information—I have information for you
I’ve found a solution to an important practical problem—I can help you fix a problem
I’ve found an answer to an important question—I can help you understand something better
The authors recommend assuming one of these three. There is of course a wider gap between information and the neighborhood of problems and questions than there is between problems and questions! Later on in chapter four the authors provide a graph illustrating problems and questions: Practical problem -> motivates -> Research question -> defines -> Conceptual/research problem. Information, when provided mostly for novelty, however, is not in this cycle. Information can be leveled at problems or questions, plays a role in providing solutions or answers, but can also be for “its own sake”.
I’m reminded of a paper/post I started but never finished, on providing a poset-like structure to capabilities. I thought it would be useful if you could give a precise ordering on a set of agents, to assign supervising/overseeing responsibilities. Looking back, providing this poset would just be a cool piece of information, effectively: I wasn’t motivated by a question or problem so much as “look at what we can do”. Yes, I can post-hoc think of a question or a problem that the research would address, but that was not my prevailing seed of a reason for starting the project. Is the role of the researcher primarily a writing thing, though, applying mostly to the final draft? Perhaps it’s appropriate for early stages of the research to involve multi-role drifting, even if it’s better for the reader experience if you settle on one role in the end.
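For concreteness, an entirely hypothetical sketch of what that poset might have looked like: capability sets ordered by inclusion, with supervision requiring strict dominance.

```python
# Hypothetical capability poset: order agents by capability-set inclusion.
# Incomparable agents (neither set contains the other) can't supervise
# each other, which is exactly what makes this a partial order.
capabilities = {
    "agent_a": {"code", "math", "planning"},
    "agent_b": {"code", "math"},
    "agent_c": {"rhetoric"},
}

def can_supervise(supervisor: str, supervisee: str) -> bool:
    return capabilities[supervisee] < capabilities[supervisor]  # strict subset

print(can_supervise("agent_a", "agent_b"))  # True: a strictly dominates b
print(can_supervise("agent_a", "agent_c"))  # False: a and c are incomparable
```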
Additionally, it occurs to me that maybe the “I have information for you” mode is just a cheaper version of the question/problem modes. Sometimes I think of something that might lead to cool new information (either a theory or an experiment), and I’m engaged more by the potential for novelty than by the potential for applications.
I think I’d like to become more problem-driven: to derive possibilities for research from problems, and make sure I’m not just seeking novelty. At the end of the day, I don’t think these roles are “equal”; I think the problem-driven role is the best one, the one we should aspire to.
[When you adopt one of these three roles, you must] cast your readers in a complementary role by offering them a social contract: I’ll play my part if you play yours … if you cast them in a role they won’t accept, you’re likely to lose them entirely… You must report your research in a way that motivates your readers to play the role you have imagined for them.
The three reader roles complementing the three writer roles are
Entertain me
Help me solve my practical problem
Help me understand something better
It’s basically stated that your choice of writer role implies a particular reader role, 1 mapping to 1, 2 mapping to 2, and 3 mapping to 3.
Role 1 speaks to an important difficulty in the x-risk, EA, alignment community: how not to get drawn into the phenomenal sensation of insight when something isn’t going to help you on a problem. At my local EA meetup I sometimes worry that the impact of our speaker events is low, because the audience may not meaningfully update even though they’re intellectually engaged. Put another way, intellectual engagement is goodhartable: the sensation of insight can distract you from your resolve to shatter your bottlenecks and save the world if it becomes an end in itself. Should researchers who want to be careful about this avoid the first role entirely? Should the alignment literature look upon the first reader role as a failure mode? We talk about a lot of cool stuff, and it can be easy to be drawn in by the cool factor, like some of the non-EA rationalists I’ve met at meetups.
I’m not saying reader role number two absolutely must dominate, because it can diverge from deconfusion which is better captured by reader role number three.
Division of labor between reader and writer, writer roles do not always imply exactly one reader role
Isn’t it the case that deconfusion/writer-role-three research can be disseminated to practical (as opposed to theoretical) minded people, who then turn question-answer into problem-solution? You can write in the question-answer regime, but there may be that (rare) reader who interprets it in the problem-solution regime! This seems like an extremely good thing that we should find a way to encourage. In general, reading that drifts across multiple roles seems like the most engaged kind of reading.
Would there be a way of estimating the ratio of people within the amazon organization who are fanatical about same-day delivery against how many are “just working a job”? Does anyone have a guess? My guess is that an organization of that size with a lot of cash only needs about 50 true fanatics; the rest can be “mere employees”. What do yall think?
I can’t really think of any research bearing on this, and unclear how you’d measure it anyway.
One way to go might be to note that there is a wide (and weird) variance between the efficiency of companies: market pressures are slack enough that two companies doing as far as can be told the exact same thing in the same geographic markets with the same inputs might be almost 100% different (I think that was the range in the example of concrete manufacturing in one paper I read); a lot of that difference appears to be explainable by the quality of the management, and you can do randomized experiments in management coaching or intensity of management and see substantial changes in the efficiency of a company (Bloom—the other one—has a bunch of studies like this). Presumably you could try to extrapolate from the effects of individuals to company-wide effects, and define the goal of the ‘fanatical’ as something like ‘maintaining top-10% industry-wide performance’: if educating the CEO is worth X percentiles and hiring a good manager is worth 0.0Y percentiles and you have such and such a number of each, then multiply out to figure out what will bump you 40 percentiles from an imagined baseline of 50% to the 90% goal.
Another argument might be a more Fermi estimate style argument from startups. A good startup CEO should be a fanatic about something, otherwise they probably aren’t going to survive the job. So we can assume one fanatic at least. People generally talk about startups beginning to lose the special startup magic of agility, focus, and fanaticism at around Dunbar’s number level of employees like 300, or even less (eg Amazon’s two-pizza rule which is I guess 6 people?). In the ‘worst’ case that the founder has hired 0 fanatics, that implies 1 fanatic can ride herd over no more than ~300 people; in the ‘best’ case that he’s hired dozens, then each fanatic can only cover for more like 2 or 3 non-fanatics. I’m not sure how we should count Amazon’s employees: do the warehouse workers, often temps, really count? They are so micro-managed and driven by the warehouse operation that they hardly seem even relevant to the question. I can’t quickly find that number, just totals, but let’s say there’s like 100,000 non-warehouse-ish employees; at a 300:1 ratio, you’d need 333, and at 3:1, 33,333. The former might be feasible, the latter not so much. (And would explain why Amazon.com seems to be a gradually degrading shopping experience—so many ads! Why are there ads getting in my way when I’m trying to give you my money already, Amazon!)
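Spelling out the arithmetic from the comment above (the numbers are that comment's guesses, not data):

```python
# Back-of-envelope: guessed non-warehouse headcount, bounded by the two
# guessed ratios of non-fanatics that one fanatic can "ride herd" over.
non_warehouse_employees = 100_000  # the comment's guess, not a real figure

for label, ratio in [("worst case (founder hired 0 fanatics)", 300),
                     ("best case (founder hired dozens)", 3)]:
    print(f"{label}: ~{non_warehouse_employees / ratio:,.0f} fanatics needed")
# worst case ~333 (maybe feasible); best case ~33,333 (not so much)
```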
I’m not sure “fanatical” is well-defined enough to mean anything here. I doubt there are any who’d commit terrorist acts to further same-day delivery. There are probably quite a few who believe it’s important to the business, and a big benefit for many customers.
You’re absolutely right that a lot of employees and contractors can be “mere employees”, not particularly caring about long-term strategy, customer perception, or the like. That’s kind of the nature of ALL organizations and group behaviors, including corporate, government, and social groupings. There’s generally some amount of influencers/selectors/visionaries, some amount of strategists and implementers, and a large number of followers. Most organizations are multidimensional enough that the same people can play different roles on different topics as well.
I don’t think it needs any true fanatics. It just needs incentives.
This isn’t to say there won’t be fanatics anyway. There probably aren’t many things that nobody can get fanatical about. This is even more true if they’re given incentives to act fanatical about it.
I don’t think it needs any true fanatics. It just needs incentives.
Sure, but the incentive structure needs continual maintenance to keep it aligned with or pointing at the goal, which naturally leads to the questions of how many people are needed to keep the structure pointing at the goal, and what the motivation of those people will be.
We need a name for the following heuristic. I think of it as one of those “tribal knowledge” things that gets passed on like an oral tradition without being citeable as part of a literature. If you come up with a name I’ll certainly credit you in a top-level post!
I heard it from Abram Demski at AISU ’21.
Suppose you’re either going to end up in world A or world B, and you’re uncertain about which one it’s going to be. Suppose you can pull lever LA, which is worth 100 value if you end up in world A, or lever LB, which is worth 100 value if you end up in world B. The heuristic is that if you pull LA but end up in world B, you do not want to have created disvalue; in other words, your intervention conditional on the belief that you’ll end up in world A should not screw you over in timelines where you end up in world B.
This can be fully mathematized by saying: “if most of your probability mass is on ending up in world A, then obviously you’d pick a lever L such that V(L|A) is very high; just also make sure that V(L|B) >= 0 or creates an acceptably small amount of disvalue”, where V(L|A) is read “the value of pulling lever L given that you end up in world A”.
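As a display equation (my formalization of what I take the heuristic to mean, not Abram's own statement):

```latex
% Maximize expected value, subject to a floor on how badly the lever
% can go in the world you didn't bet on:
\[
L^* \;=\; \arg\max_{L \in \{L_A,\, L_B\}} \; P(A)\,V(L \mid A) + P(B)\,V(L \mid B)
\quad \text{s.t.} \quad \min_{W \in \{A,\,B\}} V(L \mid W) \;\ge\; -\varepsilon,
\]
% where \varepsilon is the "acceptably small amount of disvalue".
```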
Why are you specifying 100 or 0 value, and using fuzzy language like “acceptably small” for disvalue?
Is this based on “value” and “disvalue” being different dimensions, and thus incomparable? Wouldn’t you just include both in your prediction, and run it through your (best guess of) utility function and pick highest expectation, weighted by your probability estimate of which universe you’ll find yourself in?
Why are you specifying 100 or 0 value, and using fuzzy language like “acceptably small” for disvalue?
100 and 0 in this context make sense. Or at least in my initial reading: arbitrarily-chosen values that are in a decent range to work quickly with (akin to why people often work in percentages instead of 0..1)
Is this based on “value” and “disvalue” being different dimensions, and thus incomparable?
It is—I’m going to say “often”, although I am aware this is suboptimal phrasing—often the case that you are confident in the sign of an outcome but not the magnitude of the outcome.
As such, you can often end up with discontinuities at zero.
Wouldn’t you just include both in your prediction, and run it through your (best guess of) utility function and pick highest expectation, weighted by your probability estimate of which universe you’ll find yourself in?
Dropping the entire probability distribution of outcomes through your utility function doesn’t even necessarily have a closed-form result. In a universe where computation itself is a cost, finding a cheaper heuristic (and working through if said heuristic has any particular basis or problems) can be valuable.
The heuristic in the grandparent comment is just what happens if you are simultaneously very confident in the sign of positive results, and have very little confidence in the magnitude of negative results.
It is often the case that you are confident in the sign of an outcome but not the magnitude of the outcome.
This heuristic is what happens if you are simultaneously very confident in the sign of positive results, and have very little confidence in the magnitude of negative results.
I’m not sure I understand. If the lever is +100 in world A and −90 in world B, it seems like a good bet if you don’t know which world you’re in. Or is that what you mean by “acceptably small amount of disvalue”?
Obviously there are considerations downstream of articulating this. One is that when P(A) > P(B) but V(LA|A) < V(LB|B), it can be reasonable to hedge on ending up in world B even though it’s less probable than ending up in world A.
I think one of the most crucial meta-skills I’ve developed is honing my sense of who’s criticizing me vs. who’s complaining.
A criticism is actionable, and implicitly it’s often from someone who wants you to win. A complaint is when you can’t figure out how you’d actionably fix or improve something based on what you’re being told.
This simple binary story is problematic. It can empower you to ignore criticism you don’t like by providing a set of excuses, if you’re not careful. Sometimes it’s operationally impossible to distinguish a complaint from a criticism that runs so deep it unsettles your premises! I think people who are building things can be excused for ignoring advice if the only actionable way of accepting that advice is to completely overhaul their approach, for reasons of focus and other logistical concerns. If it’s that rare time in a project when you are going back to the drawing board and starting over, that’s definitely the time to mine complaints for useful insight.
Related: the legend of the amazon customer in the 90s who was insatiably filling out customer feedback forms, to the point where 2000s or 2010s amazon named a boardroom after him. The idea was that this guy helped them improve a lot: surely it would have been easy to dismiss him as a complainer, but they didn’t; they found actionable advice within the complaints. I think your ability to take something that isn’t intended to help you and isn’t actionable on its face, and mine it for actionable insight, can be very important. But for filtering, for attention, for sanity, dismissing something quickly because it doesn’t seem like it can help you or the project improve can be valid as well.
The anchoring effect is enough for a Schelling point; it doesn’t have to be a simple solution.
For instance, a new nation that wants to move away from dictatorship is automatically going to build a democracy with multiple independent arms (legislature, judiciary, executive), a constitution, periodic elections of representatives, etc.
They could choose to try a direct democracy, or change the term from 5 years to 1 year, or have public elections for the judiciary too, or any other deviation from how democracies usually run, but they won’t. Fear of the unknown + no creativity or motivation will be sufficient for them to copy existing countries’ democratic structure.
Disvalue via interpersonal expected value and probability
My deontologist friend just told me that treating people like investments is no way to live. The benefits of living by that take are that your commitments are more binding and that you actually do factor out uncertainty, because when you treat people like investments you always think “well, someday I’ll no longer be creating value for this person and they’ll drop me from their life”. It’s hard to make long-term plans, living like that.
I’ve kept friends around out of loyalty to what we shared 5-10 years ago while questioning an expected-value-theory or probability-theory-based value prop, so I’m not, like, super guilty of this or anything. But overall I do take expected value theory and probability theory into interpersonal matters, and I don’t object when others do the same to me. Though it’s hard sometimes, I think it’s basically fine if someone drops me because I’m not adding value for them. An edge case in the opposite direction is feeling obligated to build deep friendships with every acquaintance, which is also a little silly. But a sweet spot, like a marriage or another way of teaming up (like for a project), might meaningfully call for a suspension of expected value theory and probability theory.
One thing to be careful about in such decisions—you don’t know your own utility function very precisely, and your modeling of both future interactions and your value from such are EXTREMELY lossy.
The best argument for deontological approaches is that you’re running on very corrupt hardware, and rules that have evolved and been tested over a long period of time are far more trustworthy than your ad-hoc analysis which privileges obvious visible artifacts over more subtle (but often more important) considerations.
Imo choosing to disconnect from people who are no longer providing any value to you is just a healthy thing to do, even a deontologist should agree with that.
I may refine this into a formal bounty at some point.
I’m curious if censorship would actually work in the context of blocking deployment of superpowerful AI systems. Sometimes people will mention “matrix multiplication” as a sort of goofy edge case, which isn’t very plausible, but that doesn’t mean there couldn’t be actual political pressure to censor it. A more plausible example would be attention. Say the government threatens soft power against arxiv if they don’t pull attention is all you need, or threatens soft power against harvard if their linguistics department doesn’t pull the pytorch-annotated attention is all you need. By this point, it goes without saying that black hat hackers writing down the equations would face serious consequences if they got caught. Now instead of attention, imagine some more galaxy-brained paper or insight that gets published in 2028 and is an actual missing ingredient to advanced AI (assuming you’re not one of the people who think attention is all you need already is that paper).
While it's certainly a research project to look at pros and cons of this approach to safety from AI, I think before that we need someone to profile the efficacy of technological censorship through history to arrive at an estimate of how well this would work, i.e., how well it would actually slow or stop the propagation of this information, and how well it would slow or stop the deployment of systems based on that information.
My guess is that the ideal person to execute on this bounty would be some patent law nerd, tho I'm sure a variety of types of nerd could do a great job.
You’ll need a govt body full of people who are aligned in their thinking, no one should defect.
Also, Yudkowsky's response to this would prolly be that it isn't enough to censor the idea the first time it's created; someone else will just discover another (or the same) path to AGI independently. See pivotal act.
any literature on estimates of social impact of businesses divided by their valuations?
The idea that dollars are a proxy for social impact is neat, but it leaves a lot of room for goodhart, and I think it's plausible that they diverge entirely in some cases. It would be useful to know, if it's possible to know, what's going on here.
Why have I heard about Tyson investing into lab grown, but I haven’t heard about big oil investing in renewable?
Tyson’s basic insight here is not to identify as “an animal agriculture company”. Instead, they identify as “a feeding people company”. (Which happens to align with doing the right thing, conveniently!)
It seems like big oil is making a tremendous mistake here. Do you think oil execs go around saying "we're an oil company"? When they could instead be going around saying "we're a powering-stuff company". Being a powering-stuff company means you have fuel source indifference!
I mean if you look at all the money they had to spend on disinformation and lobbying, isn’t it insultingly obvious to say “just invest that money into renewable research and markets instead”?
Is there dialogue on this? Also, have any members of “big oil” in fact done what I’m suggesting, and I just didn’t hear about it?
The main problem is that prior investment into the oil method of powering stuff doesn’t translate into having a comparative advantage in a renewable way of powering stuff. They want a return on their existing massive investments.
While this looks superficially like a sunk cost fallacy, it isn’t. If a comparatively small investment (mere billions) can ensure continued returns on their trillions of sunk capital for another decade, it’s worth it to them.
Investment into renewable powering stuff would require substantially different skill sets in employees, in very different locations, and highly non-overlapping investment. At best, such an endeavour would constitute a wholly owned subsidiary that grows while the rest of the company withers. At worst, a parasite that hastens the demise of the parent while eventually failing in the face of competition anyway.
I've had a background assumption in my interpretation of and beliefs about reward functions for as long as I can remember (i.e. since first reading the sequences), one that I suddenly realized isn't written down anywhere. Over the last two years I've gained enough experience writing coq to inspire a convenient way of framing it.
Computational vs axiomatic reward functions
Computational vs axiomatic in proof engineering
A proof engineer calls a proposition computational if its proof can be broken down into parts.
For example, a + (b + c) = (a + b) + c is computational because you can think of its proof as the application of the associativity lemma then the application of something called a "refl", the fundamental termination of a proof involving equality. Passing around the associativity lemma is in a sense passing around its proof, which, assuming a is inductive (take nat; zero and successor), is an application of nat's induction principle, unpacking the recursive definition of +, etc.
In other words, if my adversary asks "why is a + (b + c) = (a + b) + c?", I can show them; I only have to make sure they agree to the fundamental definitions of nat and + : nat -> nat -> nat, the rest I can compel them to believe.
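Here's a minimal sketch of that in Coq (with + recursing on its first argument, so the induction runs on a; exact lemma names vary by library):

```coq
Lemma my_add_assoc : forall a b c : nat, a + (b + c) = (a + b) + c.
Proof.
  intros a b c. induction a as [| a IH].
  - reflexivity.                    (* 0 + (b + c) and (0 + b) + c both compute to b + c *)
  - simpl. rewrite IH. reflexivity. (* peel off one S, then reuse the induction hypothesis *)
Qed.
```

Every step is something the adversary can replay mechanically; nothing is asserted without proof.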
On the flip side, consider function extensionality, or f = g <-> forall x, f x = g x, which is not provable (because, to name but one scenario, we do not know that the domain of f, which equals the domain of g, is countable). Because they can't prove it, theories "admit function extensionality as an axiom" from time to time.
In other words, if I invoke function extensionality in a proof, and my adversary has agreed to the basic type and function definitions, they remain entitled to reject my proof, because if they ask why I believe function extensionality the best I can do is say "because I declared it on line 7".
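For contrast, here's what the axiomatic move looks like in Coq (the standard library packages this as FunctionalExtensionality; declaring it by hand makes the "line 7" move explicit):

```coq
(* The "line 7" move: assert function extensionality without proof. *)
Axiom fun_ext : forall (A B : Type) (f g : A -> B),
  (forall x : A, f x = g x) -> f = g.

(* Any lemma that invokes fun_ext is only as trustworthy as the axiom;
   `Print Assumptions my_lemma.` will report the dependency. *)
```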
We do not call reasoning involving axioms computational. Instead, the discourse has sort of become poisoned by the axiom; its verificational properties have become weaker. (Intuitively, I can declare on line 7 anything I want; the risk of proving something that is actually false increases a great deal with each axiom I declare.)
Apocryphally, a lecturer recalled a meeting, perhaps of the univalent foundations group at IAS, when homotopy type theory (HoTT) was brand new (HoTT is based on something called univalence, which is about reasoning on type equalities in arbitrary "universes" ("kinds" for the haskell programmer)). In HoTT 1.0, univalence relied on an axiom (done carefully of course, to minimize the damage of the poison), and Per Martin-Löf is said to have remarked "it's not really type theory if there's an axiom". HoTT 2.0, called cubical type theory, repairs this, which is why cubical TT is sometimes called computational TT.
AIXI-like and AIXI-unlike AGIs
If the space of AGIs can be carved into AIXI-like and AIXI-unlike with respect to goals, clearly AIXI-like architectures have goals imposed on them axiomatically by the programmer. The complement, of course, is where the reward function is computational, i.e. decomposable.
See the NARS literature of Wang et al. for something at least adjacent to AIXI-unlike: reasoning about NARS emphasizes that reward functions can be computational to an extent, but "bottom out" at atoms eventually. Still, NARS goals are computational to a far greater degree than AIXI-likes'.
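A toy illustration of the distinction, mine rather than anything from the AIXI or NARS literature: an axiomatic reward is an opaque primitive, while a computational reward decomposes into named parts you can interrogate, even though those parts still bottom out at atoms.

```python
# Axiomatic: the programmer declared it; the agent can evaluate it but
# cannot ask *why* it returns what it returns.
def axiomatic_reward(state: dict) -> float:
    return 1.0 if state.get("goal_reached") else 0.0

# Computational: built from named subgoals, so "why was reward high?"
# decomposes into contributions, the way a proof breaks into lemmas.
subgoals = {
    "ate_food":    lambda s: 0.4 * s.get("food", 0.0),
    "stayed_safe": lambda s: 0.6 * (1.0 - s.get("damage", 0.0)),
}

def computational_reward(state: dict) -> float:
    return sum(part(state) for part in subgoals.values())

def explain_reward(state: dict) -> dict:
    # The decomposition; the atoms (the lambdas) are still unexplained.
    return {name: part(state) for name, part in subgoals.items()}

state = {"food": 0.5, "damage": 0.2}
print(computational_reward(state))  # ~0.68
print(explain_reward(state))        # contribution per subgoal
```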
Conjecture: humans are AIXI-unlike AGIs
This should be trivial: humans can decompose their reward functions in ways richer than “because god said so”.
Relation to mutability???
If the space of AGIs can be carved into AIXI-like and human-like with respect to goals, does the computationality question help me reason about modifying my own reward function? Intuitively, AIXI's axiomatic goal corresponds to immutability. However, I don't think there's an implication that AIXI-unlikes get self-modification for free. More work needed.
Position of this post in my overall reasoning
In general, my basic understanding that the AGI space can be divided into what I’ve called AIXI-like and AIXI-unlike with respect to how reward functions are reasoned about, and that computationality (anaxiomaticity vs axiomaticity?) is the crucial axis to view, is deeply embedded in my assumptions. Maybe writing it down will make eventually changing my mind about this easier: I’m uncertain just how conventional my belief/understanding is here.
I should be more careful not to imply I think that we have solid specimens of computational reward functions; it's more that I think they're a theoretically important region of the space of possible minds, and might factor into idealizations of agency.
I come to you with a dollar I want to spend on AI. You can allocate p pennies to go to capabilities and 100-p pennies to go to alignment, but only if you know of a project that realizes that allocation. For example, we might think that GAN research sets p = 98 (providing 2 cents to alignment) while interpretability research sets p = 10 (providing 90 cents to alignment).
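In code the model is almost nothing, which is part of the point; the p values below are just the guesses from the paragraph above, not measurements.

```python
# Minimal sketch of the dollar-splitting intuition: each research area is
# summarized by p, the pennies (out of 100) that go to capabilities.
def split_dollar(p: int) -> tuple[int, int]:
    """Return (pennies to capabilities, pennies to alignment)."""
    assert 0 <= p <= 100
    return p, 100 - p

portfolio = {"GAN research": 98, "interpretability": 10}
for area, p in portfolio.items():
    cap, align = split_dollar(p)
    print(f"{area}: {cap}c to capabilities, {align}c to alignment")
```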
Is this remotely useful? This is a really rough model (you might think it's more of a Venn diagram, and that this model doesn't provide a way of reasoning about the double counting problem).
A task: rate research areas, even whole agendas, with such a value p. Many people may disagree with my example assignments to GANs and interpretability, or think both of those are too broad.
What are some alternatives to the splitting a dollar intuition?
To say something is capabilities-prone is less to say a dollar has been cleanly split, and more to say that there are some dynamics that tend toward, or get pushed in, different directions. Perhaps I want some sort of fluid metaphor instead.
Question your argument as your readers will—thoughts on chapter 10 of Craft of Research
Three predictable disagreements are
There are causes in addition to the one you claim
What about these counterexamples?
I don’t define X as you do, to me X means...
There are roughly two kinds of queries readers will have about your argument
intrinsic soundness—“challenging the clarity of a claim, relevance of reasons, or quality of evidence”
extrinsic soundness—“different ways of framing the problem, evidence you’ve overlooked, or what others have written on the topic.”
The idea is to anticipate, acknowledge, and respond to both kinds of questions. This is the path to making an argument that readers will trust and accept.
Voicing too many hypothetical objections up front can paralyze you. Instead, what you should do before anything else is focus on what you want to say. Give that some structure, some meat, some life. Then, an important exercise is to imagine readers’ responses to it.
I think cleaving these into two highly separated steps is an interesting idea, doing this with intention may be a valuable exercise next time I’m writing something.
View your argument through the eyes of someone who has a stake in a different outcome, someone who wants you to be wrong.
The authors provide some questions about your problem from a possible reader:
Why is your practical/conceptual solution better than others?
Then, they provide some questions about your support from a possible reader.
“I want to see a different kind of evidence” i.e. hard numbers over anecdotes / real people over cold numbers
“It isn’t accurate”
“It isn’t precise enough”
“It isn’t current”
“It isn’t representative”
“It isn’t authoritative”
“You need more evidence”
It builds credibility to play defense: to recognize your own argument’s limitations. It builds even more credibility to play offense: to explore alternatives to your argument and bring them into your reasoning. If you can, you might develop those alternatives in your own imagination, but more likely you’d like to find alternatives in your sources.
Often your readers will be like your sources' authors; sometimes they may even include them.
What is the perfect amount of objections to acknowledge? Acknowledging too many can distract readers from the core of your argument, while acknowledging too few is a signal of laziness or even disrespect. You need to narrow your list of alternatives or objections by subjecting them to the following priorities
plausible charges of weaknesses that you can rebut
alternative lines of argument important in your field
alternative conclusions that readers want to be true
alternative evidence that readers know
important counterexamples that you have to address.
What if your argument is flawed? The best thing to do is candidly acknowledge the issue and respond that...
the rest of your argument more than balances the flaw
while the flaw is serious, more research will show a way around it
while the flaw makes it impossible to accept your claim fully, your argument offers important insight into the question and suggests what a better answer would need.
It is wise to build up good faith by acknowledging questions you can’t answer. Concessions are often interpreted as positive signals by the reader.
It is important for your responses to acknowledgments to be subordinate to your main point, or else the reader will miss the forest for the trees.
Remember to make an intentional decision about how much credence to give to an objection or alternative. Weaker ones imply weaker credences, which in turn imply less effort in your acknowledgment and response.
there's a gap in my inside view of the problem: part of me thinks that capabilities progress such as out-of-distribution robustness or the 4 tenets described in Open Problems in Cooperative AI is necessary for AI to be transformative, i.e. a prereq of TAI, and another part of me thinks AI will be x-risky and unstable if it progresses along other aspects but not along the axis of those capabilities.
There's a geometry here: transformative / not transformative crossed with dangerous / not dangerous.
To have an inside view I must be able to adequately navigate between the quadrants with respect to outcomes, interventions, etc.
If something can learn fast enough, then its out-of-distribution performance won't matter as much. (OOD performance will still matter, but it'll have less to learn where it's good, and more to learn where it's not.*)
*Although generalization ability seems like the reason learning matters. So I see why it seems necessary for ‘transformation’.
Good arguments—notes on Craft of Research chapter 7
Arguments are made up of 5 parts.
This can be modeled as a conversation with readers, where the reader prompts the writer to take the next step on the list.
Claim ought to be supported with reasons. Reasons ought to be based on evidence. Arguments are recursive: a part of an argument is an acknowledgment of an anticipated response, and another argument addresses that response. Finally, when the distance between a claim and a reason grows large, we draw connections with something called warrants.
The logic of warrants proceeds in generalities and instances. A general circumstance predictably leads to a general consequence, and if you have an instance of the circumstance you can infer an instance of the consequence.
Arguing in real life papers is complexified from the 5 steps, because
Claims should be supported by two or more reasons
A writer can anticipate and address numerous responses.
As I mentioned, arguments are recursive, especially in the anticipated response stage, but also each reason and warrant can necessitate a subargument.
thoughts on chapter 9 of Craft of Research
We saw previously that claims ought to be supported with reasons, and reasons ought to be based on evidence. Now we will look closer at reasons and evidence.
Reasons must be in a clear, logical order. Atomically, readers need to buy each of your reasons, but compositionally they need to buy your logic. Storyboarding is a useful technique for arranging reasons into a logical order: physical arrangements of index cards, or some DAG-like syntax. Here, you can list evidence you have for each reason or, if you’re speculating, list the kind of evidence you would need.
When storyboarding, you want to read out the top level reasons as a composite entity without looking at the details (evidence), because you want to make sure the high-level logic makes sense.
I think there is a contract between you and the reader. You must agree to cite sources that are plausibly truthful, and your reader must agree to accept that these sources are reliable. A diligent and well-meaning reader can always second-guess whether, for instance, the bureau of subject matter statistics is collecting and reporting data correctly, but at a certain point this violates the social contract. If they're genuinely curious or concerned, it may fall on them to investigate the source, not on you. The bar you need to meet is that your sources are plausibly trustworthy. The book doesn't talk much about this contract, so there's little I can say about what "plausible" means.
Sometimes you have to be extra careful to distinguish reasons from evidence: a (<claim>, <reason>, <evidence>) tuple is subject to regress in the latter two components, (A, B, C) may need to be justified by (B, C, D) and so on. The example given of this regress is if I told you (american higher education must curb escalating tuition costs, because the price of college is becoming an impediment to the american dream, today a majority of students leave college with a crushing debt burden). In the context of this sentence, "a majority of students..." is evidence, but it would be reasonable to ask for more specifics. In principle, any time information is compressed it may be reasonable to ask for more specifics. A new tuple might look like (the price of college is becoming an impediment to the american dream, because today a majority of students leave college with a crushing debt burden, in 2013 nearly 70% of students borrowed money for college with loans averaging $30000...). The third component is still compressing information, but it's not in the contract between you and the reader for the reader to demand the raw spreadsheet, so this second tuple might be a reasonable stopping point of the regress.
Sometimes you have to be careful to distinguish evidence from reports of it. Again, because we are necessarily dealing with compressed information, we can't often point directly to evidence. Even a spreadsheet, rather than summary statistics of it, is a compression of the phenomena in base reality that it tracks.
There are criteria you want to screen your evidence against:
sufficient
representative
accurate
precise
authoritative
Being honest about the reliability and prospective accuracy of evidence is always a positive signal. Evidence can be either too precise or not precise enough. The women in one or two of Shakespeare's plays do not represent all his women; they are not representative. Figure out what sorts of authority signals are considered credible in your community, and seek to emulate them.
Sources—notes on Craft of Research chapters 5 and 6
Primary, secondary, and tertiary sources
The distinction between primary and secondary sources comes from 19th century historians, and the idea of tertiary sources came later. The boundaries can be fuzzy, and are certainly dependent on the task at hand.
I want to reason about what these distinctions look like in the alignment community, and whether or not they’re important.
The rest of chapter five is about how to use libraries and information technologies, and evaluating sources for relevance and reliability.
Chapter 6 starts off with the kind of thing you should be looking for while you read
Look for creative agreement
Offer additional support. You can offer new evidence to support a source’s claim.
Confirm unsupported claims. You can prove something that a source only assumes or speculates about.
Apply a claim more widely. You can extend a position.
Look for creative disagreement
Contradictions of kind. A source says something is one kind of thing, but it’s another.
Part-whole contradictions. You can show that a source mistakes how the parts of something are related.
Developmental or historical contradictions. You can show that a source mistakes the origin or development of a topic.
External cause-effect contradictions. You can show that a source mistakes a causal relationship.
Contradictions of perspective. Most contradictions don’t change a conceptual framework, but when you contradict a “standard” view of things, you urge others to think in a new way.
The rest of chapter 6 is a few more notes about what you’re looking for while reading (evidence, reasons), how to take notes, and how to stay organized while doing this.
The alignment community
I think I see the creative agreement modes and the creative disagreement modes floating around in posts. Would it be more helpful if writers decided on one or two of these modes before sitting down to write?
Moreover, what is a primary source in the alignment community? Surely if one is writing about inner alignment, a primary source is the Risks from Learned Optimization paper. But what are Risks’ primary, secondary, tertiary sources? Does it matter?
Now look at Arbital. Arbital started off to be a tertiary source, but articles that seemed more like primary sources started appearing there. I remember distinctly thinking "what's up with that?" It struck me as awkward for Arbital to change its identity like that, but I end up thinking about and citing the articles that seem more like primary sources.
There's also the problem that stuff in the memeplex that isn't written down is the real "primary" source, while the first person who happens to write it down looks like they're writing a primary source when in fact what they're doing is really more like writing a secondary or even tertiary source.
Yesterday I quit my job for direct work on epistemic public goods! Day one of direct work trial offer is April 4th, and it’ll take 6 weeks after that to know if I’m a fulltime hire.
I’m turning down
raise to 200k/yr usd
building lots of skills and career capital that would give me immense job security in worlds where investment into one particular blockchain doesn’t go entirely to zero
having fun on the technical challenges
for
confluence of my skillset and a theory of change that could pay huge dividends in the epistemic public goods space
0.35x paycut from my upcoming raise
uncertainty of it being a trial offer.
having fun on the technical challenges
I'm flagging this in such detail to give you strength if you're ever reasoning about your risk tolerance and your goals. Just remember: "look at what quinn did!"
nonprosaic ai will not be on short timelines
I think a property of my theory of change is that academic and commercial speed is a bottleneck. I recently realized that my mass assignment for timelines synchronized with my mass assignment for the prosaic/nonprosaic axis. The basic idea: say a radical new paper that blows up and supplants the entire optimization literature gets pushed to the arxiv tomorrow, signaling the start of some paradigm that we would call nonprosaic. The lag time for academics and industry to figure out what's going on, to figure out how to build on that result, and for developer ecosystems to form would all compound to take us outside of what we would call "short timelines".
How flawed is this reasoning?
The reasoning assumes that ideas are first generated in academia and don't arise inside of companies. With DeepMind outperforming the academic protein folding community when protein folding isn't even DeepMind's main focus, I consider it plausible that new approaches arise within a company and only get released publicly when they are strong enough to have an effect.
Even if there's a paper, most radical new papers get ignored by most people, and it might be that in the beginning only one company takes the idea seriously and doesn't talk about it publicly to keep a competitive edge.
That’s totally fair, but I have a wild guess that the pipeline from google brain to google products is pretty nontrivial to traverse, and not wholly unlike the pipeline from arxiv to product.
How short is “short” for you?
Like, AlexNet was 2012, DeepMind patented deep Q learning in 2014, the first TensorFlow release was 2015, the first PyTorch release was 2016, the first TPU was 2016, and by 2019 we had billion-parameter GPT-2 …
So if you say “Short is ≤2 years”, then yeah, I agree. If you say “Short is ≤8 years”, I think I’d disagree, I think 8 years might be plenty for a non-prosaic approach. (I think there are a lot of people for whom AGI in 15-20 years still counts as “short timelines”. Depends on who you’re talking to, I guess.)
I should’ve mentioned in OP but I was lowkey thinking upper bound on “short” would be 10 years.
I think developer ecosystems are incredibly slow (longer than ten years for a new PL to gain penetration, for instance). I guess under a singleton “one company drives TAI on its own” scenario this doesn’t matter, because tooling tailored for a few teams internal to the same company is enough which can move faster than a proper developer ecosystem. But under a CAIS-like scenario there would need to be a mature developer ecosystem, so that there could be competition.
I feel like 7 years from AlexNet to the world of PyTorch, TPUs, tons of ML MOOCs, billion-parameter models, etc. is strong evidence against what you’re saying, right? Or were deep neural nets already a big and hot and active ecosystem even before AlexNet, more than I realize? (I wasn’t paying attention at the time.)
Moreover, even if not all the infrastructure of deep neural nets transfers to a new family of ML algorithms, much of it will. For example, the building up of people and money in ML, the building up of GPU / ASIC servers and the tools to use them, the normalization of the idea that it’s reasonable to invest millions of dollars to train one model and to fab ASICs tailored to a particular ML algorithm, the proliferation of expertise related to parallelization and hardware-acceleration, etc. So if it took 7 years from AlexNet to smooth turnkey industrial-scale deep neural nets and billion-parameter models and zillions of people trained to use them, then I think we can guess <7 years to get from a different family of learning algorithms to the analogous situation. Right? Or where do you disagree?
No you’re right. I think I’m updating toward thinking there’s a region of nonprosaic short-timelines universes. Overall it still seems like that region is relatively much smaller than prosaic short-timelines and nonprosaic long-timelines, though.
Excellence and adequacy
I asked a friend whether I should TA for a codeschool called ${{codeschool}}.
A hidden claim there that I would soak up the pursuit of non-excellence by proximity or osmosis isn’t what’s interesting (though I could see that turning out either way). What’s interesting is the value of non-excellence, which I’ll call adequacy.
${{codeschool}} in this case is effective and impactful at putting butts in seats at companies, and is thereby responsible for some negligible slice of economic growth. Its students and instructors are plentiful with the virtue of getting things done; do they really need the virtue of high craftsmanship? The student who reads SICP and TAPL because they're pursuing mastery over the very nature of computation is strictly less valuable to the economy than the student who reads react tutorials because they're pursuing some cash.
Obviously, my friend who was telling me this was of the SICP/TAPL type. In software, this is problematic: lisp and type theory will sharpen your thinking about the nature of computation, but will they sharpen your thinking about the social problem of steering a team? From an employer's perspective, it is naive to prefer excellence over adequacy; it is much wiser to saddle the excellent person with the burden of proving that they won't get bored easily.
Hufflepuffs can go far, and the fuel is adequacy. Enough competence to get it done; any more is egotistical, a sunk cost.
But what if it’s not about industry/markets, what if it’s about the world’s biggest problems? Don’t we want people who are more competent than strictly necessary to be working on them? Maybe, maybe not.
Related: explore/exploit, become great/become useful
For a long time I’ve operated in the excellence mindset: more energy for struggling with textbooks than for exploiting the skills I already have to ship projects and participate in the real world. Thinking it might be good to shift gears and flex my hufflepuff virtues more.
Seems to me that on the market there are very few jobs for the SICP types.
The more meta something is, the less of it is needed. If you can design an interactive website, there are thousands of job opportunities for you, because thousands of companies want an interactive website, and somehow they are willing to pay for reinventing the wheel. If you can design a new programming language and write a compiler for it… well, it seems the world already has too many different programming languages, but sure, there is a place for maybe a dozen more. The probability of success is very small even if you are a genius.
The best opportunity for developers who think too meta is probably to design a new library for an already popular programming language, and hope it becomes popular. The question is how exactly you plan to get paid for that.
Probably another problem is that it requires intelligence to recognize intelligence, and it requires expertise to recognize expertise. The SICP type developer seems to most potential employers and most potential colleagues as… just another developer. The company does not see individual output, only team output; it does not matter that your part of code does not contain bugs, if the project as a whole does. You cannot use solutions that are too abstract for your colleagues, or for your managers. Companies value replaceability, because it is less fragile and helps to keep developer salaries lower than they might be otherwise. (In theory, you could have a team full of SICP type developers, which would allow them to work smarter, and yet the company would feel safe. In practice, companies can’t recognize this type and don’t appreciate it, so this is not going to happen.)
Again, probably the best position for a SICP type developer in a company would be to develop some library that the rest of the company would use. That is, a subproject of a limited size that the developer can do alone, so they are not limited in the techniques they use, as long as the API is comprehensible. Ah, but before you are given such opportunity, you usually have to prove yourself in the opposite type of work.
Sometimes I feel like having a university for software developers just makes them overqualified for the market. A vocational school focusing on the current IT hype would probably make most companies more happy. Also the developers, though probably only in short term, before a new hype comes and they face the competition of a new batch of vocational school graduates trained for the new hype. A possible solution for the vocational school would be to also offer retraining courses for their former students, like three or six months to become familiar with the new hype.
Rats and EAs should help with the sanity levels in other communities
Consider politics. You should take your political preferences/aesthetics, go to the tribes that are based on them, and help them be more sane. In the politics example, everyone’s favorite tribe has failure modes, and it is sort of the responsibility of the clearest-headed members of that tribe to make sure that those failure modes don’t become the dominant force of that tribe.
Speaking for myself, having been deeply in an activist tribe before I was a rat/EA, I regret I wasn’t there to help the value-aligned and clear-headed over the last few years while some of that tribe’s worst pathologies made gains. Now it seems almost too late for them.
Actionably, I want you to
Write for journals, forums, blogospheres, zines outside of rat and EA.
Dump time into tribes that might not be the state of the art in sanity, find the most sane people there, and find ways to support them.
I speak not (well, not entirely) from my cognitive dissonance at having abandoned an aesthetic I still have feelings for. I think
Tribes besides ours are what make up the overall sanity waterline
It’s ok to set aside humility and imposter syndrome and say “I can actionably be a resource of sanity for someone else”, even tho you personally think you have a lot of work to do at getting less wrong yourself. I would say the opposite of the “affix your mask before helping others” comic strip: find synergies between mentoring others in the art and continuing to master the art yourself.
We basically want every tribe to believe true things and think clearly about their values. Yes, I’m obviously concerned that this will lead to some of my fellow rats taking my advice, applying it to a political aesthetic I find barbaric, and helping that political aesthetic win—I think this concern is basically fine because on net I expect more true beliefs and clear thinking about values to make the meaning of winning for each tribe converge on something that isn’t zero-sum.
I should also mention that I expect an externality from this effort to be an increase in the intrarat / intraEA intellectual diversity.
But what if that makes my tribe lose the political battle?
I mean, if rationality actually helped win political fights, by the power of evolution we already would have been all born rational...
1. Evolution does not magically get from A to B instantly.
2. Evolution does not necessarily care about X for many values of X.
This can include: winning political fights, whether or not nukes are built and many other things.
Claims—thoughts on chapter eight of Craft of Research
Broadly, the two kinds of claims are conceptual and practical.
Conceptual claims ask readers not to act, but to understand. The flavors of conceptual claim are as follows:
Claims of fact or existence
Claims of definition and classification
Claims of cause and consequence
Claims of evaluation or appraisal
There’s essentially one flavor of practical claim
Claims of action or policy.
If you read between the lines, you might notice that a kind of claim of fact or cause/consequence is that a policy works or doesn’t work to bring about some end. In this case, we see that practical claims deal in ought or should. There is a difference, perhaps subtle perhaps not, between “X brings about Y” and “to get Y we ought to X”.
Readers expect a claim to be specific and significant. You can evaluate your claim along these two axes.
To make a claim specific, you can use precise language and explicit logic. Usually, precision comes at the cost of a higher word count. To gain explicitness, use words like “although” and “because”. Note some fields might differ in norms.
You can think of the significance of a claim as the amount it asks readers to change their minds, or I suppose even their behavior.
Avoid arrogance.
Two ways of avoiding arrogance are acknowledging limiting conditions and using hedges to limit certainty.
Don't run aground: there are innumerable caveats you could think of, so it's important to limit yourself to the most relevant ones or the ones readers would most plausibly think of. Limiting certainty with hedging is illustrated by Watson and Crick, who published what would become a high-impact result with "We wish to suggest … in our opinion … we believe … Some … appear".
It is not obvious how to walk the line between hedging too little and hedging too much.
This may be context-dependent. Different countries probably have different cultural norms. Norms may differ for higher-status and lower-status speakers. Humble speech may impress some people, but others may perceive it as a sign of weakness. Also, is your audience fellow scientists or are you writing a popular science book? (More hedging for the former, less hedging for the latter.)
notes (from a very jr researcher) on alignment training pipeline
Training for alignment research is one part competence (at math, cs, philosophy) and another part having an inside view / gears-level model of the actual problem. Competence can be outsourced to universities and independent study, but inside view / gears-level model of the actual problem requires community support.
A background assumption I’m working with is that training as a longtermist is not always synchronized with legible-to-academia training. It might be the case that jr researchers ought to publication-maximize for a period of time even if it’s at the expense of their training. This does not mean that training as a longtermist is always or even often orthogonal to legible-to-academia training, it can be highly synchronized, but it depends on the occasion.
It's common to query what relative ratio should be assigned to competence building (textbooks, exercises) vs. understanding the literature (reading papers and alignment forum), but perhaps there is a third category: honing your threat model and theory of change.
I spoke with a sr researcher recently who roughly said that a threat model with a theory of change is almost sufficient for an inside view / gears-level model. I'm working from the theory that a honed threat model and theory of change are important for calculating interventions. See Alice and Bob in Rohin's faq.
I've been trying to hone my inside view / gears-level model of the actual problem by doing weekly exercises with a group of peers. But the sr researcher I spoke to said mentorship trees of 1:1 time, not exercises that jrs can just do independently or in groups, are the only way it can happen. This is troublesome to me, as the bottleneck becomes mentors' time. I'm not so much worried about the hopefully merit-based process of mentors figuring out who's worth their time as I am about the overall throughput. It gets worse, though: what if the process is credentialist?
Take a look at the Critch quote from the top of Rohin’s faq:
Is he implicitly saying that he offloads some of the filtering work to admissions people at top schools? Presumably people from non-top schools are also emailing him, but he doesn’t mention them.
I'd like to see an argument that admissions people at top schools are trustworthy; no one has made one to my knowledge. I think sometimes the movement falls back on status games, unless there is some intrinsic benefit to "top schools" (besides building social power/capital) that everyone is aware of. (Indeed, if someone's argument is that they identified a lever that requires a lot of social power/capital, then maybe they can put that top school on their resume to use; but if the lever is strictly high quality useful research (instead of, say, steering a federal government) this doesn't seem to apply.)
I don’t think Critch’s saying that the best way to get his attention is through cold emails backed up by credentials. The whole post is about him not using that as a filter to decide who’s worth his time but that people should create good technical writing to get attention.
Critch’s written somewhere that if you can get into UC Berkeley, he’ll automatically allow you to become his student, because getting into UC Berkeley is a good enough filter.
Where did he say that? Given that he’s working at UC Berkeley I would expect him to treat UC Berkeley students preferentially for reasons that aren’t just about UC Berkeley being able to filter.
It’s natural that you can sign up for one of the classes he teaches at UC Berkeley by being a student of UC Berkeley.
Being enrolled into MIT might be just as hard as being enrolled into UC Berkeley, but it doesn't give you the same access to courses taught at UC Berkeley by its faculty.
http://acritch.com/ai-berkeley/
and also
Okay, he does speak about using Berkeley as a filter but he doesn’t speak about taking people as his student.
It seems about helping people in UC Berkeley to connect with other people in UC Berkeley.
Methods, famously, includes the line “I am a descendant of the line of Bacon”, tracing empiricism to either Roger (13th century) or Francis (16th century) (unclear which).
Though a cursory wikiing shows an 11th century figure providing precedents for empiricism! Alhazen, or Ibn al-Haytham, worked mostly on optics apparently, but had some meta-level writings about the scientific method itself. I found this shockingly excellent quote
Should we do more to celebrate Alhazen as an early rationalist?
New discord server dedicated to multi-multi delegation research
DM me for invite if you’re at all interested in multipolar scenarios, cooperative AI, ARCHES, social applications & governance, computational social choice, heterogeneous takeoff, etc.
(side note I’m also working on figuring out what unipolar worlds and/or homogeneous takeoff worlds imply for MMD research).
Questions and Problems—thoughts on chapter 4 of Craft of Research
Last time we discussed the difference between information and a question or a problem, and I suggested that the novelty-satisfied mode of information presentation isn't as good as addressing actual questions or problems. In chapter 3, which I have not typed up thoughts about, a three step procedure is introduced:
Topic: “I am studying …”
Question: ”… because I want to find out what/why/how …”
Significance: ”… to help my reader understand …”

As we elaborate on the different kinds of problems, we will vary this framework and launch exercises from it.
The basic feedback loop introduced in this chapter relates practical with conceptual problems and relates research questions with research answers.
What should we do vs. what do we know—practical vs conceptual problems
Opposite eachother in the loop are practical problems and conceptual problems. Practical problems are simply those which imply uncertainty over decisions or actions, while conceptual problems are those which only imply uncertainty over understanding. Concretely, your bike chain breaking is a practical problem because you don’t know where to get it fixed, implying that the research task of finding bike shops will reduce your uncertainty about how to fix the bike chain.
Conditions and consequences
The structure of a problem is that it has a condition (or situation) and the (undesirable) consequences of that condition. The consequences-costs model of problems holds both for practical problems and conceptual problems, but comes in slightly different flavors. In the practical problem case, the condition and costs are immediate and observed. However, a chain of “so what?” must be walked.
One person’s cost may be another person’s condition, so when stating the cost you ought to imagine a socratic “so what?” voice, forcing you to articulate more immediate costs until the socratic voice has to really reach in order to say that it’s not a real cost.
The conceptual problem case is where intangibles play in. The condition in that case is always the simple lack of knowledge or understanding of something. The cost in that case is simple ignorance.
Modus tollens
A helpful exercise: if you find yourself saying "we want to understand x so that we can y", try flipping to "we can't y if we don't understand x". This shifts the burden onto the reader to provide ways in which we can y without understanding x. You can do this iteratively: come up with z's which you can't do without y, and so on.
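Reading "we want to understand x so that we can y" as "understanding x is necessary for y", the flip is just contraposition; in LaTeX:

```latex
\big(y \Rightarrow \mathrm{understand}(x)\big)
\;\iff\;
\big(\lnot \mathrm{understand}(x) \Rightarrow \lnot y\big)
```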
Pure vs. applied research
Research is pure when the significance stage of the topic-question-significance frame refers only to knowing, not to doing. Research is applied when the significance step refers to doing. Notice that the question step, even in applied research, refers to knowing or understanding.
Connecting research to practical consequences
You might find that the significance stage is stretching a bit to relate the conceptual understanding gained from the question stage. Sometimes you can modify and add a fourth step to the topic-question-significance frame and make it into topic-conceptual question-conceptual significance-possible practical application. Splitting significance into two helps you draw reasonable, plausible applications. A claimed application is a stretch when it is not plausible. Note: the authors suggest that there is a class of conceptual papers in which you want to save practical implications entirely for the conclusion, that for a certain kind of paper practical applications do not belong in the introduction.
AI safety
One characteristic of AI safety that makes it difficult both to do and to interface with is that the chains of "so what?" are often very long. The path from deconfusion research to everyone dying or not dying feels like a stretch if not done carefully, and has a lot of steps when done carefully. As I mentioned in my last post, it's easy to get sucked into the "novel information for its own sake" regime, at least as a reader. More practically oriented approaches are perhaps those that seek new regimes for how to even train models, where the "so what?" is answered "so we have dramatically fewer OODR failures" or something. The condition-costs framework seems really beneficial for articulating alignment agendas and directions.
Misc
“Researchers often begin a project without a clear idea of what the problem even is.”
Look for problems as you read. When you see contradictions, inconsistencies, incomplete explanations tentatively assume that readers would or should feel the same.
Ask not “Can I solve it?” but “will my readers think it ought to be solved?”
“Try to formulate a question you think is worth answering, so that down the road, you’ll know how to find a problem others think is worth solving.”
Positive and negative longtermism
I’m not aware of a literature or a dialogue on what I think is a very crucial divide in longtermism.
In this shortform, I'm going to take a polarity approach. I'm going to bring each pole to its extreme, probably each beyond positions that are actually held, because I think median longtermism, or the longtermism described in the Precipice, is a kind of average of the two.
Negative longtermism is saying “let’s not let some bad stuff happen”, namely extinction. It wants to preserve. If nothing gets better for the poor or the animals or the astronauts, but we dodge extinction and revolution-erasing subextinction events, that’s a win for negative longtermism.
In positive longtermism, such a scenario is considered a loss. From an opportunity cost perspective, the failure to erase suffering or bring agency and prosperity to 1e1000 comets and planets hurts literally as bad as extinction.

Negative longtermism is a vision of what shouldn't happen. Positive longtermism is a vision of what should happen.
My model of Ord says we should lean at least 75% toward positive longtermism, but I don’t think he’s an extremist. I’m uncertain if my model of Ord would even subscribe to the formation of this positive and negative axis.
What does this axis mean? I wrote a little about this earlier this year. I think figuring out what projects you're working on and who you're teaming up with strongly depends on how you feel about negative vs. positive longtermism. The two dispositions toward myopic coalitions are "do" and "don't". I won't attempt to claim which disposition is more rational or desirable, but I'll explore each branch.
When Alice wants future X and Bob wants future Y, but if they don't defeat the adversary Adam they will be stuck with future 0 (containing great disvalue), Alice and Bob may set aside their differences and choose to form a myopic coalition to defeat Adam, or not.

Form myopic coalitions. A trivial case where you would expect Alice and Bob to tend toward this disposition is if X and Y are similar. However, if X and Y are very different, Alice and Bob must each believe that defeating Adam completely hinges on their teamwork in order to tend toward this disposition, unless they're in a high trust situation where they each can credibly signal that they won't try to get a head start on the X vs. Y battle until 0 is completely ruled out.

Don't form myopic coalitions. A low trust environment where Alice and Bob each fully expect the other to try to get a head start on X vs. Y during the fight against 0 would tend toward the disposition of not forming myopic coalitions. This could lead to great disvalue if a project against Adam can only work via a team of Alice and Bob.

An example of such a low-trust environment is, if you'll excuse political compass jargon, bottom-lefts online debating internally the merits of working with top-lefts on projects against capitalism. The argument for coalition is that capitalism is a formidable foe and they could use as much teamwork as possible; the argument against coalition is historical backstabbing and pogroms when top-lefts take power and betray the bottom-lefts.
For a silly example, consider an insurrection against broccoli. The ice cream faction can coalition with the pizzatarians if they do some sort of value trade that builds trust, like the ice cream faction eating some pizza and the pizzatarians eating some ice cream. Indeed, the viciousness of the fight after broccoli is abolished may have nothing to do with the solidarity between the two groups under broccoli's rule. It may or may not be the case that the ice cream faction and the pizzatarians can come to an agreement about how best to increase value in a post-broccoli world. Civil war may follow revolution, or not.
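Here's a minimal expected-value sketch of Alice's side of the decision, with every number invented for illustration: trust is the probability that Bob holds off on the X vs. Y fight until 0 is ruled out, and y_value is how acceptable Bob's future is to Alice.

```python
# Toy model of the myopic-coalition choice from Alice's perspective.
# Alice values future X at 1.0 and future 0 (Adam wins) at 0.0.
def alice_should_coalition(p_win_alone: float, p_win_together: float,
                           trust: float, y_value: float) -> bool:
    # If Bob defects mid-fight (probability 1 - trust), assume Alice ends
    # up with future Y rather than X, since Bob got the head start.
    ev_together = p_win_together * (trust * 1.0 + (1 - trust) * y_value)
    ev_alone = p_win_alone * 1.0
    return ev_together > ev_alone

# Defeating Adam hinges on teamwork: coalition wins even at low trust.
print(alice_should_coalition(0.05, 0.6, trust=0.2, y_value=0.3))  # True
# Alice can plausibly win alone and trust is low: no coalition.
print(alice_should_coalition(0.5, 0.6, trust=0.2, y_value=0.3))   # False
```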
Now, while I don't support long reflection (TLDR I think a collapse of diversity sufficient to permit a long reflection would be a tremendous failure), I think elements of positive longtermism are crucial for things to improve for the poor or the animals or the astronauts. I think positive longtermism could outperform negative longtermism when it comes to finding synergies between the extinction prevention community and the suffering-focused ethics community. However, I would be very upset if I turned around in a couple years and positive longtermists were, like, the premiere face of longtermism. The reason for this is that once you admit positive goals, you have to deal with everybody's political aesthetics, like a philosophy professor's preference for a long reflection or an engineer's preference for moar spaaaace or a conservative's preference for retvrn to pastorality or a liberal's preference for intercultural averaging. A negative goal like "don't kill literally everyone" greatly lacks this problem. Yes, I would change my mind about this: if 20% of global defense expenditure were targeted at defending against extinction-level or revolution-erasing events, the neglectedness calculus would lead us to focus the comparatively small EA community on positive longtermism.
The takeaway from this shortform should be that quinn thinks negative longtermism is better for forming projects and teams.
The audience models of research—thoughts on Craft of Research chapter 2
Before considering the role you’re creating for your reader, consider the role you’re creating for yourself. Your broad options are the following
I’ve found some new and interesting information—I have information for you
I’ve found a solution to an important practical problem—I can help you fix a problem
I’ve found an answer to an important question—I can help you understand something better
The authors recommend assuming one of these three. There is of course a wider gap between information and the neighborhood of problems and questions than there is between problems and questions! Later on in chapter four the authors provide a graph illustrating problems and questions:
Practical problem -> motivates -> Research question -> defines -> Conceptual/research problem.

Information, when provided mostly for novelty, however, is not in this cycle. Information can be leveled at problems or questions, and plays a role in providing solutions or answers, but can also be for "its own sake".

I'm reminded of a paper/post I started but never finished, on providing a poset-like structure to capabilities. I thought it would be useful if you could give a precise ordering on a set of agents, to assign supervising/overseeing responsibilities. Looking back, providing this poset would just be a cool piece of information, effectively: I wasn't motivated by a question or problem so much as "look at what we can do". Yes, I can post-hoc think of a question or a problem that the research would address, but that was not my prevailing seed of a reason for starting the project.

Is the role of the researcher primarily a writing thing, though, applying mostly to the final draft? Perhaps it's appropriate for early stages of the research to involve multi-role drifting, even if it's better for the reader experience if you settle on one role in the end.
Additionally, it occurs to me that maybe the "I have information for you" mode is just a cheaper version of the question/problem modes. Sometimes I think of something that might lead to cool new information (either a theory or an experiment), and I'm engaged more by the potential for novelty than I am by the potential for applications.
I think I'd like to become more problem-driven: to derive possibilities for research from problems, and make sure I'm not just seeking novelty. At the end of the day, I don't think these roles are "equal"; I think the problem-driven role is the best one, the one we should aspire to.
The three reader roles complementing the three writer roles are
Entertain me
Help me solve my practical problem
Help me understand something better
It’s basically stated that your choice of writer role implies a particular reader role, 1 mapping to 1, 2 mapping to 2, and 3 mapping to 3.
Role 1 speaks to an important difficulty in the x-risk, EA, alignment community: how not to get drawn into the phenomenal sensation of insight when something isn't going to help you on a problem. At my local EA meetup I sometimes worry that the impact of our speaker events is low, because the audience may not meaningfully update even though they're intellectually engaged. Put another way, intellectual engagement is goodhartable; the sensation of insight can distract you from your resolve to shatter your bottlenecks and save the world if it becomes an end in itself. Should researchers who want to be careful about this avoid the first role entirely? Should the alignment literature look upon the first reader role as a failure mode? We talk about a lot of cool stuff, and it can be easy to be drawn in by the cool factor, like some of the non-EA rationalists I've met at meetups.
I’m not saying reader role number two absolutely must dominate, because it can diverge from deconfusion which is better captured by reader role number three.
Division of labor between reader and writer: writer roles do not always imply exactly one reader role
Isn't it the case that deconfusion / writer role three research can be disseminated to practically (as opposed to theoretically) minded people, who then turn question-answer into problem-solution? You can write in the question-answer regime, but there may be that (rare) reader who interprets it in the problem-solution regime! This seems to be an extremely good thing that we should find a way to encourage. In general, reading that drifts across multiple roles seems like the most engaged kind of reading.
Would there be a way of estimating the ratio of people within the amazon organization who are fanatical about same day delivery against how many are "just working a job"? Does anyone have a guess? My guess is that an organization of that size with a lot of cash only needs about 50 true fanatics; the rest can be "mere employees". What do yall think?
I can’t really think of any research bearing on this, and unclear how you’d measure it anyway.
One way to go might be to note that there is a wide (and weird) variance between the efficiency of companies: market pressures are slack enough that two companies doing, as far as can be told, the exact same thing in the same geographic markets with the same inputs might be almost 100% different (I think that was the range in the example of concrete manufacturing in one paper I read); a lot of that difference appears to be explainable by the quality of the management, and you can do randomized experiments in management coaching or intensity of management and see substantial changes in the efficiency of a company (Bloom, the other one, has a bunch of studies like this). Presumably you could try to extrapolate from the effects of individuals to company-wide effects, and define the goal of the 'fanatical' as something like 'maintaining top-10% industry-wide performance': if educating the CEO is worth X percentiles and hiring a good manager is worth 0.0Y percentiles and you have such and such a number of each, then multiply out to figure out what will bump you 40 percentiles from an imagined baseline of 50% to the 90% goal.
Another argument might be a more Fermi-estimate-style argument from startups. A good startup CEO should be a fanatic about something, otherwise they probably aren’t going to survive the job. So we can assume at least one fanatic. People generally talk about startups beginning to lose the special startup magic of agility, focus, and fanaticism at around Dunbar’s number of employees, like 300, or even less (eg Amazon’s two-pizza rule, which is I guess 6 people?). In the ‘worst’ case that the founder has hired 0 fanatics, that implies 1 fanatic can ride herd over no more than ~300 people; in the ‘best’ case that he’s hired dozens, then each fanatic can only cover for more like 2 or 3 non-fanatics. I’m not sure how we should count Amazon’s employees: do the warehouse workers, often temps, really count? They are so micro-managed and driven by the warehouse operation that they hardly seem even relevant to the question. I can’t quickly find that number, just totals, but let’s say there’s something like 100,000 non-warehouse-ish employees; at a 300:1 ratio, you’d need 333 fanatics, and at 3:1, 33,333. The former might be feasible, the latter not so much. (And that would explain why Amazon.com seems to be a gradually degrading shopping experience—so many ads! Why are there ads getting in my way when I’m trying to give you my money already, Amazon!)
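A minimal sketch of that Fermi arithmetic in Haskell; all the numbers are the guesses above, not data:

-- Fanatics needed if each fanatic can "cover" k non-fanatics.
fanaticsNeeded :: Double -> Double -> Double
fanaticsNeeded headcount coveragePerFanatic = headcount / coveragePerFanatic

main :: IO ()
main = do
  let nonWarehouse = 100000                -- guessed non-warehouse-ish headcount
  print (fanaticsNeeded nonWarehouse 300)  -- 'worst' case ratio: ~333 fanatics
  print (fanaticsNeeded nonWarehouse 3)    -- 'best' case ratio: ~33,333 fanatics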
I’m not sure “fanatical” is well-defined enough to mean anything here. I doubt there are any who’d commit terrorist acts to further same-day delivery. There are probably quite a few who believe it’s important to the business, and a big benefit for many customers.
You’re absolutely right that a lot of employees and contractors can be “mere employees”, not particularly caring about long-term strategy, customer perception, or the like. That’s kind of the nature of ALL organizations and group behaviors, including corporate, government, and social groupings. There’s generally some amount of influencers/selectors/visionaries, some amount of strategists and implementers, and a large number of followers. Most organizations are multidimensional enough that the same people can play different roles on different topics as well.
I don’t think it needs any true fanatics. It just needs incentives.
This isn’t to say there won’t be fanatics anyway. There probably aren’t many things that nobody can get fanatical about. This is even more true if they’re given incentives to act fanatical about it.
Sure, but the incentive structure needs continual maintenance to keep it aligned with or pointing at the goal, which naturally leads to the questions of how many people are needed to keep the structure pointing at the goal, and what the motivation of those people will be.
We need a name for the following heuristic. I think of it as one of those “tribal knowledge” things that get passed on like an oral tradition without being citeable in the sense of being part of a literature. If you come up with a name I’ll certainly credit you in a top-level post!
I heard it from Abram Demski at AISU ’21.
Suppose you’re either going to end up in world A or world B, and you’re uncertain about which one it’s going to be. Suppose you can pull lever LA, which will be worth 100 if you end up in world A, or you can pull lever LB, which will be worth 100 if you end up in world B. The heuristic is that if you pull LA but end up in world B, you do not want to have created disvalue; in other words, your intervention conditional on the belief that you’ll end up in world A should not screw you over in timelines where you end up in world B.
This can be fully mathematized by saying “if most of your probability mass is on ending up in world A, then obviously you’d pick a lever L such that V(L|A) is very high; just also make sure that V(L|B) >= 0, or that it creates an acceptably small amount of disvalue”, where V(L|A) is read “the value of pulling lever L if you end up in world A”.
Why are you specifying values of 100 or 0, and using fuzzy language like “acceptably small” for disvalue?
Is this based on “value” and “disvalue” being different dimensions, and thus incomparable? Wouldn’t you just include both in your prediction, and run it through your (best guess of) utility function and pick highest expectation, weighted by your probability estimate of which universe you’ll find yourself in?
100 and 0 in this context make sense, at least on my initial reading: arbitrarily chosen values in a decent range to work with quickly (akin to why people often work in percentages instead of 0..1).
It is—I’m going to say “often”, although I am aware this is suboptimal phrasing—often the case that you are confident in the sign of an outcome but not the magnitude of the outcome.
As such, you can often end up with discontinuities at zero.
Dropping the entire probability distribution of outcomes through your utility function doesn’t even necessarily have a closed-form result. In a universe where computation itself is a cost, finding a cheaper heuristic (and working through whether said heuristic has any particular basis or problems) can be valuable.
The heuristic in the grandparent comment is just what happens if you are simultaneously very confident in the sign of positive results, and have very little confidence in the magnitude of negative results.
I’m not sure I understand. If the lever is +100 in world A and −90 in world B, it seems like a good bet if you don’t know which world you’re in. Or is that what you mean by “acceptably small amount of disvalue”?
Obviously there are considerations downstream of articulating this. One is that when P(A) > P(B) but V(LA|A) < V(LB|B), it can be reasonable to hedge on ending up in world B even though it’s the less probable world.
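Here’s a minimal runnable sketch of the heuristic in Haskell; the levers, probabilities, and disvalue floor are made-up illustrations, not anything canonical:

import Data.List (maximumBy)
import Data.Ord (comparing)

data World = A | B deriving (Eq, Show)

-- A lever is scored by its value conditional on each world, V(L|w).
data Lever = Lever { leverName :: String, valueIn :: World -> Double }

-- The heuristic: among levers whose worst case stays above the disvalue
-- floor, pick the one that does best in the world you find likelier.
pickLever :: Double -> Double -> [Lever] -> Maybe Lever
pickLever pA disvalueFloor levers =
  case filter safe levers of
    [] -> Nothing
    ls -> Just (maximumBy (comparing (`valueIn` likely)) ls)
  where
    likely = if pA >= 0.5 then A else B
    safe l = min (valueIn l A) (valueIn l B) >= disvalueFloor

main :: IO ()
main = do
  let lA = Lever "LA" (\w -> if w == A then 100 else -5)  -- mild downside in B
      lB = Lever "LB" (\w -> if w == B then 100 else 0)   -- harmless in A
  print (fmap leverName (pickLever 0.7 (-10) [lA, lB]))   -- Just "LA"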
critiques and complaints
I think one of the most crucial meta-skills I’ve developed is honing my sense of who’s criticizing me vs. who’s complaining.
A criticism is actionable, and implicitly it’s often from someone who wants you to win. A complaint is when you can’t figure out how you’d actionably fix or improve something based on what you’re being told.
This simple binary story is problematic. If you’re not careful, it can empower you to ignore criticism you don’t like by providing a set of excuses. Sometimes it’s operationally impossible to distinguish a complaint from a criticism that runs so deep it unsettles your premises! I think people who are building things can be excused for ignoring advice if the only actionable way of accepting that advice is to completely overhaul their approach, for reasons of focus and other logistical concerns. If it’s that rare time in a project when you are going back to the drawing board and starting over, that’s definitely the time to mine complaints for useful insight.
Related: the legend of the Amazon customer in the 90s who was insatiably filling out customer feedback forms, to the point where 2000s-or-2010s Amazon named a boardroom after him. The idea was that this guy helped them improve a lot—surely it would have been easy to dismiss him as a complainer, but they didn’t; they found actionable advice within the complaints. Your ability to take something that isn’t intended to help you and isn’t actionable on its face, and mine it for actionable insight, can be very important. But for filtering, for attention, for sanity, dismissing something quickly because it doesn’t seem like it can help you or the project improve can be valid as well.
hmu for a Haskell job in decentralized finance. Super fun zero-knowledge proof stuff, great earning-to-give opportunity.
Are Schelling points the Occam’s razor of mechanism design?
Intuitively I think simplicity is a good explanation for a solution being converged upon.
Does anyone have any crisp examples that violate the Schelling point–Occam’s razor correspondence?
The anchoring effect is enough for a Schelling point; it doesn’t have to be a simple solution.
For instance, a new nation that wants to move away from dictatorship is automatically going to build a democracy with multiple independent arms (legislature, judiciary, executive), a constitution, periodic elections of representatives, etc.
They could choose to try a direct democracy, or change the term from 5 years to 1 year, or have public elections for the judiciary too, or any other deviation from how democracies usually run, but they won’t. Fear of the unknown + no creativity or motivation will be sufficient for them to copy existing countries’ democratic structure.
Disvalue via interpersonal expected value and probability
My deontologist friend just told me that treating people like investments is no way to live. The benefits of living by that take are that your commitments are more binding and that you factor out uncertainty: when you treat people like investments, you always think “well, someday I’ll no longer be creating value for this person and they’ll drop me from their life”, and it’s hard to make long-term plans living like that.
I’ve kept friends around out of loyalty to what we shared 5–10 years ago while questioning any value prop based on expected value theory or probability theory, so I’m not, like, super guilty of this or anything. But overall I do bring expected value theory and probability theory into interpersonal matters, and I don’t object when others do the same to me. Though it’s hard sometimes, I think it’s basically fine if someone drops me because I’m not adding value for them. An edge case in the opposite direction is being obligated to build deep friendships with every acquaintance, which is also a little silly. But a sweet spot, like a marriage or another way of teaming up (like for a project), might meaningfully call for a suspension of expected value theory and probability theory.
One thing to be careful about in such decisions—you don’t know your own utility function very precisely, and your modeling of both future interactions and the value you derive from them is EXTREMELY lossy.
The best argument for deontological approaches is that you’re running on very corrupt hardware, and rules that have evolved and been tested over a long period of time are far more trustworthy than your ad-hoc analysis which privileges obvious visible artifacts over more subtle (but often more important) considerations.
Imo, choosing to disconnect from people who are no longer providing any value to you is just a healthy thing to do; even a deontologist should agree with that.
I may refine this into a formal bounty at some point.
I’m curious whether censorship would actually work in the context of blocking deployment of superpowerful AI systems. Sometimes people will mention “matrix multiplication” as a sort of goofy edge case, which isn’t very plausible, but that doesn’t mean there couldn’t be actual political pressure to censor something. A more plausible example would be attention. Say the government threatens soft power against arXiv if they don’t pull “Attention Is All You Need”, or threatens soft power against Harvard if their linguistics department doesn’t pull the PyTorch-annotated “Attention Is All You Need”. By this point, it goes without saying that black-hat hackers writing down the equations would face serious consequences if they got caught. Now instead of attention, imagine some more galaxy-brained paper or insight that gets published in 2028 and is an actual missing ingredient to advanced AI (assuming you’re not one of the people who think “Attention Is All You Need” already is that paper).
While it’s certainly a research project to look at the pros and cons of this approach to safety from AI, I think before that we need someone to profile the efficacy of technological censorship through history, to arrive at an estimate of how well this would work, i.e., how well it would actually slow or stop the propagation of the information, and how well it would slow or stop the deployment of systems based on that information.
My guess is that the ideal person to execute on this bounty would be some patent law nerd, though I’m sure a variety of types of nerd could do a great job.
You’ll need a government body full of people who are aligned in their thinking, where no one defects.
Also, Yudkowsky’s response to this would probably be that it isn’t enough to censor the idea the first time it’s created; someone else will just discover another (or the same) path to AGI independently. See pivotal act.
any literature on estimates of social impact of businesses divided by their valuations?
the idea that dollars are a proxy for social impact is neat, but it leaves a lot of room for Goodhart, and I think it’s plausible that they diverge entirely in some cases. It would be useful to know, if it’s possible to know, what’s going on here.
there are paid tools that estimate this, probably poorly
thinking about this comment
Why have I heard about Tyson investing in lab-grown meat, but I haven’t heard about big oil investing in renewables?
Tyson’s basic insight here is not to identify as “an animal agriculture company”. Instead, they identify as “a feeding people company”. (Which happens to align with doing the right thing, conveniently!)
It seems like big oil is making a tremendous mistake here. Do you think oil execs go around saying “we’re an oil company”? When they could instead be going around saying “we’re a powering-stuff company”. Being a powering-stuff company means you have fuel-source indifference!
I mean, if you look at all the money they had to spend on disinformation and lobbying, isn’t it insultingly obvious to say “just invest that money into renewable research and markets instead”?
Is there dialogue on this? Also, have any members of “big oil” in fact done what I’m suggesting, and I just didn’t hear about it?
Gonna cc this to my EA Forum shortform.
Yes, this is more about you not hearing about it.
Shell Has A Bigger Clean Energy Plan Than You Think — CleanTechnica Interview
BP Bets Future on Green Energy, but Investors Remain Wary
It seems that Tyson invested 150 million into a fund for new food solutions.
In contrast, Exxon invested 600 million in algae biofuels back in 2009, and more afterward.
I do vaguely remember hearing of big oil doing that, though perhaps not as much as meat producers do with lab-grown meat; try looking into it.
1. Might be a little bit harder in that industry.
2. Are they in charge (of that)? Who chose them?
you’re most likely right about it being harder in that industry!
I don’t think they need permission or an external mandate to do the right thing!
The main problem is that prior investment into the oil method of powering stuff doesn’t translate into having a comparative advantage in a renewable way of powering stuff. They want a return on their existing massive investments.
While this looks superficially like a sunk cost fallacy, it isn’t. If a comparatively small investment (mere billions) can ensure continued returns on their trillions of sunk capital for another decade, it’s worth it to them.
Investment into renewable powering stuff would require substantially different skill sets in employees, in very different locations, and highly non-overlapping investment. At best, such an endeavour would constitute a wholly owned subsidiary that grows while the rest of the company withers. At worst, a parasite that hastens the demise of the parent while eventually failing in the face of competition anyway.
I’ve had a background assumption in my interpretation of and beliefs about reward functions for as long as I can remember (i.e. since first reading the Sequences) that I suddenly realized isn’t written down anywhere. Over the last two years I’ve gained enough experience writing Coq to suggest a convenient way of framing it.
Computational vs axiomatic reward functions
Computational vs axiomatic in proof engineering
A proof engineer calls a proposition computational if its proof can be broken down into parts.
For example,
a + (b + c) = (a + b) + c
is computational because you can think of its proof as the application of the associativity lemma followed by the application of something called a “refl”, the fundamental termination of a proof involving equality. Passing around the associativity lemma is in a sense passing around its proof, which, assuming a is inductive (take nat: zero and successor), is an application of nat’s induction principle, unpacking the recursive definition of +, and so on.
In other words, if my adversary asks “why is a + (b + c) = (a + b) + c?”, I can show them; I only have to make sure they agree to the fundamental definitions of nat and + : nat -> nat -> nat, and the rest I can compel them to believe.
On the flip side, consider function extensionality, f = g <-> forall x, f x = g x, which is not provable because (to name but one scenario) we do not know that the domain of f (which equals the domain of g) is countable. Because they can’t prove it, theories “admit function extensionality as an axiom” from time to time.
In other words, if I invoke function extensionality in a proof, and my adversary has agreed to the basic type and function definitions, they remain entitled to reject my proof, because if they ask why I believe function extensionality the best I can do is say “because I declared it on line 7”.
We do not call reasoning involving axioms computational. Instead, the discourse has sort of become poisoned by the axiom; its verificational properties have become weaker. (Intuitively, I can declare on line 7 anything I want; the risk of proving something that is actually false increases a great deal with each axiom I declare.)
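To make the contrast concrete, here is a minimal Coq sketch of both sides (the lemma is the standard one; the axiom is the usual statement):

(* Computational: decomposes into nat's induction principle,
   the definition of +, and refl. *)
Lemma add_assoc : forall a b c : nat, a + (b + c) = (a + b) + c.
Proof.
  intros a b c. induction a as [| a' IH].
  - reflexivity.                   (* 0 + x reduces to x on both sides *)
  - simpl. rewrite IH. reflexivity.
Qed.

(* Axiomatic: declared rather than proved; everything downstream
   inherits trust in this one line. *)
Axiom fun_ext : forall (A B : Type) (f g : A -> B),
  (forall x, f x = g x) -> f = g.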
Apocryphally, a lecturer recalled a meeting, perhaps of the univalent foundations group at IAS, when homotopy type theory (HoTT) was brand new. (HoTT is based on something called univalence, which is about reasoning on type equalities in arbitrary “universes”; “kinds”, to the Haskell programmer.) In HoTT 1.0, univalence relied on an axiom (done carefully of course, to minimize the damage of the poison), and Per Martin-Löf is said to have remarked “it’s not really type theory if there’s an axiom”. HoTT 2.0, called cubical type theory, repairs this, which is why cubical TT is sometimes called computational TT.
AIXI-like and AIXI-unlike AGIs
If the space of AGIs can be carved into AIXI-like and AIXI-unlike with respect to goals, then clearly AIXI-like architectures have goals imposed on them axiomatically by the programmer. The complement, of course, is where the reward function is computational, i.e. decomposable.
See the NARS literature of Wang et al. for something at least adjacent to AIXI-unlike—reasoning about NARS emphasizes that reward functions can be computational to an extent but “bottom out” at atoms eventually. Still, NARS goals are computational to a far greater degree than those of AIXI-likes.
Conjecture: humans are AIXI-unlike AGIs
This should be trivial: humans can decompose their reward functions in ways richer than “because god said so”.
Relation to mutability???
If the space of AGIs can be carved into AIXI-like and human-like with respect to goals, does the computationality question help me reason about modifying my own reward function? Intuitively, AIXI’s axiomatic goal corresponds to immutability. However, I don’t think there’s an implication that AIXI-unlikes get self-modification for free. More work needed.
Position of this post in my overall reasoning
In general, my basic understanding, that the AGI space can be divided into what I’ve called AIXI-like and AIXI-unlike with respect to how reward functions are reasoned about, and that computationality (anaxiomaticity vs. axiomaticity?) is the crucial axis to view this through, is deeply embedded in my assumptions. Maybe writing it down will make eventually changing my mind easier: I’m uncertain just how conventional my belief/understanding is here.
I should be more careful not to imply that I think we have solid specimens of computational reward functions; it’s more that I think they’re a theoretically important region of the space of possible minds, and might factor into idealizations of agency.
capabilities-prone research.
I come to you with a dollar I want to spend on AI. You can allocate p pennies to go to capabilities and 100 - p pennies to go to alignment, but only if you know of a project that realizes that allocation. For example, we might think that GAN research sets p = 98 (providing 2 cents to alignment) while interpretability research sets p = 10 (providing 90 cents to alignment).
Is this remotely useful? This is a really rough model (you might think it’s more of a Venn diagram, and that this model doesn’t provide a way of reasoning about the double-counting problem).
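A toy rendering of the model in Haskell, reusing the guessed p values above (they are examples, not estimates):

-- (project, pennies of each dollar that go to capabilities)
projects :: [(String, Int)]
projects = [("GAN research", 98), ("interpretability", 10)]

-- Total (capabilities, alignment) pennies across one dollar per project.
split :: [(String, Int)] -> (Int, Int)
split = foldr (\(_, p) (c, a) -> (c + p, a + (100 - p))) (0, 0)

main :: IO ()
main = print (split projects)  -- (108, 92)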
A task: rate research areas, even whole agendas, with such a value p. Many people may disagree about my example assignments to GANs and interpretability, or think both of those are too broad.
What are some alternatives to the splitting-a-dollar intuition?
To say something is capabilities-prone is less to say a dollar has been cleanly split, and more to say that there are some dynamics that sort of tend toward or get pushed toward different directions. Perhaps I want some sort of fluid metaphor instead.
Question your argument as your readers will—thoughts on chapter 10 of Craft of Research
Three predictable disagreements are
There are causes in addition to the one you claim
What about these counterexamples?
I don’t define X as you do, to me X means...
There are roughly two kinds of queries readers will have about your argument
intrinsic soundness—“challenging the clarity of a claim, relevance of reasons, or quality of evidence”
extrinsic soundness—“different ways of framing the problem, evidence you’ve overlooked, or what others have written on the topic.”
The idea is to anticipate, acknowledge, and respond to both kinds of questions. This is the path to making an argument that readers will trust and accept.
Voicing too many hypothetical objections up front can paralyze you. Instead, what you should do before anything else is focus on what you want to say. Give that some structure, some meat, some life. Then, an important exercise is to imagine readers’ responses to it.
I think cleaving these into two highly separated steps is an interesting idea, doing this with intention may be a valuable exercise next time I’m writing something.
The authors provide some questions about your problem from a possible reader:
Why do you think there’s a problem at all?
Have you properly defined the problem?
Is your solution practical or conceptual?
Have you stated your claim too strongly?
Why is your practical/conceptual solution better than others?
Then, they provide some questions about your support from a possible reader.
“I want to see a different kind of evidence” i.e. hard numbers over anecdotes / real people over cold numbers
“It isn’t accurate”
“It isn’t precise enough”
“It isn’t current”
“It isn’t representative”
“It isn’t authoritative”
“You need more evidence”
It builds credibility to play defense: to recognize your own argument’s limitations. It builds even more credibility to play offense: to explore alternatives to your argument and bring them into your reasoning. If you can, you might develop those alternatives in your own imagination, but more likely you’d like to find alternatives in your sources.
What is the right number of objections to acknowledge? Acknowledging too many can distract readers from the core of your argument, while acknowledging too few signals laziness or even disrespect. You need to narrow your list of alternatives or objections by subjecting them to the following priorities:
It is wise to build up good faith by acknowledging questions you can’t answer. Concessions are often interpreted as positive signals by the reader.
It is important for your responses to acknowledgments to be subordinate to your main point, or else the reader will miss the forest for the trees.
Remember to make an intentional decision about how much credence to give to an objection or alternative. Weaker ones imply weaker credences, which in turn call for less effort in your acknowledgment and response.
there’s a gap in my inside view of the problem: part of me thinks that capabilities progress, such as out-of-distribution robustness or the four tenets described in Open Problems in Cooperative AI, is necessary for AI to be transformative, i.e. a prereq of TAI; another part of me thinks AI will be x-risky and unstable if it progresses along other aspects but not along the axis of those capabilities.
There’s a geometry here: transformative / not transformative, crossed with dangerous / not dangerous.
To have an inside view I must be able to adequately navigate between the quadrants with respect to outcomes, interventions, etc.
If something can learn fast enough, then its out-of-distribution performance won’t matter as much. (OOD performance will still matter, but the system will have less to learn where it’s good, and more to learn where it’s not.*)
*Although generalization ability seems like the reason learning matters. So I see why it seems necessary for ‘transformation’.