What are your favorite times you’ve used CAD/CNC or 3D printing? Or where would you be most likely to make use of it?
tailcalled
Software is kind of an exceptional case because computers are made to be incredibly easy to update. So the cost of installing software can be neglected relative to the cost of making the software.
I think a lot of the participants are bottlenecked on a lack of important problems that they dare attempt to solve.
I think it would be more accurate to say that wizards transmute. “Creation” actually requires resources, so it’s not creation from nothing.
Amateur spaces are the most cost-effective way of raising the general factor of wizardry. Professional-grade work is constrained by many narrower factors.
I feel like this is optimizing for the general factor of wizard powers, but if you actually want:
And if one wants a cure for aging, or weekend trips to the moon, or tiny genetically-engineered dragons… then the bottleneck is wizard power, not king power.
… Then to obtain a cure for aging you’d be better off finding patients (I guess pets have a good chance of being medically analogous to humans while not facing too many regulatory hurdles), performing root-cause analysis of what’s driving their aging, and then trying to cure that. That would gradually expand your knowledge of how to cure aging in more and more cases. To obtain weekend trips to the moon, idk, I guess you’d want some solar → rocket-fuel conversion plant, plus something to mass-produce heat shielding, plus reusable rockets? I don’t know what you need for tiny genetically engineered dragons, and I kind of suspect it’s a difficulty level above the others.
Research thrives on answering important questions. However, the trouble with interpretability for AI safety is that there are no important questions getting answered. Typically the real goal is to understand the neural networks well enough to know whether they are scheming, but that’s a threefold bad idea:
1. You cannot make incremental progress on it; either you know whether they are scheming, or you don’t.
2. Scheming is not the main AI danger/x-risk.
3. Interpretability is not a significant bottleneck in detecting scheming (we don’t even have good, accessible examples of contexts where AI is applied and scheming would be a huge risk).
To work around this, people substitute proxy goals, e.g. predictive accuracy, under the assumption that incremental gains in predictive accuracy are helpful. But we already have a perfectly adequate way of predicting the behavior of neural networks: running the neural networks.
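The prediction-by-execution point can be made concrete with a toy sketch (an assumed illustration, not drawn from any real interpretability benchmark): the most accurate “predictive model” of a network’s output is the network’s own forward pass.

```python
import numpy as np

rng = np.random.default_rng(0)

# A tiny stand-in "neural network": one hidden layer with fixed random weights.
W1, b1 = rng.normal(size=(4, 8)), np.zeros(8)
W2, b2 = rng.normal(size=(8, 2)), np.zeros(2)

def network(x):
    h = np.maximum(0, x @ W1 + b1)  # ReLU hidden layer
    return h @ W2 + b2

# "Predicting the network's behavior" by simply running the network:
x = rng.normal(size=(5, 4))
prediction = network(x)  # the predictor *is* the forward pass
actual = network(x)

# The predictor is trivially perfectly accurate.
print(np.allclose(prediction, actual))  # True
```

The interesting question for interpretability is presumably something other than raw predictive accuracy, since the forward pass already saturates that metric at zero marginal cost.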
I should say I’m not questioning the assertion that he plagiarized you or reacted to your challenge/criticism with insults, over-the-top defensiveness, and vitriol. I do dispute the claim that he has a general tendency towards such reactions regardless of context and case.
It would be a compromise between two factions: people who are hit by the incomplete narrative (whether they are bad actors or not) and centrists who want to maintain authority without getting involved in controversial stuff.
Certainly it would be better if the racists weren’t selective, and there’s a case to be made that centrist authorities should put more work into getting the entire account of what’s going on, but that’s best achieved by highlighting the need for the opposing side of the story, not by attacking the racists for moving towards a more complete picture.
This seems like a cope because others could go fill in the missing narrative, so selectively saying stuff shouldn’t be a huge issue in general...?
I can buy that often people are specifically opposed to racist bigots, i.e. people who are unreasonably attached to the idea of racial group differences. The essence of being unreasonable is to not be able to be reasoned with, and being reasoned with often involves presenting specific cruxes for discussion. It seems to me that Cremieux tends to do so, and so he is not a racist bigot.
I think part of what can get him persecuted for being a racist bigot is that a lot of rationalists follow him and more-or-less endorse (or at least defend) racist stuff without being willing to present cruxes, i.e. his fans are racist bigots. It’s hard for people to distinguish a writer from their fans, and I suspect this might be best addressed by writers being more oriented inward, towards their fans, rather than outward.
Especially when he has clearly proven on his website that he has highly nuanced takes on various other, less controversial, topics. It reminds me of people trying to shame Scott Alexander for daring to step a little outside the Overton window himself.
It may be rational of you to extrapolate the quality of one facet of someone’s behavior from other facets, or from one social controversy to another, but it’s certainly not adversarially robust. You can’t reasonably expect people not to focus on his behavior in one narrow area.
It sounds like low decoupling.
High decoupling is an attempt to enforce anti-irrationalist norms through creating dissociative disorders. It’s obviously self-defeating, and combining it with a critique of “tone policing” and taboos causing asymmetric discourse/preventing people from speaking out is brazen hypocrisy.
Common explicit definitions of “racism” tend to include people who believe in racial differences (especially in socially valued traits, especially if they believe the racial differences are innate), and such beliefs are typically treated as some of the most central evidence of racism conceivable. Objecting to the designation purely on the basis that it is highly derogatory seems intellectually dishonest to me; it would be more honest to object to the derogatory element, for instance by asserting that non-racists are inattentive/delusional/lying.
My vague hinting at rumors is just meant to cast him in a somewhat bad light, because my defense would cast him in a good light, and I have heard rumors, so I don’t want to one-sidedly endorse him. At the same time, calling it “rumors” shows that I don’t have it first-hand and that there’s a need for a more accurate account than I can give.
I have repeatedly challenged and criticized Cremieux and he has never reacted with insults, over-the-top defensiveness or vitriol towards me.
(I have certainly heard concerning rumors about him, and I hope those responsible for the community do due diligence in investigating them. But this post feels kind of libelous, like an attempt to assassinate someone’s character to suppress discourse about race. People who think LessOnline shouldn’t invite racists could address this concern by explaining in more detail what racism is/why it’s so terrible and why racist fallacies should be so uncomfortable that one cannot go there, instead of just something that receives a quick rebuttal.)
Bypasses the need for cutlery and plates.
(Yes, you might also eat such foods in cases where you do have cutlery and plates, but that’s downstream of their existence, not the vital reason for their existence.)
Ok, so then since one can’t make artificial general agents, it’s not so confusing that an AI-assisted human can’t solve the task. I guess it’s true though that my description needs to be amended to rule out things constrained by possibility, budget, or alignment.
This statement is pretty ambiguous. “Artificial employee” makes me think of some program that is meant to perform tasks in a semi-independent manner. It would be trivial to generate a million different prompts and then have some interface that routes stuff to these prompts in some way. You could also register it as a corporation. It would presumably be slightly less useful than your generic AI chatbot, because the cost and latency would be slightly higher than if you didn’t set up the chatbot in this way. But only slightly.
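A minimal sketch of the “million prompts plus a router” idea (hypothetical names only; no real chatbot API is assumed, and the templates are generated lazily rather than stored):

```python
# Hypothetical sketch of an "artificial employee" built from a million
# prompt templates and a trivial routing rule.

N_PROMPTS = 1_000_000

def prompt(i: int) -> str:
    # Generate the i-th prompt template on demand instead of storing
    # a million strings up front.
    return f"You are specialist #{i}. Answer requests routed to you."

def route(request: str) -> int:
    # Trivial routing rule: hash the request text into a template index.
    return hash(request) % N_PROMPTS

def handle(request: str) -> str:
    # A real system would send this assembled context to an underlying
    # chatbot; here we only show the routing and prompt assembly.
    return f"{prompt(route(request))}\nUser: {request}"

print(handle("Summarize this report."))
```

The extra cost over a plain chatbot is just the routing hop and the slightly longer context, which is the “only slightly less useful” point above.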
Though one could argue that since AI chatbots lack agency, they don’t count as artificial employees. But then is there anything that counts? Like at some point it just seems like a confused goal to me.
Counterpoint: https://www.lesswrong.com/s/gEvTvhr8hNRrdHC62