Is this your first time running into Zack’s stuff? You sound like you’re talking to someone who shows up out of nowhere with a no-context crackpot manuscript and zero engagement with the community. Zack’s post is about his actual engagement with the community over a decade, we’ve seen a bunch of the previous engagement (in pretty much the register we see here, so this doesn’t look like an ongoing psychotic break), he’s responsive to comments, and his thesis generally makes sense. This isn’t drive-by crackpottery, and it’s on LessWrong because it’s about LessWrong.
rsaarelm
In a Facebook post I argued that it’s fair to view these things as alive.
Just a note: unlike in the recent past, Facebook post links now seem to be completely hidden unless you are logged into Facebook when opening them, so they are basically broken as any sort of publicly viewable resource.
How much have you interacted with strangers about anything intellectual in your life so far? You come off as not yet realizing that communities have different communication styles and expectations, and that you need to understand and learn the local customs before you’ll get a good reception.
For example, if you are getting downvoted a lot and don’t know why, you might make a comment on an open thread saying something like “Hey guys, looks like my stuff is getting downvoted a lot and I’m not sure why, can you tell me what I’m doing wrong?” You should probably not start by proposing changes to the fundamental workings of the forum.
John McCarthy’s The Doctor’s Dilemma
This sounds drastic enough that it makes me wonder: since the claimed reason was that Said’s commenting style was driving high-quality contributors away from the site, do you have a plan to follow up and see whether there is any measurable increase in comment quality, site mood, or good contributors becoming more active going forward?
Also, is this thing an experiment with a set duration, or a permanent measure? If it’s permanent, it has a very rubber-room vibe to it, where you don’t outright ban someone but continually humiliate them if they keep coming by, and hope they’ll eventually get the hint.
Can second the not-driving-a-car commute thing. A long commute by bus I used to have amounted to 5 km of walking going to and from the bus stops, with optional podcast listening, and an hour of focused book-reading time every day. It made a big extra dent in my schedule, but walking and book-reading are both things I’d want to be doing regularly in any case.
With the urgency entailed by extinction risks etc., “just chilling” during dead time can (for many of us) feel undoable.
I’ve been reading attitudes like this in the existential risk prevention subculture for pretty much as long as I’ve been aware of it, and it’s basically made me write the whole thing off as something for people who are wired very differently than I am, or just generally a bad scene.
I think my reaction is some mix of a starting intuition that this always-on 24/7 thing is unsustainable for pretty much everyone but gets postured about a lot, and a sense that the people who fully buy into it, go on to burn out, and get chronic depression for their trouble are selected for having neither personal understanding nor live cultural folkways informing them what long-term sustainable ways of life look like. It then sounds like a bad idea to get involved with something where your social environment will consist of people like this.
I feel like GEB has been diminished a bit by its own success. People reading it nowadays might go “what’s the big deal?” A big theme is how the mind can be a machine and still do stupid stuff, which had to be spelled out in the 70s but has pretty much permeated the relevant subcultures these days. And of course Hofstadter didn’t know a clear recipe for an actual AGI, so the speculative parts on that were left at the level of intriguing handwaving.
Do you know the “monads are like burritos” problem? Do you have a plan for how this sequence isn’t going to end up being “mathematics is like burritos”?
I’m still buying the CT hype, so I’m very interested to see more of this. However, I’ve been buying the hype for some 10+ years now, trying to learn CT on and off, and still can’t point to a single instance of being able to use it either to approach a problem or to understand something better. So I’m pretty skeptical about this being teachable to a mathematically naive audience in a way that lets them internalize anything about it that’s both correct and usable in some practice other than advanced math study.
“It can’t happen and it would also be bad if it happened” seems to be a somewhat tempting way to argue these topics. When trying to convince an audience that thinks “it probably can happen and we want to make it happen in a way that gets it right”, it’s much worse than sticking strictly to either “it can’t happen” or “we don’t know how to get it right for us if it happens”. When you switch to talking about how it would be bad, you come off as scared and lying about the part where you assert it is impossible. It has the same feel as an 18th century theologian presenting a somewhat shaky proof for the existence of God and then reminding the audience that life in a godless world would be unbearably horrible, in the hope that this might make them less likely to start poking holes in the proof.
Are people here mostly materialists?
Okay, since you seem interested in knowing why people are materialists: I think it’s the history of science up until now, which has basically been a constant build-up of materialism.
We started out at prehistoric animism, where everything that happened, except the rock you just threw at another rock, was driven by an intangible spirit. The rock wasn’t, since that was just you throwing it. And then people started figuring out successive compelling narratives about how more complex stuff is just rocks being thrown about. Planets being driven by angels? Nope, just gravitation and inertia. Okay, so comets don’t have comet spirits, but surely living things have spirits. Turns out no: molecular biology is a bit tricky, but it seems to still paint a (very small) rocks-thrown-about picture that convincingly gets you a living tree or a cat. Human minds looked unique until people started building computers. The same story is repeating again: people point to human activities as proofs of the indomitable human spirit, then someone builds an AI to do them. Douglas Hofstadter was still predicting in 1979 that mastering chess would have to involve encompassing the whole of human cognition, and had to eat crow in the introduction of the 20th anniversary edition of his book.
So to sum up: simple physics went from spiritual (Aristotle’s “rocks want to go down, smoke wants to go up”) to materialist, outer space went from spiritual to materialist, biological life went from spiritual to materialist, and mental acts like winning a chess game went from spiritual to materialist.
We’re now down to the hard problem of consciousness, and we’re also still missing a really comprehensive scientific picture of how you get from neurons to high-level human thought. So which way do you think this is going to go? A discovery that the spiritual world exists after all and was hiding in the microtubules of the human brain all along, or people looking at the finished blueprint for how the brain works, one that explains things up to conscious thought, and going “oh, so that’s how it works”, with it all being just rocks thrown about once again? So far we’ve got a perfect record of everybody clamoring for the first option and things then turning out to be the second one.
Some things are a question of common sense or common forum etiquette, not of following a specific style guide. You’re expected to have enough other-modeling ability to see what it looks like from the outside when you show up with a less-than-a-week-old account, get a negative reaction to your stuff, and then move on to proposing changes to site rules.
There’s a bit of a subtext here of trying to figure out whether you’re coming from a different tradition or are an internet crazy person. This forum doesn’t have much of a culture that can tell Christian intellectual tradition apart from schizophrenia, so terse comments that assume shared idiom won’t go over very well.
FWIW, I’m finding the book quite interesting and non-crazy so far. Thanks for the link.
For constructive examples of the culture gap: I’m not sure I’ve seen the book’s use of ‘spiritual’ to describe various real-world processes (sex is not spiritual but fertilization is spiritual; using antidepressants is not spiritual but recovering from depression via long-term natural cognition is spiritual) before, and it looks like role-playing-game magic-system worldbuilding to me. The only scholarly use of the word I’d expect would be calling worship and prayer spiritual activities. I guess the book’s usage comes from something like Aristotle’s teleology?
That’s the way where you try to make another adult human recognize the thing based on their own experiences, which is how we’ve gone about this since the Axial Age. Since the 1970s, a second approach has been on the table: how would you program an artificial intelligence to do this? If we could manage that, it would in theory be a much more robust statement of the case, but it would also probably be much, much harder for humans to actually follow by going through the source code. I’m guessing this is what Chapman is thinking of when he specifies “can be printed in a book of less than 10 kg and followed consciously” for a system intended for human consumption.
Of course there’s also a landscape between everyday-language descriptions, simple but potentially confusion-engendering, and the full formal specification of a human-equivalent AGI. We do know that either humans work by magic or a formal specification of a human-equivalent AGI exists, even if we can’t yet write down the probably-more-than-10-kg book containing it. So either Chapman’s stuff hits somewhere in the landscape between present-day reasoning writing, which piggybacks on existing human cognitive capabilities, and the Illustrated Complete AGI Specification, or it does not; but it seems like the landscape should be there anyway, and getting some maps of it could be very useful.
Maybe set up something on your phone that pings you a few times each day at random times to track your mood across the day. Whenever you get a ping, write down the time, and then, for example, what you were doing, your subjective mood, your subjective energy level, and how spaced out or focused you’re feeling.
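A minimal sketch of the idea, assuming you’d rather script it yourself than use an existing experience-sampling app; the function names, the CSV log, and the 9:00–22:00 window are all illustrative choices, not any particular tool’s API:

```python
# Hypothetical sketch of the random-ping idea: pick a few random times
# for the day, then log one entry per ping. All names are illustrative.
import csv
import random
from datetime import datetime, time

def random_ping_times(n=4, start_hour=9, end_hour=22, seed=None):
    """Pick n distinct random ping times within the waking-hours window."""
    rng = random.Random(seed)
    day_minutes = (end_hour - start_hour) * 60
    minutes = sorted(rng.sample(range(day_minutes), n))
    return [time(start_hour + m // 60, m % 60) for m in minutes]

def log_entry(path, activity, mood, energy, focus):
    """Append one ping's answers (plus a timestamp) to a CSV log."""
    with open(path, "a", newline="") as f:
        csv.writer(f).writerow(
            [datetime.now().isoformat(timespec="minutes"),
             activity, mood, energy, focus])

# e.g. four pings somewhere between 09:00 and 22:00
pings = random_ping_times(seed=42)
```

You’d still need something to actually deliver the notifications at those times (a cron job, a reminders app, etc.); the point of the script is just to make the sampling random rather than habitual.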
Please do not vote without an explanatory comment (votes are convenient for moderators, but are poor intellectual etiquette, sans information that would permit the “updating” of beliefs).
This post has a terrible writing style. Based on your posting history you’ve been here for a year, writing similarly badly styled posts; people have commented on the style, and you have neither engaged with the comments nor tried to improve your writing. Why shouldn’t people just downvote and move on at this point?
Why not fill the detergent compartment immediately after emptying the dishwasher? Then you have closed detergent slot → dirty dishes, open detergent slot → clean dishes.
There is no indication for any reason that the workings of consciousness should obey any intuitions we may have about it.
The mind is an evolved system out to do stuff efficiently, not just a completely inscrutable object of philosophical analysis. It’s likelier that parts like sensible cognition, qualia, and the subjective feeling of consciousness are coupled and need each other to work than that they are somehow intrinsically disconnected, with cognition able to go on as usual, without subjective consciousness, using anything close to the same architecture. If that were the case, we’d have the additional questions of how consciousness evolved to be part of the system to begin with and why it hasn’t evolved out of living biological humans.
You seem to frame this as either there being advanced secret techniques, or it just being a matter of common sense and wisdom and as good as useless. Maybe there’s some initial value in just trying to name things more precisely though, and painting a target of “we don’t understand this region that has a name now nearly as well as we’d like” on them. Chapman is a former AI programmer from the 1980s, and my reading of him is that he’s basically been trying to map the poorly understood half of human rationality whose difficulty blindsided the 20th century AI programmers.
And very smart and educated people were blindsided when they got around to trying to build the first AIs. This wasn’t a question of charlatans or people lacking common sense. People really didn’t seem to break rationality apart into the rule-following (“solve this quadratic equation”) and pattern-recognition (“is that a dog?”) parts, because up until the 1940s all rule-based organizations were run solely by humans, who cheat and constantly apply their pattern-recognition powers to nudge just about everything going on.
So are there better people than Chapman talking about this stuff, or is there an argument for why this is an uninteresting question for human organizations despite it being recognized as a central problem in AI research, with things like Moravec’s paradox?