This site is a cognitohazard. Use at your own risk.
PhilosophicalSoul
Got it, will do. Thanks.
And yes, I am building up to it haha. The Solakios technique is my own contribution to the discourse and will come about in [Part V]. I’m trying to explain how I got there before just giving the answer away. I think if people see my thought process, and the research behind it, they’ll be more convinced by the conclusion.
I think when dealing with something like ‘photographic memory’, a highly sought-after skill that has never actually been taught (the ‘self-help gurus’ have poisoned the idea of it), you have to be systematic. People are more than justified in being critical of these posts until I’ve justified how I got there.
“—but if one hundred thousand [normies] can turn up, to show their support for the [rationalist] community, why can’t you?”
I said wearily, “Because every time I hear the word community, I know I’m being manipulated. If there is such a thing as the [rationalist] community, I’m certainly not a part of it. As it happens, I don’t want to spend my life watching [rationalist and effective altruist] television channels, using [rationalist and effective altruist] news systems … or going to [rationalist and effective altruist] street parades. It’s all so … proprietary. You’d think there was a multinational corporation who had the franchise rights on [truth and goodness]. And if you don’t market the product their way, you’re some kind of second-class, inferior, bootleg, unauthorized [nerd].”
—”Cocoon” by Greg Egan (paraphrased)[1]
I don’t think this applies to rationalism. It’s not an ideology or an ethical theory. Rationalism (at least to me, as an outside party to all this drama) is external to people’s beliefs, and this community is just refining how to describe and apply objective principles of reality. Edit: I agree with the general idea that psychospheres, and the words related to them, can act as meaningful keys, even in rationalist circles. Respect to Zack in this case.
As an aside, I also think you’ve suffered what I call the aesthetic death. Too much to explain in a comment section; however, I’ll briefly say it’s getting yourself wound up in a narrative psychosphere in which you serve archetypes like ‘hero’ and ‘martyr’. I think this serves a purpose when it comes to achieving some greater goal, and helps with morale. I do not think this post serves some greater goal (if it does, then, like many others in this comment section, I am confused). [This bit has been retracted after reading the comment below.]
Thank you so much for this explanation. Through this lens, this post makes a lot more sense; a meaningful aesthetic death then.
This is probably one of the most important articles in the modern era. Unbelievable how little engagement it’s gotten.
I think (and you wouldn’t be the first to do it, so this isn’t personal) you have a very primitive understanding of theism. Dawkins’s arguments against God were blissful, child-like ignorance at best, and wilful egoism at worst. They could each be easily rebutted and set aside on rational grounds. I struggle to follow this essay when its launching pad is built upon sand.
The suffering and evil present in the world have no bearing on God’s existence. I’ve always failed to buy into that idea. Sure, it sucks. But it has no bearing on the metaphysical reality of a God. If God does not save children—yikes, I guess? What difference does it make? A creator as powerful as has been hypothesised can do whatever he wants; any arguments from rationalism be damned.
I also find that this essay drips with a sort of condescension. Like, it’s almost as if you’re telling a coming-of-age story in which people emerge as perfect rationalists once they ‘overcome’ the ‘big bad belief’ that is the gauntlet of religion. I find that notion to be utterly ridiculous.
I’m not trying to get into a religious debate here; your tone suggests your mind is made up about that. I am curious, in good faith, about the reasons for your belief, though. Without that, I can’t read past the Yin and Yang bit in detail.
With respect to the rest of your post, I’ll reference ‘Open Source AI Spirits, Rituals, and Practices’ (noduslabs.com), which already covers a lot of what you talk about. ‘Bodymind Operating Systems’ on HackerNoon, led by a guy named Dmitry Paranyushkin, explores a lot of your talking points quite extensively.
I’d say that’s because they aren’t specifically asked. High performers tend to naturally have photographic memories, and so it’s unnatural for them to conceive of anything else.
The high performers I’ve spoken to didn’t realise they had photographic memories until I pointed it out. One trick to test it is to talk to them and ask them about something from long ago. Sometimes their eyes will move right to left, because they’re reading a picture in their mind.
True! Hence why I’m creating this guide; and I don’t critique people for doubting its outcome.
Wow!
Thanks for picking that up, I was in a rush when footnoting. Heinlein’s Gulf is what I intended to place there.
Thanks for those links, I hadn’t even heard of Renshaw. I’ll be editing it into the above.
Good points. I’ll try to cover some of this in my final post. I unfortunately haven’t tested this outside of my field, so it’ll be difficult. But I assure you, I will try.
Will do, thanks for the advice.
I would be very interested to see a broader version of this post that incorporates what I think to be the solution to this sort of hivemind thinking (‘Modern Heresies’ by @rogersbacon) and the way in which this is engineered generally (covered by ‘AI Safety is dropping the ball on clown attacks’ by @trevor). Let me know if that’s not your interest; I’d be happy to write it.
Scavenger’s Reign comes to mind for this post.
"So they rationalize, they explain. They can tell you why they had to crack a safe or be quick on the trigger finger. Most of them attempt by a form of reasoning, fallacious or logical, to justify their antisocial acts even to themselves, consequently stoutly maintaining that they should never have been imprisoned at all."
In some cases, maybe. What about Ted Kaczynski? Still fallacious? What about Edward Snowden?
I think this post points out a more underlying issue, maybe several. ‘Criminals’ believe what they believe because of their genetics, their worldview, their upbringing, and so forth. They cannot conceive of our realities. And so yes, it makes sense that, to them, they are the heroes. Perhaps they even have good reasons for it.
How can we, with our own parameters, judge criminals if we haven’t experienced the life that made them believe so? How does a criminal explain himself if his world is judged by the physics of another world he’s never lived in? Is a criminal simply, as Camus describes in “The Outsider”, he who does not conform with the status quo?
I’m sceptical that the appreciation needs to be sincere. In a world full of fakes, social media, etc., I think people don’t really examine whether something is fake. They’re happy to ‘win’ by accepting a statement or compliment as real, even if it’s just politeness or part of corporate speak.
Even more concerning is that if you don’t meet this insanely high threshold of ‘compliment everyone, or stay quiet’, you’re interpreted as cold, harsh, or critical. In reality, you’re just being truthful and realistic with how you hand out appreciation.
Unfortunately, there are two significant barriers to using tort liability to internalize AI risk. First, under existing doctrine, plaintiffs harmed by AI systems would have to prove that the companies that trained or deployed the system failed to exercise reasonable care. This is likely to be extremely difficult to prove since it would require the plaintiff to identify some reasonable course of action that would have prevented the injury. Importantly, under current law, simply not building or deploying the AI systems does not qualify as such a reasonable precaution.
Not only this, but it will require extremely expensive discovery procedures which the average citizen cannot afford. And that’s assuming you can overcome the technical barrier of questions like: what specifically in our files are you looking for? What about our privacy?
Second, under plausible assumptions, most of the expected harm caused by AI systems is likely to come in scenarios where enforcing a damages award is not practically feasible. Obviously, no lawsuit can be brought after human extinction or enslavement by misaligned AI. But even in much less extreme catastrophes where humans remain alive and in control with a functioning legal system, the harm may simply be so large in financial terms that it would bankrupt the companies responsible and no plausible insurance policy could cover the damages.

I think joint and several liability regimes will resolve this. In the sense that it’s not 100% the company’s fault; liability will be shared by the programmers, the operator, and the company, and since each defendant can be pursued for the full award, plaintiffs can recover from whichever of them still has assets.
Courts could, if they are persuaded of the dangers associated with advanced AI systems, treat training and deploying AI systems with unpredictable and uncontrollable properties as an abnormally dangerous activity that falls under this doctrine.
Unfortunately, in practice, what will really happen is that ‘expert AI professionals’ will be hired to advise old legal professionals on what’s considered ‘foreseeable’. This is susceptible to the same corruption, favouritism, and ignorance we see with ordinary crimes. I think that, ultimately, we’ll need lawyers who specialise in both AI and law to really solve this.
The second problem of practically non-compensable harms is a bit more difficult to overcome. But tort law does have a tool that can be repurposed to handle it: punitive damages. Punitive damages impose liability on top of the compensatory damages the plaintiffs in successful lawsuits get to compensate them for the harm the defendant caused them.
Yes. Here I ask: what about legal systems that use delictual law instead of tort law? The names, requirements, and so on are different. In other words, you’ll get completely different legal treatment for international AIs. This creates a whole new can of worms that defeats legal certainty and the rule of law.
I found this post meaningful, thank you for posting.
I don’t think it’s productive to comment on whether the game is rational, or whether it’s a good mechanism for AI safety until I myself have tried it with an equally intelligent counterpart.
Thank you.
Edit: I suspect the reason the AI Box experiment tends to end with so many of the AI players winning is precisely the ego of the Gatekeeper, who always thinks that there’s no way he could be convinced.
That last bit is particularly important methinks.
If a game is begun with the notion that it’ll be posted online, one of two things (or both) will happen. Either (a) the AI is constrained in the techniques they can employ, unwilling to embarrass themselves or the Gatekeeper in front of a public audience (especially when it comes down to personal details), or (b) the Gatekeeper now has a HUGE incentive not to let the AI out, to avoid being known as the sucker who let the AI out...
Even if you could solve this by changing details and anonymising, it seems to me that the techniques are so personal and specific that altering them in any way would make the entire dialogue even less coherent.
The only other solution is to have a third party monitor the game and post it without consent (which is obviously unethical, but probably the only real way you could get a truly authentic transcript).
At the moment, I just don’t see the incentive for doing something like this. I was hoping to make it more efficient through community feedback; see if my technique gives only me a photographic memory, etc. Mnemonics is just not something that interests LW at the moment, I guess.

Additionally, my previous two (2) posts were stolen by a few AI YouTubers. I’d prefer the technique I revealed in this third post not to be stolen too.
I’m pursuing sample data elsewhere in the meantime to test efficacy. My work seems to have been spread across the internet regardless, oh well. As a result, I’ve restored the previous version.
This was so meta and new to me that I almost thought this was a legitimately real competition. I had to do some research before I realised ‘qualia splintering’ is a made-up term.
Could we say then that the Second Foundation in Isaac Asimov’s Foundation series is a good example of Level 4? And an example of Level 5 might be Paul Atreides and the ability of precognition?