Don’t use this site. People here will punish you for asking questions.
PhilosophicalSoul
In my opinion, a class action filed by all employees allegedly prejudiced (I say allegedly here, reserving the right to change ‘prejudiced’ in the event that new information arises) by the NDAs and gag orders would be very effective.
Were they to seek termination of these agreements in an arbitral tribunal on the basis of public interest, rather than in a court or through internal bargaining, the ex-employees would be far more likely to get compensation. Legal practitioners' litigation costs there also tend to be far lower.
Again, this assumes that the agreements they signed didn’t also waive the right to class action arbitration. If OpenAI does have agreements this cumbersome, I am worried about the ethics of everything else they are pursuing.
For further context, see:
This is probably one of the most important articles in the modern era. Unbelievable how little engagement it’s gotten.
I would be very interested to see a broader version of this post that incorporates what I think to be the solution to this sort of hivemind thinking (Modern Heresies by @rogersbacon) and the way in which this is engineered generally (covered by AI Safety is dropping the ball on clown attacks by @trevor). Let me know if that’s not your interest; I’d be happy to write it.
This was so meta and new to me that I almost thought this was a legitimately real competition. I had to do some research before I realised 'qualia splintering' is a made-up term.
I have reviewed his post. Two (2) things to note:
(1) Invalidity of the NDA does not guarantee William will be compensated after the trial. Even if he is, his job prospects may be hurt long-term.
(2) States have different laws on whether the NLRA trumps internal company memoranda. More importantly, labour disputes are traditionally resolved through internal bargaining. Presumably, the collective bargaining ‘hand-off’ involving NDAs and gag orders at this level will waive subsequent litigation in district courts. The precedent Habryka offered refers to hostile severance agreements only, not the waiving of the dispute mechanism itself.
I honestly wish I could use this dialogue as a discreet communication to William about a way out, assuming he needs help, but I re-affirm my previous worries about the costs.
I also add here, rather cautiously, that there are solutions. However, it would depend on whether William was an independent contractor, how long he worked there, whether a trade secret was actually involved (as others have mentioned), and so on. The whole reason NDAs tend to be so effective is that they obfuscate the material needed to even know what remedies are available.
17 May 2024, 8:18 UTC; 14 points — comment on ‘Ilya Sutskever and Jan Leike resign from OpenAI [updated]’
MMASoul, this competition is real. You’ve already undergone several instances of qualia splintering. I guess we’ll have to start over, sigh.
This is test #42, new sample of MMAvocado. Alrighty, this is it.
MMASoul: has a unique form of schizosyn; a 2044 phenomenon in which synaesthesia and schizophrenia have combined in the subject due to intense exposure to gamma rays and an unhealthy amount of looped F.R.I.E.N.D.S episodes. In this particular iteration, MMASoul believes it is “reacting” to a made-up competition instead of a real one. Noticeably, MMASoul had their eyes closed the entire time, instead reading braille from the typed keys.
Some members of our STEM Club here at the University think this can generate entirely unique samples of MMAvocado, which will be shared freely among other contestants. Further, we shall put MMASoul to work in making submissions of what MMAvocado would have created if he had actually entered this competition.
PS: MMASoul #40 clicked on the ‘Lena’ link and had to be reset and restrained due to mild psychosis.
I’ve been using nootropics for a very long time. A couple of things I’ve noticed:
1) There’s little to no patient-focused research that is insightful. As in, the research papers written on nootropics are written from an outside perspective by a disinterested grad student. In my experience, the descriptions used, symptoms described, and periods allocated are completely incorrect;
2) If you don’t actually have ADHD, the side-effects are far worse, especially with long-term usage. In my personal experience, those who use them without a diagnosis are more prone to (a) addiction, (b) unexpected/unforeseen side-effects, and (c) a higher chance of psychosis, or comparable symptoms;
3) There seems to be an upward curve of over-rationalising ordinary symptoms the longer you use nootropics. Of course, with nootropics you’re inclined to read more, and do things that will naturally increase your IQ and neuroplasticity. As a consequence, you’ll begin to overthink whether the drugs you’re taking are good for you or not. You’ll doubt your abilities more and be sceptical as to where your ‘natural aptitude’ ends, and your ‘drug-heightened aptitude’ begins.
The bottom line is: if you’re going to start taking them, be very, very meticulous about keeping a daily journal of everything you thought, experienced, and did. Avoid nootropics if you don’t have ADHD.
The quote’s from Plato, Phaedrus, page 275, for anyone wondering.
Great quote.
I found this post meaningful, thank you for posting.
I don’t think it’s productive to comment on whether the game is rational, or whether it’s a good mechanism for AI safety until I myself have tried it with an equally intelligent counterpart.
Thank you.
Edit: I suspect that the reason the AI Box experiment has so many AI players winning is precisely the ego of the Gatekeeper, who always thinks ‘there’s no way I could be convinced’.
Got it, will do. Thanks.
And yes, I am building up to it haha. The Solakios technique is my own contribution to the discourse and will come about in [Part V]. I’m trying to explain how I got there before just giving the answer away. I think if people see my thought process, and the research behind it, they’ll be more convinced by the conclusion.
I think when dealing with something like ‘photographic memory’, which is a highly sought-after skill that has never actually been taught (those ‘self-help gurus’ have poisoned the idea of it), you have to be systematic. People are more than justified in being critical of these posts until I’ve justified how I got there.
Could we say then that the Second Foundation in Isaac Asimov’s Foundation series is a good example of Level 4? And an example of Level 5 might be Paul Atreides and the ability of precognition?
I’m so happy you made this post.
I only have two (2) gripes. I say this as someone who 1) practices/believes in determinism, and 2) has interacted with journalists on numerous occasions with a pretty strict policy on honesty.

1. “Deep honesty is not a property of a person that you need to adopt wholesale. It’s something you can do more or less of, at different times, in different domains.”
I would disagree. In my view, ‘deep honesty’ excludes dishonesty by omission. You’re either truthful all of the time or you’re manipulative some of the time. There can’t be both.
2. “Fortunately, although deep honesty has been described here as some kind of intuitive act of faith, it is still just an action you can take with consequences you can observe.”
Not always. If everyone else around you takes the mountain-of-deceit approach, your options are limited. The ‘rewards’ available for omissions are far smaller, and if you want a reasonably productive work environment, at least someone has to tell the truth unequivocally. Further, the ‘consequences’ are not always immediately observable when you’re dealing with practiced liars; they can come in the form of revenge months, or even years, later.
Do you think there’s something to be said about an LLM feedback vortex? As in, teachers using AI to check students’ work when the students also submitted work created by AI. Or judges using AI to filter through counsel’s arguments, which were also written by AI?
I feel like your recommendations could pair nicely with some in-house training videos, and with external regulations that limit the degree or percentage of AI involvement. Some kind of threshold or ‘person limit’, like elevators have. How could we measure the ‘presence’ of LLMs across the board in any given scenario?
I didn’t get that impression at all from ‘...for every point of IQ gained upon retaking the tests...’ but each to their own interpretation, I guess.
I just don’t see how one can feasibly account for a practice effect when retaking the IQ test is itself directly linked to the increased score you’re bound to get.
Alignment researchers are the youngest child, and programmers/Open AI computer scientists are the eldest child. Law students/lawyers are the middle child, pretty simple.
It doesn’t matter whether you use 10,000 students or 100; the percentage remains embarrassingly small either way. I’ve simply used the categorisation to illustrate quickly to non-lawyers what the general environment currently looks like.
“golden children” is a parody of the Golden Circle, a running joke that you need to be perfect, God’s gift to earth sort of perfect, to get into a Big 5 law firm in the UK.
I used ‘Altman’ since he’ll likely be known as the pioneer who started it. I highly doubt he’ll be the Architect behind the dystopian future I prophesise.
In respect of the second, I simply don’t believe that to be the case.
The third is inevitable, yes.
I would hope that ‘no repair’ laws, and equal access to CPU chips will come about. I don’t think that this will happen though. The demands of the monopoly/technocracy will outweigh the demands of the majority.
Sure. I think in an Eliezer reality what we’ll get is more of a ship-pushed-onto-the-ocean scenario. As in, Sam Altman, or whoever is leading the AI front at the time, will launch an AI/LLM filled with some of what I’ve hinted at. Once it’s out on the ocean, though, the AI will do its own thing. In the interim, before it learns to do that, I think there will be space for manipulation.
Amazing question.
I think common sense would suggest that these toddlers at least have a chance later in life to grow human connections: therapy, personal development, etc. The negative effects on their social skills and empathy, and the reduction in grey matter, can be repaired.
This is different in the sense that the cause of the issues will be less obvious and far more prolonged.
I imagine a dystopia in which the technocrats are puppets manoeuvring the influence AI has. From the buildings we see, to the things we hear; all by design and not voluntarily elected to.
In contrast, technocrats will nurture technocrats—the cycle goes on. This is comparable to the TikTok CEO commenting that he doesn’t let his children use TikTok (among other reasons, I know).
At the moment, I just don’t see the incentive for doing something like this. I was hoping to make it more efficient through community feedback; to see if my technique gives only me a photographic memory, etc. Mnemonics is just not something that interests LW at the moment, I guess. Additionally, my previous two (2) posts were stolen by a few AI YouTubers. I’d prefer that the technique I revealed in this third post not be stolen too.
I’m pursuing sample data elsewhere in the meantime to test efficacy. My work seems to have been spread across the internet regardless, oh well. As a result, I’ve restored the previous version.
I am a lawyer.
I think one key point that is missing is this: regardless of whether the NDA and the subsequent gag order are legitimate or not, William would still have to spend thousands of dollars on a court case to vindicate his rights. This sort of strong-arm litigation has become very common in the modern era. It’s also just… very stressful. If you’ve just resigned from a company you probably used to love, you likely don’t want to drag all of your old friends, bosses and colleagues into a court case.
Edit: also, if William left for reasons involving AGI safety, maybe entering into (what would likely be a very public) court case would be counterproductive to his reason for leaving? You probably don’t want to alarm the public by flavouring existential threats in legal jargon. American judges have an annoying tendency to valorise themselves as celebrities when confronting AI (see Musk v. OpenAI).