I think you’re right that AI safety is case A, but I’d suggest you add reversibility to your reasons. If we can just turn it off (ignoring influence campaigns to get us not to), there’s no risk; the problem is if it is one-way. Outside of AI takeover scenarios, there’s substantial evidence that some would consider ending sentience a kind of murder.
erikerikson
I see it as agreement with you, adding embellishment.
A realistic imagining of a counterintuitive case, extreme in the hidden direction, might be useful… For example, a psychopath who lacks empathy but understands that hurting others reduces their productivity, affect, and other factors, thereby reducing their value and production for the psychopath and the society in which the psychopath is embedded. Given this understanding, the psychopath pursues the betterment of their fellow person even while finding ways to benefit themselves.
The question eventually becomes whether the label is about the underlying attribute propensity or about reacting to that propensity in a way the consensus judges to fall too far from maximizing.
In response to my childhood and the need to escape bad defaults that just seemed like reality, I’ve grown up asking these questions deeply and regularly, particularly earlier on, with regular revisits as I learn, explore, and reprioritize. They are meta-questions about the question, and asking them while doing a thing can stop you from doing the thing. Doing this can open massive cans of worms that trigger regress to first principles and undecidable value judgements. It can become recursive and lead to paralysis if used too much, without self-compassion, or if paired with natural anxiety.

Asking the question repeatedly about the same consideration and carefully redoing the work to get a good answer gets boring and wasteful over time, so as your confidence in a path grows (or in an assembly of paths, priorities, etc.), the value of asking reduces and the scope about which you ask usually shifts. While you’re not asking, values can shift, and priorities and values are important given limited time and attention. A good trigger is something not working, or not being as good as you reasonably believe it should be for the amount of energy spent on it. If you aren’t asking, then a meta-trigger is important: consider what improvements could be made and, within that same question, what ladders you can extend to others for context improvement and isolation protection, if not also altruism. I find a decision to be satisfied to be important, such that improvements are a delightful bonus rather than an endless treadmill.
To be more specific about “regularly”, the most intense period of asking was when my consciousness, planning, and intentionality were really coming online. I had a big backlog of deeply unmet needs/desires and missing skills/habits, and my self-analysis of the dysfunctionality of my embodied strategies was really coming online to point out my deficiencies. The evidence was ineffectiveness at accomplishing my goals and serving my needs. During that time I was obsessively asking at any time I was not otherwise distracted and deeply engaged. I’d estimate 40% of weekly waking hours, with days approaching 80-90%. These were times of immense and sometimes unsettling growth. It would not have felt worth the cost if it had not been necessary, if my life and modus operandi had been working.

Those early days were focused on more specific, concrete concerns, decisions, and operations (let’s bundle this as Q). As time has gone on, the considerations shifted to assemblies of Qs, explorations and tests of how Qs interact, and on to structures of Q assemblies with far greater complexity and combinatoric considerations that include multi-agent game theory. Over time I would estimate that I’ve settled into a cadence of 5-10% of weekly waking hours, with spikes around changes and developments in life. That said, I have tried to arrange to put a profitable business behind raising that number, but I have so far failed. It’s become at least a pleasurable pastime that I enjoy sharing in partnership, though that can be even more complex and hard to find partners for.
I suspect this built my intellect and competence which has largely made my life relatively wonderful (in broad comparison to the population but starkly so against the trajectory I had been on prior) and given me the empowerment to navigate life with an intentionality, consciousness, and skill that I do not observe often. I could recommend nothing more than getting very serious and exceedingly honest about such questions.
I appreciate that. I think it’s good to have better access to reality. What I’m saying is that I would suggest prioritizing enhanced awareness of other dimensions of reality. Before establishing my monogamous marriage, I’d happily have dated someone considered ugly or with deformities who nonetheless had high compatibility emotionally, intellectually, and socially. My happiness and smoothness in doing so would probably have served as one of the compatibility bars that they would have employed to evaluate my fitness. There is a large societal focus on attractiveness, but I don’t expect it to be the best predictor of optimal relational outcomes. You seem to want to privilege and protect focus on that attribute. Do I misunderstand?
This feels like it would really reinforce focus on the easily accessible attribute of external appearance. For long-term value/return/satisfaction in relationships, I would expect the attributes that matter more to include emotional and social skill level; compatible social schemas, or the ability to negotiate and agree across social schemas; lived lifestyle, adjusted to remove base context determinants; and similarity in desire to introspect, express, and connect. Ignoring privacy-related fears, which tend to be strongest around centralized repositories, if we were to invest systemically in pairing (or, more generally, grouping), I would expect us to use a more effective process than the unimaginative and intentionally anti-effective Tinder-like approach.
Regarding pieces of oneself, consider the ideas of IFS (Internal Family Systems). “Parts” can be said to attend to different concerns, and if one part can distract from the others, then an opportunity to maximize utility across dimensions may be missed. One might also suggest that attending to only one concern over time can result in a slight movement towards disintegration, as a result of increasingly strong feelings about the “ignored” concerns. Integration or alignment, with every part joining a cooperative council, is often considered a goal, and personification can assist some in more peaceably achieving that. I personally found that the suggestion to personify felt weird and false.
Of course, Loqi’s suggestion could in some cases be less optimal than the less-easy-to-accept presentation.
While the approach you suggest could provide a more subjectively negative experience, the cognitive dissonance could cause the utterance to gain more attention within the brain, as a more aberrant occurrence among its stimuli, and as a result be worthy of further analysis and consideration.
I am generally in favor of delivering notions I believe to be helpful in a manner which can/will be accepted. In some cases, however, others are able, and more likely, to accept a less-than-pleasant delivery mechanism. This is contingent upon the audience, of course, as well as the level of knowledge you have about your audience. In the absence of such knowledge, the more gentle approach seems advisable.
I’d take this differently.
I would at least hope that you are claiming that there is, in fact, a choice, whether the subjective experience of the moment provides indication of the choice or not.
Stated differently, maybe you could be claiming that there is the possibility of choice for all people, whether or not a person is aware of or capable of taking advantage of that fact: that a person can alter his or her self in order to provide his or her self with the opportunity to choose in such situations.
Loqi’s feedback seems to me to be suggesting that individuals who do not have a belief that they have such a “possibility of choice” could have a more positive phenomenological experience of your assertion and as a result be more likely to integrate the belief into their own belief set and [presumably] gain advantage by encountering it.
That is me asserting that Loqi does not appear to be rejecting your assertion but only suggesting a manner by which it can be improved.
It is simply less demanding to choose a small set of ideas to support, or to oppose, than to understand both sides and perform the even more difficult reconciliation of the differentiated concepts.
For example: the individual versus society. Individuals are by definition part of the collection of people that is a society and societies do not exist except where there are individuals. The greater utility exists where both individuals and societies are served to their greatest interests by the choices we make but it is much easier to communicate about the importance of one over the other. The falseness of the division or belief in the contention is the problem/distraction rather than the solution.
If intelligence is efficient optimization across domains, then satisfying the utility of a greater set of domains requires greater intelligence. Increase the number of sides or the complexity of the considerations and you reduce the population that can grasp or support the initiatives or arguments, and as a result you reduce your success as a candidate. This, of course, is the difficulty of improving beyond the current steady state.
The simplest reason to care about Friendly AI is that we are going to be coexisting with AI, and so we should want it to be something we can live with.
I’d like to suggest that it is important that the friendly AGI would hopefully also want to live with us. I’d further suggest that this is part of why efforts such as LW are important.
As wedrifid appeared to intimate in the original reply, the actual discovery that “there is cognitive activity present” from the given link is the key pertinent knowledge for opening the exploration of what the self is.
Thanks for the further context.
I was originally impressed (and continue to be) by diegocaleiro’s open self-presentation (awesome!) and hoped merely to provide, at its most hopeful, a possible sense of dependable-enough structure for accelerated progression beyond pitfalls that I had previously slowed within.
While the analytical ideation is pleasant, relevant, and useful, the emotive or experiential consequences seem relevant and vital as catalysts that can either grow or inhibit the evolution we are attempting to partake in for the artifact of sentience. We can choose our preferred modes or aspects, but that does not deny that our persons are broader, or that each has strengths to provide and weaknesses to avoid.
A reference to further reading I should do would be likewise appreciated.
I would merely suggest, qualitative assignments aside, that it is enough to deny the nihilistic mind states that can occur as one possible result of abandoning cached selves.
I am curious if there is a link or further explanation you can provide to help me more easily understand your objections and why you have them. I’m not interested in defending Descartes or his body of work, but his is one of the earliest and better-known accounts I have encountered (being relatively not-well-read) of that particular strain of thought, which was itself an important part of my formative years. Care to provide?
In my experience of similar-appearing shifts of person, what you are experiencing is the “instability” that is to become your new (and more) “stable” state. It will provide advantages and disadvantages and is, in my finding, a more optimal but longer-term strategy for the living of life.
Remember:
1. You exist (i.e. “you think, therefore you are”; thank you, Descartes).
2. Item 1 above provides at least one example of an indisputable truth that you may know. As a result, truth exists whether you know what that truth is or not.
3. Although it may appear less stable, your newer normal provides greater stability. After all, a system which can be made unstable was not stable; it only maintained such an appearance.
4. Don’t forget to rest and appreciate the work you have chosen for yourself. Doing so can only support your continued ability to strive further.
Regarding the concerns you have for the emerging new morality, I think you’ll find soon enough that you come full circle. There are, experientially, more options before you than you previously allowed yourself. However, some of those options are better than others. In the end, given the shared nature of existence, your own most selfish interests will bear a relationship to the greatest selfish interests of the other sentiences in said existence. There is some trickiness in that last statement, but I stand by it. As you begin to come around this “full circle”, what I would suggest you’ll find is that you’ll not only approach your previous state, in a sense, but that it will be supported by a greater appreciation of, awareness of, and capability in how to better obtain your goals.
Enjoy the exploration of your possible person states!
I am Erik Erikson. By day I currently write patents and proofs of concept in the field of enterprise software. My chosen studies included the neuro and computer sciences, in pursuit of the understanding that can produce generally intelligent entities of equal to or greater than human intelligence, less our human limitations. I most distinctly began my “rationalist” development around the age of ten, when I came to doubt all truth, including my own existence. I am forever in debt to the “I think, therefore I am” idiom as my first piece of knowledge. I happened upon LW through singularity.org and appreciate the efforts here. Of particular interest to me is improved consideration of the formulated goal for AI (really, for any sentient entity) I have devised: the manifested unification of all ideals. I was pleased to find this related to the formulation of intelligence that appears commonly accepted here: “cross-domain optimization”. However, I have also been concerned for some time about the mechanical bias that may be implicit: it seems clear that a system which functions through growth (the establishment of connections) as a result of correlated signals would be inherently and, of concern, incorrectly biased towards favoring the unification concept.
Is agriculture used for war? Without the efficiency that allowed for specialization, would we have militaries? War fighters cannot fight without food; is the farmer complicit? How about the trees from which fruits were collected, or the animals that were hunted?