I try to practice independent reasoning and critical thinking, and to challenge current solutions to be more considerate and complete. I do not reply to DMs for non-personal discussions (non-personal with respect to the user who reached out directly); instead I will post here, referencing the user and my reply.
ZY
This is about basic human dignity and respect for other humans, and has nothing to do with politics.
Oxford Languages (or really just what comes up after googling) says “rational” is “based on or in accordance with reason or logic.”
I think there are a lot of other definitions (I think LessWrong mentioned it is related to the process of finding truth). For me, it is useful to first break this down into two parts: 1) observation and information analysis, and 2) decision making.
For 1): Truth, but particularly causality finding. (Very close to the first one you bolded, and I somehow feel many of the other ones are just derived from it. I added causality because many true observations do not really capture causality.)
For 2): My controversial opinion is that everyone is probably/usually a “rationalist”—it is just that sometimes the reasoning is conscious, and other times it is sub/unconscious. These reasonings are unique to each person. It would be dangerous, in my opinion, if someone tried to practice “rationality” based on external reasonings, or on reasonings that are only recognized by the person’s conscious mind. I think a useful practice is to 1. notice what one intuitively wants to do vs. what one thinks one should do (or the multiple options being considered), 2. ask why there is a discrepancy, 3. at least surface the unconscious reasoning, and 4. weigh things out (the potential reasonings that lead to conflicting results).
From my perspective—I would say it’s 7 and 9.
For 7: One AI risk controversy is that we do not yet know of or see an existing model that poses that risk. But there might be models that frontier companies such as Google are developing privately, and Hinton may have seen more there.
For 9: Expert opinions are important and generally add credibility, as the question of how/why AI risks can emerge is at root highly technical. It is important to understand the fundamentals of the learning algorithms.
Lastly, for 10: I do agree it is important to listen to multiple sides, as experts do not always agree among themselves. It may be interesting to analyze the background of the speaker to understand their perspective. Hinton seems to have more background in cognitive science compared with LeCun, who seems to me to be more strictly computer science (but I could be wrong). I am not very sure, but my guess is these backgrounds may affect how they view problems. (I am only saying they could result in different views, not commenting on which one is better or worse. This is relatively unhelpful for a person deciding whom they want to align with more.)
(Like the answer on declarative vs. procedural.) Additionally, reflecting on practicing Hanon for piano (which is almost a pure finger strength/flexibility type of practice), it might also be for physical muscle development and control.
I agree with a lot of the things in this post, including: “But implicit in that is the assumption that all DALYs are equal, or that disability or health effects are the only factors that we need to adjust for while assessing the value of a life year. However, If DALYs vary significantly in quality (as I’ll argue and GiveWell acknowledges we have substantial evidence for), then simply minimizing the cost of buying a DALY risks adverse selection.”
I had the same questions/thoughts when I did the Introduction to Effective Altruism course as well.
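To make the adverse-selection worry in the quoted passage concrete, here is a toy sketch with entirely made-up numbers (the program names, costs, and quality weights are my own placeholders, not GiveWell's or the post's): if DALYs vary in quality but programs are ranked purely by cost per DALY, the cheapest program can win even when its quality-adjusted value is worse.

```python
# Toy numbers only: (cost per DALY in $, hypothetical "quality" weight of those DALYs)
programs = {
    "program_A": (50, 0.4),   # cheap DALYs, but low quality
    "program_B": (80, 0.9),   # pricier DALYs, but high quality
}

# Ranking purely by cost per DALY picks A.
cheapest = min(programs, key=lambda p: programs[p][0])

# Ranking by cost per quality-adjusted DALY picks B (80 / 0.9 ≈ 89 vs. 50 / 0.4 = 125).
best_quality_adjusted = min(programs, key=lambda p: programs[p][0] / programs[p][1])

print(cheapest, best_quality_adjusted)  # program_A program_B
```

The ranking flips once quality is priced in, which is the sense in which optimizing raw cost per DALY can adversely select for lower-quality DALYs.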
I came across this post recently, and I really appreciate you speaking up, fighting for your rights, and raising awareness. Our society does not have proper, successful education on what consent is; even though at my university we have consent courses during the first week of school, people don’t take them seriously. Maybe they should add a formal test on that, so that you cannot enter school unless you pass. This should apply to high school as well. A 2018 article points out how to teach consent at all education stages: https://www.gse.harvard.edu/ideas/usable-knowledge/18/12/consent-every-age
Many sexual assaults happen between partners, and your experience is clearly non-consensual and therefore sexual assault. I am a bit hesitant to comment, as I am afraid of reopening any wounds, but I also want to show support and gratitude🙏.
I am guessing maybe it is the definition of “alignment” that people don’t agree on / are mixed on?
Some possible definitions I have seen:
(X risks) and/or (catastrophic risks) and/or (current safety risks)
Any of the above + general capabilities (an example I saw is “how do you get the AI systems that we’re training to optimize the thing that we actually want to optimize” from https://arize.com/blog/openai-on-rlhf/)
And maybe some people don’t think it has gotten to solving X risks yet, if they view the definition of alignment as X risks only.
My guess is:
AI pause: there is no observation of what safety issues to address, labs work on capabilities anyway, and this may lead to only capability improvements. (The assumption is that an AI pause means not releasing models.)
RSP: risk O is observed, more resources shift to mitigating O and fewer to capabilities, and when protection P is done, the model is released and resources shift back to capabilities. (Ideally.)
Nice post pointing this out! Relatedly, on misused/overloaded terms—I think I have seen this getting more common recently (including overloaded terms that mean something else in the wider academic community or in society; and, self-reflecting, I sometimes do this too and need to improve).
I like the idea and direction of text watermarks, and more research could be done on how to do this feasibly, as adding watermarks to text seems to be much harder than adding them to images.
Maybe this is already mentioned in the article and I missed it—have you done any analysis on how these methods affect semantics/word-choice availability from the author’s perspective?
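To make my question more concrete, below is a minimal sketch of one published family of text watermarks, a "green list" token bias in the style of Kirchenbauer et al. (2023); this is not necessarily the method in the article, and the vocabulary, bias strength, and scores are made-up placeholders.

```python
import hashlib
import random

# Illustrative sketch only, not the article's method: a "green list" watermark
# in the style of Kirchenbauer et al. (2023). VOCAB, GREEN_FRACTION, and BIAS
# are hypothetical placeholders.
VOCAB = ["the", "a", "quick", "fast", "brown", "dark", "fox", "dog", "jumps", "leaps"]
GREEN_FRACTION = 0.5
BIAS = 2.0  # score boost added to green-listed candidate tokens

def green_list(prev_token):
    """Deterministically split the vocabulary into green/red lists, seeded by the previous token."""
    seed = int(hashlib.sha256(prev_token.encode()).hexdigest(), 16) % (2 ** 32)
    rng = random.Random(seed)
    shuffled = sorted(VOCAB)
    rng.shuffle(shuffled)
    return set(shuffled[: int(len(shuffled) * GREEN_FRACTION)])

def pick_next(prev_token, scores):
    """Choose the next token after boosting green-listed candidates."""
    greens = green_list(prev_token)
    biased = {tok: s + (BIAS if tok in greens else 0.0) for tok, s in scores.items()}
    return max(biased, key=biased.get)

def detect(tokens):
    """Fraction of tokens in the green list of their predecessor; high values suggest a watermark."""
    hits = sum(tok in green_list(prev) for prev, tok in zip(tokens, tokens[1:]))
    return hits / max(len(tokens) - 1, 1)
```

Because near-synonyms (e.g. "quick" vs. "fast") can land on opposite sides of the green/red split, the bias nudges word choice, which is the semantics/word-choice effect I am asking about.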
How did you translate the dataset, and what is the translation quality?
I am relatively new to the community, and I was excited to join and learn more about actual methods to address AI risks, and about how to think scientifically in general.
However, after using it for a while, I am a bit disappointed. I realized I probably have to filter many things here.
Good:
There are good discussions and previous summaries on alignment that are actually useful. There are people who work on these things, from both NGOs and industry, showing what research they are doing or what actions they have taken on safety. Similarly with bioweapons, etc.
I also like articles that try to identify the best thing to do at the intersection of passion, skill, and importance.
I like the articles that mention/promote contacting reality.
Bad:
Sometimes I feel the atmosphere is "edgy", and sometimes I see people arguing over relatively small things where I don't see how the conclusion will lead to actual actions. Maybe this is just the culture here, but I found it surprising how easily people call each other "wrong", although many times I felt both sides were just offering opinions. I also feel I see fewer "I think"s or "in my opinion"s to qualify claims than in a usual workplace, at least; people appear very confident or sure about their own beliefs when communicating. From my understanding, people may be practicing "strong opinions, weakly held", thinking they can say something strong and change it easily—I found that to be easier in verbal communication among colleagues, schoolmates, or friends, where one can talk to a relatively small group every day. But on a platform with many more people, where tracking changes of opinion is hard, it might be more productive to soften the "strong opinion" part and qualify more in the first place.
I do think the downvote/upvote system, which is tied to how much you can comment or contribute to the site (or whether you can block a person), encourages groupthink: you need to identify with the sentiment of the majority (I think another answer mentioned groupthink as well).
I feel many articles or comments are quite personal/non-professional (the communication feels different from what I encounter at work), which makes this community a bit confusing, mixing personal and professional opinions/sharing. I think it would be nice to have a professional section and a separate personal section, with different communication rules for each for more efficiency; this would also naturally filter articles for people who, at certain times, want to focus on different things. It could be good to organize articles better by section as well, though there are "tags" currently.
This is a personal belief, but I am a bit biased toward action and hope to see more discussion of how to execute things, or at least of how actions should change based on a proposed belief change.
This might be something more fundamental, based on personal belief vs. the beliefs of (some, but not everyone on) LessWrong—to a certain extent I appreciate prioritization, but when it is too extreme I feel it is 1) counterproductive for solving the issue itself, and 2) so extreme that it discourages newcomers who also want to work on shared issues. It also feels more fear-driven than rationality-driven, which is discrediting in my opinion.
For 1: many areas of work are interrelated, and focusing on only one may not actually achieve the goal.
For 2: sometimes it just feels alarming/scary when I see people saying things like "do not let other issues we need to solve get in the way of {AI risks/some particular thing}".
What I am sensing (though I am still kind of new here, so I might not have dug enough through the articles) is that we may lack social science connections/backgrounds and knowledge of how the world actually works, even when talking about society-related things (I forget which specific articles gave me this feeling; maybe ones related to AI governance).
I think for now I will probably continue using it, but with many, many filters.
I am not sure that is good reasoning, though I am also looking for reasoning/justification. The reasoning here seems to say—animals cannot speak our languages, so it is okay to assume they do not want to survive (this assumes the existence of humans naturally conflicts with the other species).
The reasoning I think I am trying to accept is that by nature we seem to want to self-preserve, and maybe, unfortunately, much of the altruism we want to do has non-altruistic roots, which may be fine/unavoidable. Maybe it would be good to also consider the earth as a whole / expand our moral circles (when we can), and exhibit less human arrogance. Execution-wise this may be super hard, but "thinking"-wise there is value in recognizing this aspect.
I don’t think that is only the viewpoint of the dead (it also seems very individually focused/personal rather than focused on collective species experiment/exploration). This is about thinking critically and from different perspectives for truth finding, which is related to the definition of rationality on LessWrong (the process of seeking truth).
I am operating on the assumption that many of us seek true altruism on this platform. I could move this to the effective altruism platform.
This sounds like an individual-level answer (which is still an option), but the question is about the collective level.
The question is zooming out from humanity itself and viewing things from an out-of-human kind of angle. I think it is an interesting angle, and it reminds us that many things we do may not be as altruistic as we thought.
Also, I think maybe that would mean suffering risks would need more attention.
Ultimately, my answer to this might be—morally, humans do not need to last forever, but we are self-preservation focused, and it is okay to pursue that and to practice altruism whenever we can, either individually or collectively; but when there is a conflict with our preservation, how to pursue this "without significantly harming others" is tricky.
[Question] Non-human centric view of existence
I do not think they meant anything AI-specific, just the general existence of humanity vs. other species.
The question was not about whether humanity will live forever; the original prompt is "why must human/humanity live/continue forever?", which is in the original question.
Do not feel the need to reply in any way; nothing here is urgent. (I am not sure why you would reply while in bad shape, or mention that; I initially thought it was related to the topic.)
I am not sure if this is the point or the focus of the topic, as it is irrelevant to the question. [private]
Also curious—why are you interested in knowing how this question came up? Is that helpful to your answer, and how would it change the answer? I am curious to see how your answers would change depending on the different ways this question may arise.
I was talking with a friend about the risk of AI overpowering humanity, and then about death vs. suffering.
I have been having similar thoughts on the main points here for a while, so thanks for this.
I guess to me what needs attention is when people do things along the lines of "benefit themselves and harm other people". That harm has a pretty strict definition, though I know we can always come up with borderline examples. This definitely includes the abuse of power in our current society and culture, and any current risks, etc. (For example, constraining to just AI, and with a content warning: https://www.iwf.org.uk/media/q4zll2ya/iwf-ai-csam-report_public-oct23v1.pdf. This is very sad to see.) On the other hand, with regard to climate change (which can also be current) or AI risks, we should also be concerned when corporations or developers neglect known risks or pursue science/development irresponsibly. I think it is not wrong to work on these; I just don't believe in "do not solve the other current risks and only work on future risks."
On some comments saying our society is "getting better"—sure, but the baseline is a very low bar (slavery, for example). There are still many, many, many examples in different societies of how things are still very systematically messed up.