Yes, it makes a lot of sense. It’s more of a method to combat already existing awkwardness than a preventative measure. There’s no need to bring it up if you’re feeling comfortable anyway.
Hope this is appropriate for here.
I had an epiphany related to akrasia today, though it may apply generally to any problem where you are stuck. For the longest time I thought to myself: “I know what I actually need to do, I just need to sit down and start working, and once I’ve started it’s much easier to keep going.” I was thinking about this today and had an imaginary conversation where I said: “I know what I need to do, I just don’t know what I need to do so I can do what I need to do.” (I hope that makes sense.) And then it hit me: I have no fucking clue what I actually need to do. It’s like I’ve been trying to bail water out of a sinking ship with buckets, instead of fixing the hole in the ship.
Reminds me in hindsight of the “definition of insanity”: “The definition of insanity is doing the same thing over and over and expecting different results.”
I think I believed that I lacked the necessary innate willpower to overcome my inner demons, instead of lacking a skill I could acquire.
This is nice. A few suggestions:
Ignore upper and lower case in the input, and accept numbers as shortcuts for GOTIT and GIVEUP.
Include a file, or a section in the readme, explaining what kinds of rules can actually exist. For example, without looking at the source code I don’t know whether there can be a rule like “contains an even number of ‘a’”.
At the end, include an option to play again or quit.
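As an illustration of the suggestions above, here is a minimal sketch of how such rules and commands could be expressed. None of these names come from the actual project; the rule predicates and the `parse_command` helper are hypothetical, only showing that a rule like “contains an even number of ‘a’” is easy to represent (and therefore easy to document in the readme).

```python
# Hedged sketch: hypothetical rule predicates for a word-guessing game.
# These names do NOT come from the actual project; they only illustrate
# how rules such as "contains an even number of 'a'" could be expressed.

RULES = {
    "even number of 'a'": lambda w: w.count("a") % 2 == 0,
    "starts with a vowel": lambda w: len(w) > 0 and w[0] in "aeiou",
    "longer than 5 letters": lambda w: len(w) > 5,
}

# Case-insensitive command handling with numeric shortcuts (first suggestion):
COMMANDS = {"gotit": "GOTIT", "1": "GOTIT", "giveup": "GIVEUP", "2": "GIVEUP"}

def parse_command(raw):
    """Return the canonical command, ignoring case and surrounding spaces."""
    return COMMANDS.get(raw.strip().lower())

def matches(rule_name, word):
    """Check a (lowercased) guessed word against one named rule."""
    return RULES[rule_name](word.lower())
```

Listing the keys of such a `RULES` table in the readme would directly answer the “what rules can exist” question.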
I know what I actually need to do, I just need to sit down and start working and once I’ve started it’s much easier to keep going.
Let’s say, I have some homework to do. In order to finish the homework, at some point I have to sit down at my desk and start working. And in my experience, actually starting is the hardest part, because after that I have few problems with continuing to work. And the process of “sitting down, opening the relevant programs and documents and starting to work” is not difficult per se, at least physically. In a simplified form, the steps necessary to complete my homework assignment are:
1. Open the relevant documents/books, get out pen and paper, etc.
2. Start working and don’t stop working.
I know what I need to do, I just don’t know what I need to do, so I can do what I need to do.
Considering how much trouble I have getting to the point where I can do step one (sometimes I falter between steps one and two), there must be at least one necessary step zero before I am able to successfully complete steps one and two. And knowing steps one and two does not help very much, if I don’t know how to get to a (mental) state where I can actually complete them.
A different analogy: I know how to create a checkmate if I have only a rook and king and my opponent has only a king. But that doesn’t help me if I don’t know how to get to the point where only those pieces are left on the board.
Would it maybe help if you left some of the details vague at first, to get back into writing, and went back later to rewrite those parts?
See also this.
I would be interested to see the results of some clustering algorithm on the comment data. It may be that long comments can be separated into high-karma and low-karma clusters, and we could then analyze the differences between them. If it is possible to extract features of high-quality posts, then those features can be the goal, instead of just length.
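The clustering idea could be sketched roughly like this. Everything here is an assumption for illustration: the features (length, question marks, links) are hand-picked guesses, the comment data is invented toy data, and a real analysis would use an actual clustering library and the real comment corpus.

```python
# Hedged sketch: split comments into two clusters by crude text features,
# then compare mean karma per cluster. Features and data are made up.
import math

def features(comment):
    text = comment["text"]
    return [len(text), text.count("?"), text.count("http")]

def dist(a, b):
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))

def kmeans2(points, iters=20):
    """Plain two-means with deterministic init; returns a 0/1 label per point."""
    centers = [list(min(points)), list(max(points))]
    labels = [0] * len(points)
    for _ in range(iters):
        labels = [0 if dist(p, centers[0]) <= dist(p, centers[1]) else 1
                  for p in points]
        for c in (0, 1):
            members = [p for p, l in zip(points, labels) if l == c]
            if members:
                centers[c] = [sum(col) / len(members) for col in zip(*members)]
    return labels

comments = [  # fabricated toy data, only so the sketch runs
    {"text": "Short question?", "karma": 2},
    {"text": "A long, detailed analysis " * 40, "karma": 25},
    {"text": "Another long effortpost " * 35, "karma": 30},
    {"text": "Thanks!", "karma": 1},
]
labels = kmeans2([features(c) for c in comments])
mean_karma = {}
for c in (0, 1):
    ks = [cm["karma"] for cm, l in zip(comments, labels) if l == c]
    mean_karma[c] = sum(ks) / len(ks)
```

If one cluster reliably has higher mean karma, inspecting which features drive the split is the “extract features of high-quality posts” step.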
I also think it’s dangerous to focus too strongly on karma, because karma is only a rough approximation of actual quality. For example, I believe many short comments that only ask for clarification are more important than their karma reflects.
I don’t know how it is for others, but personally, I am much more likely to read a full text if it’s posted here directly than if there’s just a link.
Your sibling may or may not be interested in participating in Google Summer of Code, though the pay may be too little and I’ve heard (but not confirmed) that only around 10% of all applicants are taken.
This raises the question: is it possible to deduce the correct person without creating conscious simulations of possibly very many people? Doing so would raise ethical questions of its own.
I view it from a practical viewpoint: even if you believe the Buddhist view that the self is an illusion etc., you still feel like you have a self for >95% of the time (i.e. whenever you’re not meditating). When you wake up in the morning you feel like you are the same person that went to sleep the evening before. On the other hand, a clone of you would not feel like it is you any more than one identical twin feels it is the other. So ideally people in the future should create a person/simulation that feels like it went to sleep and woke up again when it “should” have died.
Problems arise mainly when you hit something that only partially feels like it is the same person. I’d say there is still a considerable range of possible people that are sufficiently similar that we say it is the same person, since there is also considerable variation in the normal functioning of human brains.
Human memory is quite inaccurate, so two versions of a person with only slightly different memories could be said to be the same person. This may go quite far if we consider the effects of Alzheimer’s disease or other forms of amnesia.
Being heavily intoxicated can, to an extent, feel like being a different person. Personality and habit changes over the course of your life can effectively make you a different person, yet we still say it is the same person.
I wonder whether it is possible to find some sort of “core” personality/traits/memories, such that we can say as long as it remains unchanged it is the same person. I suspect there isn’t, as it seems to be a gradient instead of a binary classification.
Donating now vs. saving up for a high passive income
Is there any sort of consensus on whether it is generally better to (a) directly donate excess money you earn, or (b) save and invest that money until you have a high enough passive income to be financially independent? And does the question break down to: is the long-term expected return for donated money (e.g. in terms of QALYs) higher than for invested money (donated at a later point)? If it is higher for invested money, there is a general problem of when to start donating, because in theory, the longer you wait, the higher the impact of that donated money. If the expected return for invested money is higher at the moment, I expect there will come a point in time where this is no longer the case.
If the expected return is higher for immediately donated money, are there additional benefits of having a high passive income that can justify actively saving money? E.g. not needing to worry about job security too much...
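The trade-off in the question can be made concrete with a toy calculation. All rates and amounts below are invented for illustration, not real estimates: the idea is simply that invested money compounds at a market rate, a donation’s impact may compound at some “philanthropic return” rate, and over a fixed horizon whichever rate is higher dominates.

```python
# Toy comparison (all numbers invented): donate an amount now, whose
# impact compounds at rate r_d per year, versus invest it at market
# rate r_i for `years` and donate the proceeds then.

def donate_now_value(amount, r_d, years):
    """Impact of an immediate donation after `years` of compounding."""
    return amount * (1 + r_d) ** years

def invest_then_donate_value(amount, r_i, years):
    """Lump sum available to donate after `years` of investing."""
    return amount * (1 + r_i) ** years

amount, years = 1000, 20
for r_d, r_i in [(0.03, 0.07), (0.10, 0.07)]:
    now = donate_now_value(amount, r_d, years)
    later = invest_then_donate_value(amount, r_i, years)
    print(f"r_d={r_d:.0%}, r_i={r_i:.0%}: "
          f"donate now = {now:.0f}, invest then donate = {later:.0f}")
```

On these made-up numbers, investing first wins whenever r_i > r_d and loses otherwise; the hard empirical question the comment raises is which rate is actually higher, and whether r_d changes over time.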
Do you know about this thing? It gets introduced at 11:00. It’s originally intended to let deaf people hear again, but later on he shows that you can use any data as input. It’s (a) probably overkill and (b) not commercially available, but depending on how much time and resources you want to invest, I imagine it shouldn’t be too hard to build one with just three pads or so.
Do you continue to wear them on a regular basis? Overall, recommend it, yes or no?
What purpose would such a measure serve? And are you trying to find a universal measure, or one that is individual to every person? Since different people have different goals, you could try to measure how well reality aligns with their goals, but then you just select for people who can accurately predict what they can achieve.
I have a definition of success. For me, it’s very simple. It’s not about wealth and fame and power. It’s about how many shining eyes I have around me.
Trying to summarize here:
The open letter says: “If we allow autonomous weapons, a global arms race will make them much cheaper and much more easily available to terrorists, dictators etc. We want to prevent this, so we propose to outlaw autonomous weapons.”
The author of the article argues that the technology gets developed either way and will be cheaply available, and then continues to say that autonomous weapons would reduce casualties in war.
I suspect that most people agree that (if used ethically) autonomous weapons reduce casualties. The actual questions are: how much (more) damage can someone without ethical qualms do with autonomous weapons, and can we implement policies to minimize the availability of autonomous weapons to people we don’t want to have them?
I think the main problem with this whole discussion was already mentioned elsewhere: Robotics and AI experts aren’t experts on politics, and don’t know what the actual effects of an autonomous weapon ban would be.
Using Prediction Book (or other prediction software) for motivation
Does anyone have experience with the effects of documenting things you need to do in PredictionBook (or something similar) and the effects it has on motivation/actually doing those things? Basically, is it possible to boost your productivity by making more optimistic predictions? I’ve been dabbling with PredictionBook and tried it with two (related) things I had to do, which did not work at all.
What does “if used ethically” mean?
I was thinking mainly along the lines of using it in regular combat vs. indiscriminately killing protesters.
Autonomous weapons should eventually be better than humans at (a) hitting targets, thus reducing combatant casualties on the side that uses them, and (b) differentiating between combatants and non-combatants, thus reducing civilian casualties. This is working under the assumption that something like a guard robot would accompany a patrolling squad. Something like a swarm of small drones that sweeps a city to find and subdue all combatants is of course a different matter.
The US is already using its drones in Pakistan in a way that violates many provisions of international law, such as shooting at people who rescue the wounded.
I wasn’t aware of this, do you have a source on that? Regardless, the number of civilian casualties from drone strikes is definitely too high, from what I know.
Good point. When I wrote down the predictions, I just used my usual unrealistically optimistic estimate of “this is in principle doable in this time and I want to do it”, i.e. my usual “planning” mode, without considering how often I usually fail to execute my “plans”. So in this case I think I adjusted neither my optimism nor my plans; I only put my estimate for success into actual numbers for the first time (and hoped that would do the trick).
Have you actually experienced this, or is it an assumption? I would have expected that saying these sorts of things comes off as a red flag for “this person is awkward/desperate”, leading people to avoid contact.