What conditions must obtain for an interaction between people to constitute “coming together to work on a common goal”?
That people have a common goal, and that they come together to work on it. Ok, I’m being deliberately tautologous there, but these are ordinary English words that we all know the meanings of, put together in plain sentences. I am not seeing what is being asked by your question, or by Zack’s. Examples of the phenomenon are everywhere (as are examples of its failure).

As for how to do real work as a group (an expression meaning the same as “coming together to work on a common goal”), and how much of it is going on at any particular place and time, these are non-trivial questions. They have received non-trivial quantities of answers. To consider just LW and the rationalsphere, see for example various criticisms of LessWrong as being no more than a place to idly hang out (a common purpose, but a rather trifling one compared with some people’s desires for the place); MIRI; CFAR; FHI; rationalist houses; meetups; and so on. In another sphere, the book “Moral Mazes” (recently discussed here) illustrates some failures of collaboration.

I do not see how the OP gives any entry into these questions, but I look forward to seeing other people’s responses to it.
People coming together to work on a common goal can typically accomplish more than if they worked separately. This is such a familiar thing that I am unclear where your perplexity lies.
Especially if human lifespan increases, there will be a strong case for keeping your values close, and not allowing a random walk until it hits an attractor.
In other words, be an attractor for your current values already. But at what age should one decide that here, at last, is where I am going to fix myself like a sea squirt on the landscape of values?
The first edition of this book was published in 2003. In 2005, Ioannidis’ paper “Why Most Published Research Findings Are False” started the reproducibility avalanche. How well have these experiments replicated? My university library only has the first edition. I can see from the Amazon preview of the second edition (2017) that the authors address this, but I can’t see enough pages to see what their response is. I understand from other sources that priming and ego-depletion have not stood up well.
The Google results are mainly about big emergencies and disasters, and institutional responses to them.
That is covered in the article. Alice should take on that cost to reduce the cost to John, demonstrating that she takes seriously the commitment she has broken rather than just scrapping it the moment it did not suit her.
IMO people should pay each other money for various acts that provide value much more often than they do.
Within a social circle, non-denominated performance of favours is the usual method, the magnitudes involved decreasing with distance, although never quite to zero. That way of doing things is the social fabric.
I do not ask money for giving a stranger in the street directions to where he wants to go.
There is a word for important problems that must be solved at once, with no time to learn how: emergencies. Learning how in advance is called emergency preparedness. Someone has mentioned first aid. On similar lines there is knowing how to handle a breakdown in the middle of nowhere, being able to fight, situational awareness, knowing how to interact with unfriendly policemen, and so on, all the way up to knowing where your towel is when Yellowstone explodes.
The Greeks didn’t have Newton’s laws, or calculus except for the method of exhaustion for calculating certain areas.
Perhaps American washing machines are so badly made that the smallest items of clothing can escape the drum and go down the drain? Either that or it’s a bug in the simulation.
Solomonoff induction is uncomputable because it requires knowing the set of all programs that compute a given set of data. If you just have two hypotheses in front of you, “Solomonoff induction” is not quite the term, as strictly understood it is a method for extrapolating a given sequence of data, rather than choosing between two programs that would generate the data seen so far. But understanding it as referring to the general idea of assigning probabilities to programs by their length, these are still uncomputable if one considers only programs that are of minimal length in their equivalence class. And when you don’t make that requirement, the concepts of algorithmic complexity have little to say about the example.
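The “general idea of assigning probabilities to programs by their length” can be illustrated with a toy sketch. This is not Solomonoff induction proper (which sums over all programs and is uncomputable); it just shows the length-based prior applied to a finite set of hypotheses whose program lengths, in bits, are assumed given:

```python
# Toy illustration of a length-based prior, NOT true Solomonoff induction.
# Each hypothesis is represented only by the length in bits of some program
# that generates the data; the prior weight is 2^-length, normalized over
# the finite set of hypotheses in hand.

def length_prior(lengths):
    """Normalized 2^-length weights over a finite set of hypotheses."""
    weights = [2.0 ** -n for n in lengths]
    total = sum(weights)
    return [w / total for w in weights]

# Two candidate programs, 10 and 12 bits long: the shorter one
# gets 2^2 = 4 times the weight of the longer.
print(length_prior([10, 12]))  # [0.8, 0.2]
```

Note that this already presupposes the minimal program lengths, which is exactly the uncomputable part; with arbitrary (non-minimal) programs the numbers mean little, as the comment above says.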
has the neural structure been destroyed, or is it sitting in the brain but not working?
Alzheimer’s destroys the brain. The only cure is never to get it.
Can anyone provide similarly concrete information for other countries? I doubt that I can invest with Vanguard or Wealthfront from the UK. The only UK institution I know of that offers index funds and whose web site doesn’t look dodgy is Legal & General. I have some savings with them but I’d like to have alternatives.
It sounds like Scott Alexander’s “Answer To Job”.
The main problem with paraconsistent logic is that it doesn’t exist. That is, there is no formalisation of it that anyone uses. Whatever non-standard logics people study, their metalanguage is always plain old mathematical logic, as foreshadowed by Aristotle, hoped for by Leibniz, brought to fruition by Boole, Russell, and Whitehead, and embodied into computers by Turing, von Neumann, and whoever else should be mentioned in the same breath as them. There is no other game in town, except perhaps subsystems for constructive reasoning (where e.g. any proof of ∀x.∃y… can be read as a program for computing a suitable y from a given x).
The idea of Buddhist logic has always puzzled me, because I don’t recognise anything that could be called logic in those writings, i.e. methods of reasoning. There are only recitations of various formulas like “true, not-true, neither true nor not-true, both true and not-true.”
“What do I have in my pocket?” said Gollum, and Frodo knew, and said, without philosophizing on the nature of truth.
Only insofar as even man’s sinning is part of the divine plan; but though part of the divine plan, it is still sin. The social order is the divine order, each is born into the position that God has ordained, and the King rules by the grace of God. So it has been believed in some former times and places.
There is still a trace of that in our (British) coins, which have a Latin inscription meaning “[name of monarch] by the grace of God King/Queen, defender of the faith.”
At this point I reveal that I just play a statistician on the net. I don’t know how people choose from among the many methods available. Is there a statistician in the house?
I’m not sure the feeling against slavery is so uniform.
“If X would benefit from being a non-slave more than being a slave, and there were no costs to society, would it be better for X not to be a slave?”
All would agree that for X it is better, but there is always a cost: no-one then gets the use of X as a slave. History, as Thucydides observes, has consisted of the strong doing what they will, and the weak bearing what they must. The strong see this as the proper nature of things, and would scoff at the question. The weak can but impotently daydream of paradise.
As late as 1848, these lines were penned in a Christian hymn: “The rich man in his castle, the poor man at his gate, God made them high and lowly, and ordered their estate.” The verse has since fallen into disuse, which would shock people of a few centuries ago, who saw the social order as divinely ordained.
The short answer is, you can’t. Solomonoff induction is not computable, and moreover depends on your model of computation. Asymptotically, the model of computation makes no difference, but for any finite set of examples, there is (for example) a universal Turing machine with short codes for those examples built in.
Practical methods of choosing among models use completely different methods. One collection of such methods is called regularization.
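To make “regularization” concrete, here is a minimal sketch of one such method, ridge (L2) regression, which penalizes large coefficients rather than comparing program lengths. The closed form w = (XᵀX + λI)⁻¹Xᵀy and the toy data are my own illustration, not anything from the original comment:

```python
# A minimal sketch of one regularization method: ridge (L2) regression.
# Closed-form solution: w = (X^T X + lam * I)^-1 X^T y.
# The penalty lam shrinks coefficients toward zero, trading a little bias
# for lower variance -- one practical way of choosing among models.
import numpy as np

def ridge_fit(X, y, lam=1.0):
    d = X.shape[1]
    return np.linalg.solve(X.T @ X + lam * np.eye(d), X.T @ y)

# Tiny example: noiseless line y = 2x.
X = np.array([[1.0], [2.0], [3.0]])
y = np.array([2.0, 4.0, 6.0])
print(ridge_fit(X, y, lam=0.0))  # [2.0], plain least squares
print(ridge_fit(X, y, lam=1.0))  # slightly shrunk below 2.0
```

The penalty strength λ is typically chosen by cross-validation, which is the practical stand-in for the uncomputable ideal described above.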
Question 2 seemed clear enough to me. That’s the one about visual migraines, yes?