I think this might be the most strongly contrarian post here in a while...
dlthomas
Not all formalizations that give the same observed predictions have the same Kolmogorov complexity[.]
Is that true? I thought Kolmogorov complexity was “the length of the shortest program that produces the observations”—how can that not be a one-place function of the observations?
(and there’s the whole big-endian/little-endian question).
That’s cleared up by:
I'm a member of school number 25, since I agree with the last one and two more.
There are only 7 billion people on the planet, even if all of them gained internet access that would still be fewer than 13 billion. In this case, instead of looking at the exponential graph, consider where it needs to level off.
People are a lot more complicated than neurons, and it’s not just people that are connected to the internet—there are many devices acting autonomously with varying levels of sophistication, and both the number of people and the number of internet-connected devices are increasing.
If the question is “are there points in superhuman mind-space that could be implemented on the infrastructure of the internet roughly as it exists” my guess would be, yes.
[T]here’s no selection pressure or other effect to cause people on the internet to self-organize into some sort of large brain.
This, I think, is key, and devastating. The chances that we’ve found any such point in mind-space without any means of searching are (I would guess) infinitesimal.
Maybe if everyone played a special game where you had to pretend to be a neuron and pass signals accordingly you could maybe get something like that.
Unless the game were carefully designed to simulate an existing brain (or one designed by other means) I don’t see why restricting the scope of interaction between nodes is likely to help.
My point was just that there are a whole lot of little issues that pull in various directions if you’re striving for the ideal. What is or isn’t close enough can depend very much on context. Certainly, for any particular purpose something less than that will be acceptable; how gracefully it degrades no doubt depends on context, and likely won’t be uniform across the various types of difference.
One complication here is that you ideally want it to be vague in the same ways the original was vague; I am not convinced this is always possible while still having the results feel natural/idiomatic.
What if we added a module that sat around and was really interested in everything going on?
They are a bit rambly in places, but they’re entertaining and interesting.
I also found this to be true.
That’s not what “realist” means in philosophy.
I didn’t interpret the original post that way. “X realist” on this site doesn’t typically mean “person whose views about X are realistic” but rather “person who believes X is a real thing.” In this case, a “race realist” would be someone who believes that there are real, significant differences between races, presumably on a genetic basis. A race anti-realist would be someone who does not believe that. Both of these are categories of positions, into which a variety of different particular viewpoints might fall.
I would also value pointers to where/how these skills can be trained.
I guess I am jumping the shark here.
I don’t think that idiom means what you think it means.
For example, if all members of Congress were to shout loudly when a particular member got up to speak, drowning out their words, would this be censorship, or just their exercise of a community vote against that person?
One thing to note is that your comment wasn’t removed; it was collapsed. It can still be viewed by anyone who clicks the expander or has their threshold set sufficiently low (with my settings, it’s expanded). There is a tension between the threat of censorship being a problem on the one hand, and the ability for a community to collectively decide what they want to talk about on the other.
The censorship issue is also diluted by the fact that 1) nothing here is binding on anyone (which is very different from your Congress example), and 2) there are plenty of other places people can discuss things, online and off. It is still somewhat relevant, of course, to the question of whether there’s an echo-chamber effect, but be careful not to pull in additional connotations with your choice of words and examples.
You have an important point here, but I’m not sure it needs to reach “vast majority” before it becomes relevant.
Earmarking $K for X has an effect once $K exceeds the amount of money that would have been spent on X if the $K had not been earmarked. The size of the effect still certainly depends on the difference, and may very well not be large.
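The threshold logic above can be sketched as follows (a hypothetical illustration with made-up figures; it assumes the simple model that un-earmarked money would have funded X at some baseline level, ignoring fungibility elsewhere in the budget):

```python
def earmark_effect(earmarked, baseline_spending):
    """Extra spending on X caused by earmarking $K, given the amount
    that would have been spent on X anyway (the baseline)."""
    return max(0.0, earmarked - baseline_spending)

# Earmarking less than (or equal to) the baseline changes nothing:
assert earmark_effect(50.0, 80.0) == 0.0
# Earmarking beyond the baseline increases spending only by the difference:
assert earmark_effect(100.0, 80.0) == 20.0
```

So the effect kicks in only past the baseline, and even then its size is just the difference, which may well be small.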
Your point is about priors. That should also be Multiheaded’s point, however poorly expressed.
Our prior for “alien pranksters” is not high—the question is just how low it is compared to alternate explanations. Any reasonable priors assign vastly more probability that Multiheaded is human than… well, anything else, but even if we rejected that it would take a while before we got to aliens. The question of whether aliens or the supernatural is to be assigned higher probability when faced with something as striking as apparent manipulation of the physical constants underlying this universe is a much harder question.
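The comparison being made is one of prior odds, not absolute probabilities. A minimal sketch (the hypotheses and numbers here are invented purely for illustration; only the ordering and the size of the gaps matter):

```python
# Made-up priors for illustration; the point is the ranking, not the values.
priors = {
    "human prankster": 1 - 1e-6,
    "non-human terrestrial cause": 1e-6,
    "alien pranksters": 1e-12,
}

def prior_odds(h1, h2):
    """Prior odds ratio favoring hypothesis h1 over hypothesis h2."""
    return priors[h1] / priors[h2]

# Even after rejecting "human prankster", aliens remain far down the list:
assert prior_odds("human prankster", "alien pranksters") > 1e9
assert prior_odds("non-human terrestrial cause", "alien pranksters") > 1e3
```

The harder question in the quoted comment—aliens vs. the supernatural—is then a question of which of two very small priors is smaller.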
I can’t simply agree because others think differently.
Actually, you might be able to.
But that’s mostly a technicality; the correct interpretation and application of the theorem is a matter of some controversy, you’re not obliged to expect us to be rational truthseeking agents, and I don’t think you can rationally expect us to expect you to be a rational truthseeking agent in any event.
That depends how false, and in what ways.
But note that no one else can do this experiment for you.
Not an unreasonable way to draw the lines.
I believe the distinction you want is “continuous” vs. “discrete”, rather than “quantitative” vs. “qualitative”.