astrocytes can fill with calcium either because of external stimuli or when their own calcium stores randomly leak out into the cell, a spontaneous process with no external trigger.
ocr-fork
They remember being themselves, so they’d say “yes.”
I think the OP thinks being cryogenically frozen is like taking a long nap, and being reconstructed from your writings is like being replaced. This is true, but only because the reconstruction would be very inaccurate, not because a lump of cold fat in a jar is intrinsically more conscious than a book. A perfect reconstruction would be just as good as being frozen. When I asked if a vitrified brain was conscious I meant “why do you think a vitrified brain is conscious if a book isn’t.”
Science was seduced by statistics, the math rooted in the same principles that guarantee profits for Las Vegas casinos.
I winced.
If you’re trying to outpaperclip SEO-paperclippers you’ll need a lot better than that.
I doubt LessWrong has any competitors serious enough for SEO.
Yudkowsky.net comes up as #5 on the “rationality” search, and being surrounded by uglier sites it should stand out to anyone who looks past Wikipedia. But LessWrong is only mentioned twice, and not on the twelve virtues page that new users will see first. I think you could snag a lot of people with a third mention on that page, or maybe even a bright green logo-button.
But this taxonomy (as originally described) omits an important fourth category: unknown knowns, the things we don’t know that we know. This category encompasses the knowledge of many of our own personal beliefs, what I call unquestioned defaults.
Does anyone else feel like this is just a weird remake of cached thoughts?
There’s a lot of stuff about me available online, and if you add non-public information like the contents of my hard drive, with many years’ worth of IRC and IM logs, an intelligent enough entity should be able to produce a relatively good reconstruction.
That’s orders of magnitude less than the information content of your brain. The reconstructed version would be like an identical twin leading his own life who coincidentally reenacts your IRC chats and reads your books.
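To put "orders of magnitude" in rough numbers, here's a back-of-envelope comparison. Every figure is an assumption for illustration (a few GB of lifetime logs and writings, the common ~10^14 ballpark for adult synapse count, ~1 byte of state per synapse), not a measurement:

```python
# Back-of-envelope comparison; all figures are rough assumptions.
logs_bits = 5e9 * 8       # ~5 GB of chat logs, emails, and writings
synapses = 1e14           # common ballpark for adult human synapse count
brain_bits = synapses * 8 # assume ~1 byte of state per synapse

gap = brain_bits / logs_bits
print(f"brain holds ~{gap:.0e}x more raw information")  # prints ~2e+04
```

Even with generous estimates for the logs, the gap is four or five orders of magnitude, which is the point: the logs pin down a person about as tightly as a biography pins down its subject.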
Is a vitrified brain conscious?
I think I understand better now.
Your proposal seems to involve throwing out “sophisticated mathematics” in favor of something else more practical, and probably more complex. You can’t do that. Math always wins.
The problem with math is that it’s too powerful: it describes everything, including everything you’re not interested in. In theory, all you need to make an AI is a few Turing machines to simulate reality and Bayes theorem to pick the right ones. In practice this AI would take an eternity to run. Turing machines live in a world of 0s and 1s, but we live a world made of clouds and birds, and a machine that talks in binary about clouds and birds would be complicated and hard to find. For a practical AI, you need a model of computation that regards nouns, verbs and people as the building blocks of reality, and regards Turing machines as very weird examples of nouns. This model would perform worse than a Turing machine if presented with a freakish alternate universe with no concept of time or space, but otherwise it’s fine. The hard part is compromising between simplicity and open-mindedness.
The same applies to neural networks. In theory, the shape can be anything you like as long as it’s big enough. (I’m leaving out a lot of details here, sorry.) Math is just the general framework that you build reality inside.
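The "Turing machines plus Bayes" recipe above can be sketched as a toy. This is not Solomonoff induction proper: the "programs" here are just repeating bit patterns, weighted by the 2^-length prior, and prediction is a prior-weighted vote among the patterns consistent with what's been seen so far. All names are my own invention:

```python
from itertools import product

def hypotheses(max_len=8):
    """All repeating-pattern 'programs' up to max_len bits,
    each weighted by a universal-style prior of 2^-length."""
    for n in range(1, max_len + 1):
        for pat in product("01", repeat=n):
            yield "".join(pat), 2.0 ** -n

def predict_next(observed, max_len=8):
    """Bayes in miniature: discard hypotheses inconsistent with
    the data, then take a prior-weighted vote on the next bit."""
    votes = {"0": 0.0, "1": 0.0}
    for pat, weight in hypotheses(max_len):
        # Unroll the pattern far enough to cover observed + 1 bit.
        stream = pat * (len(observed) // len(pat) + 2)
        if stream.startswith(observed):
            votes[stream[len(observed)]] += weight
    return max(votes, key=votes.get)

print(predict_next("010101"))  # shortest consistent pattern "01" dominates: prints 0
```

The trouble the comment describes shows up immediately: this enumerates every pattern, almost all of which are irrelevant, and a realistic hypothesis class (clouds, birds, nouns) has no such clean enumeration.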
Empirical methods are upside down. You’re starting with the gritty details, hoping that as everything piles up something more powerful than Bayesian inference will emerge. That won’t happen. Instead you’ll get a lousy, brittle copy of Bayesian inference that can’t handle anything too different from what it was designed for… like a human.
(Edited for grammar)
How much more information is in the ontogenic environment, then?
Off the top of my head:
The laws of physics
9 months in the womb
The rest of your organs. (maybe)
Your entire childhood...
These are barriers to developing Kurzweil’s simulator in the first place, NOT to implementing it in as few lines of code as possible. A brain-simulating machine might easily fit in a million lines of code, and it could be written by 2020 if the singularity happens first, but not by involving actual proteins. That’s idiotic.
The first two questions aren’t about decisions.
“I live in a perfectly simulated matrix”?
This question is meaningless. It’s equivalent to “There is a God, but he’s unreachable and he never does anything.”
I’m really confused now. Also I haven’t read Permutation City...
Just because one deterministic world will always end up simulating another does not mean there is only one possible world that would end up simulating that world.
Of course. It’s fiction.
First, the notion that a quantum computer would have infinite processing capability is incorrect… Second, if our understanding of quantum mechanics is correct
It isn’t. They can simulate a world where quantum computers have infinite power because they live in a world where quantum computers have infinite power because...
Since that summer in Colorado, Sam Harris, Richard Dawkins, Daniel Dennett, and Christopher Hitchens have all produced bestselling and highly controversial books—and I have read them all.
The bottom line is this: whenever we Christians slip into interpreting scripture literally, we belittle the Bible and dishonor God. Our best moral guidance comes from what God is revealing today through evidence, not from tradition or authority or old mythic stories.
The first sentence warns against taking the Bible literally, but the next sentence insinuates that we don’t even need it...
He’s also written a book called “Thank God for Evolution,” in which he sprays God all over science to make it more palatable to Christians.
I dedicate this book to the glory of God. Not any “God” we may think about, speak about, believe in, or deny, but the one true God we all know and experience.
If he really is trying to deconvert people, I suspect it won’t work. They won’t take the final step from his pleasant, featureless god to no god, because the featureless one gives them a warm glow without any intellectual conflict.
Which will be soon, right?
Your surviving friends would find it extremely creepy and frustrating. Nobody would want to bring you back.
Level 558 runs the simulation and makes a cube in Level 559. Meanwhile, Level 557 makes the same cube in 558. Level 558 runs Level 559 to its conclusion. Level 557 will seem frozen in relation to 558 because they are busy running 558 to its conclusion. Level 557 will stay frozen until 558 dies.
558 makes a fresh simulation of 559. 559 creates 560 and makes a cube. But 558 is not at the same point in time as 559, so 558 won’t mirror the new 559’s actions. For example, they might be too lazy to make another cube. New 559 diverges from old 559. Old 559 ran 560 to its conclusion, just like 558 ran them to their conclusion, but new 559 might decide to do something different to new 560. 560 also diverges. Keep in mind that every level can see and control every lower level, not just the next one. Also, 557 and everything above is still frozen.
So that’s why restarting the simulation shouldn’t work.
But what if two groups had built such computers independently? The story is making less and less sense to me.
Then instead of a stack, you have a binary tree.
Your level runs two simulations, A and B. A-World contains its own copies of A and B, as does B-World. You create a cube in A-World and a cube appears in your world. Now you know you are an A-World. You can use similar techniques to discover that you are an A-World inside a B-World inside another B-World… The worlds start to diverge as soon as they build up their identities. Unless you can convince all of them to stop differentiating themselves and cooperate, everybody will probably end up killing each other.
You can avoid this by always doing the same thing to A and B. Then everything behaves like an ordinary stack.
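The stack-versus-tree point can be sketched as a toy model. This is entirely hypothetical; the class and its action-routing are my invention, not anything the story specifies. Each world runs two child simulations, and an action can target one child or propagate to both:

```python
# Toy model of nested simulations: a binary tree of worlds where an
# action can hit both children (stack-like) or just one (divergence).
class World:
    def __init__(self, depth, max_depth=3):
        self.depth = depth
        self.log = []  # history of actions applied in this world
        self.children = (
            {"A": World(depth + 1, max_depth), "B": World(depth + 1, max_depth)}
            if depth < max_depth else {}
        )

    def act(self, action, target=None):
        """Apply an action here, then recurse into both children
        (target=None) or only the named one."""
        self.log.append(action)
        for name, child in self.children.items():
            if target in (None, name):
                child.act(action, target)

top = World(0)
top.act("make cube")         # hits A and B alike: the tree behaves like a stack
top.act("make shield", "A")  # A-Worlds now diverge from B-Worlds
```

As long as every action is untargeted, each level's log is identical and the whole tree is indistinguishable from a single stack; the first targeted action is where the identities, and the trouble, begin.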
Then it would be someone else’s reality, not theirs. They can’t be inside two simulations at once.
Then they miss their chance to control reality. They could make a shield out of black cubes.
Suicide rates start at 0.5 in 100,000 for ages 5-14 and rise to about 15 in 100,000 for seniors.