I’d like to hear it! Even if I am the butt of the joke.
Monkeymind
Thank you very much for your response and your offer.
I came here doing research on QM and decided to try out some ideas. I learn to swim best by jumping right in over my head. My style usually doesn’t win me many friends, but I recognize who they are pretty fast, and I learn what works and what doesn’t.
Someone once called me jello with a temper... but I’m more like a toothless old dog, more bark than bite. The tough exterior has helped me in many circumstances.
On the first day as a new kid in high school, I walked up to the biggest, baddest senior there, with all his sheep gathered around him in the parking lot, and slapped him upside his head as hard as I could. It barely had an effect! He could have crushed my little body with one hand, but instead he laughed so hard he nearly broke a rib. No one ever messed with me because he put the word out: hands off his little buddy. Of course, I also gained the reputation of one crazy SOB!
Being retired, I have a lot of time on my hands, and I am interested in learning as much as I can before I become worm food. Right now my interests are GR, QM, and AI, but I don’t understand what I know about them!
I have a request. I just returned from the V.A. Hospital, and my doctor says I need cataract surgery.
I am having a hard time making a decision on what to do. How would Bayes’ theorem or decision theory help me make a decision based upon the following information? If you would use this in your decision-making process, I am willing to use it in mine. I’m stumped, and the doctors have given bad advice many times over the years anyway.
There are inherent risks of infection, failure, and loss of eyesight. I could have my right eye done right away (it’s ripe), but it could possibly wait a year. However, by that time I will need cataract surgery in my left eye as well (a couple of weeks apart). I prefer not to have both eyes done at the same time.
An injury in ’06 caused a retinal detachment in my right eye. I may be having a retinal detachment in my left eye now (I am having flashing lights similar to those before my right eye detached). Last time, it took a couple of months after the flashing lights began before the occlusion started. An occlusion is like an eclipse of grey; if it makes it all the way across, you are blind. The doctor couldn’t see signs of detachment, but cautions me to get there right away if the occlusion begins. Once occlusion starts, surgery needs to happen within 24 to 72 hours, and success diminishes rapidly after 24 hours.
I am at high risk for retinal detachment because of severe myopia (near-sightedness). The right-eye surgery was a pneumatic retinopexy, so I have an increased risk of detachment or other problems with cataract surgery.
I am writing a novel and want to finish it before the surgeries because of potentially months of downtime, and in case of problems or permanent loss of eyesight in one or both eyes.
The doctor says that it is my choice to wait up to a year, but that I need to be watchful for signs of my left eye detaching. Also, I don’t want my right cataract to get too hard, which increases the risk of detachment and lowers the success rate of cataract surgery.
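Since the question is how decision theory could apply here, below is a minimal expected-utility sketch. Every probability and utility in it is an invented placeholder (the real numbers would have to come from the surgeon and the literature); it only shows the shape of the calculation, not a medical recommendation:

```python
# Toy expected-utility comparison for "operate now" vs. "wait a year".
# ALL probabilities and utilities below are invented placeholders for
# illustration -- real values would come from the surgeon and literature.

# Utilities on an arbitrary 0-100 scale (100 = full vision preserved).
U_SUCCESS = 100
U_COMPLICATION = 40   # impaired but usable vision after complications
U_BLINDNESS = 0

def expected_utility(p_success, p_complication, p_blindness):
    """Expected utility of one option given its outcome probabilities."""
    assert abs(p_success + p_complication + p_blindness - 1.0) < 1e-9
    return (p_success * U_SUCCESS
            + p_complication * U_COMPLICATION
            + p_blindness * U_BLINDNESS)

# Placeholder numbers: operating now, before the cataract hardens.
eu_now = expected_utility(0.90, 0.08, 0.02)

# Placeholder numbers: waiting a year -- the cataract hardens and the
# detachment risk grows (finishing the novel is not captured here).
eu_wait = expected_utility(0.80, 0.14, 0.06)

print(f"EU(operate now) = {eu_now:.1f}")
print(f"EU(wait a year) = {eu_wait:.1f}")
print("Prefer:", "operate now" if eu_now > eu_wait else "wait")
```

The arithmetic is trivial; the whole game is replacing the placeholder probabilities with honest estimates and adding utility terms for the things that matter to you, such as finishing the novel, downtime, and the left-eye detachment risk.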
Thanx!
BTW, I will go by whatever the house rules are. I am not here to be argumentative or disagreeable. I am here to learn. I do not argue for the sake of argument. I argue to become Less Wrong!
Originally I thought this was a physics forum. I came to this thread and got into the discussion without reading through the website. My bad! I have tapped out of the thread and will leave it alone. If you must censor me, can you please delete all my posts, to be fair? It is hard enough to get people not to take things out of context as it is.
How can you be 100% confident that a lookup table has zero consciousness when you don’t even know for sure what consciousness is?
Why not just define consciousness in a rational, unambiguous, non-contradictory way and then use it consistently throughout? If we are talking thought experiments here, it is up to us to make the assumptions in our hypothesis. I don’t recall EY giving HIS definition of consciousness for his thought experiment.
However, if the GLUT behaves exactly like a human, and humans are conscious, then by definition the GLUT is conscious, whatever that means.
Good enough. Since I am demonstrating intellectual honesty and openness by reading everything you ask, can we agree to this? If I say I understand it but that I disagree, will you then try to make the case for this half-silvered mirror experiment, or are we done?
Thank you for your patience. It is really appreciated.
If this is where we are at, then please advise me so I can take appropriate steps to avoid getting banned. I don’t think I received an explicit statement that banning mode has been triggered, but I want to be sure, since there is talk of the ban hammer coming down.
I am not trying to be contrary, I just am, so it comes out that way. I truly think that what I have to offer has merit. Of course, if the community does not think so then it is within their power to down vote me into non-existence. That is fair, as I have no right to force my ideas on another person. It just seemed that this would be a place to share ideas. Perhaps not.
Are you asking me not to post anywhere in the community or just this thread?
Thanx for your comment!
Although it is long (it broadly covers many posts in a sequence on how to change your mind), I disagree that it is off topic.
However, let me remind you that it is others who have kept it “off topic,” not I. Others requested that I read other posts so that I might see more clearly where the author was coming from and the main thrust of Less Wrong.
Yet it is not entirely off topic; it is just that we are dealing with a very broad subject, Quantum Mechanics, and attempting to approach it from a rationalist POV. The specific topic is Amplitudes and Configurations. The foundational principles of the experiment are flawed. I have attempted to point this out. We can return to the actual experiment once we have agreed on the basic assumptions.
If the evolutionary process results in either convergence, divergence or extinction, and most often results in extinction, what reason(s) do I have to think that this 23rd emerging complex homo will not go the way of extinction also? Are we throwing all our hope towards super intelligence as our salvation?
Humans have a values hierarchy. Trouble is, most do not even know what it is (or what their values are). IOW, for me honesty is one of the most important values to have. Also, sanctity of (and protection of) life is very high on the list. I would lie in a second to save my son’s life. Some choices like that are no-brainers; however, few people know all the values that they live by, let alone the hierarchy. Often humans only discover what these values are as they find themselves in various situations.
Just wondering… has anyone compiled a list of these values, morals, ethics… and applied them to various real-life situations to study the possible ‘choices’ an AI has and the potential outcomes with differing hierarchies?
ADDED: Sometimes humans know the right thing to do but choose to do something else. Isn’t that because of emotion? If so, what part does emotion play in superintelligence?
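As a toy sketch of the question above (every value name, action, and score here is invented purely for illustration), a value hierarchy can be modeled as a priority-ordered list, with choices compared lexicographically so that higher-ranked values dominate lower ones:

```python
# Toy model: a value hierarchy as a priority-ordered list of values.
# An action is scored lexicographically: the top-priority value is
# compared first, and lower values only break ties.
# All values, actions, and scores are invented for illustration.

def choose(hierarchy, actions):
    """Pick the action that best satisfies values in priority order.
    `actions` maps action name -> {value: score in [0, 1]}."""
    def key(action):
        # Build a tuple ordered by the hierarchy; Python compares
        # tuples lexicographically, so priority order is respected.
        return tuple(actions[action].get(v, 0.0) for v in hierarchy)
    return max(actions, key=key)

# The "lie to save my son" dilemma from the text:
actions = {
    "tell the truth": {"honesty": 1.0, "protect life": 0.0},
    "lie":            {"honesty": 0.0, "protect life": 1.0},
}

# Two agents with the same values but different hierarchies:
print(choose(["protect life", "honesty"], actions))  # -> lie
print(choose(["honesty", "protect life"], actions))  # -> tell the truth
```

Reordering the hierarchy flips the decision, which is exactly the kind of hierarchy-versus-outcome study the comment is asking about.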
If you don’t know for sure what consciousness is, you define it as best you can and proceed forward to see if your hypothesis is rational and the theory is possible. If you define consciousness as “made of cells,” then everyone knows right away that a GLUT (which is not made of cells) is not conscious by YOUR definition, and they can tell you that you are being irrational and should go back to the drawing board!
OK, thanx! But can you answer this: are amplitude configurations the boundaries in my analogy?
Individual configurations don’t have physics. Amplitude distributions have physics.
So the parts have no physical presence, but the whole does?
People are a lot more complicated than neurons, and it’s not just people that are connected to the internet—there are many devices acting autonomously with varying levels of sophistication, and both the number of people and the number of internet connected devices are increasing.
FYI… a recent study by Cisco (I think) says something like:
The internet is currently around 5 million terabytes, with 75 million servers worldwide. On average, one billion people use the internet per week. Internet use consumes enough information per hour to fill 7 million DVDs, and it is growing, so an internet AI would need to be capable of handling 966 exabytes of information by 2015.
An exabyte is 1,000,000,000,000,000,000 (10^18) bytes. Every word ever spoken by human beings could be stored in 5 exabytes.
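As a quick sanity check on the scale of these figures (assuming decimal SI units, which is my assumption, not the study's): 5 million terabytes works out to exactly 5 exabytes, the same figure quoted for every word ever spoken:

```python
# Sanity-checking the scale arithmetic in the figures above,
# using decimal SI units: 1 TB = 10**12 bytes, 1 EB = 10**18 bytes.

TB = 10**12
EB = 10**18

internet_bytes = 5_000_000 * TB          # "around 5 million terabytes"
print(internet_bytes / EB, "exabytes")   # -> 5.0 exabytes

# So the ~5-million-terabyte internet is about the same size as the
# 5 exabytes said to hold every word ever spoken by human beings.
assert internet_bytes == 5 * EB
```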
Counting smart phones, robotic arms, cameras, GPS systems, clocks, home security systems, personal computers, satellites, cars, parking meters, ATMs, and everything else, there are more things connected to the internet than there are human beings on the planet. In a few years there will be over 50 billion devices, with enough possible internet connections for 100 connections per atom comprising the surface of the earth.
I plus thee for humor! That’s what I thought. Now how many of these make up a one-inch line?
If intelligence is the ability to understand concepts, and a super-intelligent AI has a super ability to understand concepts, what would prevent it (as a tool) from answering questions in a way so as to influence the user and affect outcomes as though it were an agent?