I agree. I was just trying to motivate my rant.
is4junk
When Roomba came out I expected vast progress by now. Some company would actually make one that works all the time for the whole house. Now I am not second-guessing the iRobot corporation; maybe they could do it, but the market is happy now. How hard is it with today's know-how to make one that
doesn’t get stuck on rugs, cords, clothes, or under things ever
can remember where it needs to clean and you don’t have to use virtual walls
can remember how to get back to its docking station before its battery runs out every single time
make a docking station where it can drop off its dirt so I don't have to check it more than once a month
It's stuff like this that makes me wonder how much progress we are actually making. Is it a solved problem with no market (at the price point), or is it a problem in robotics?
The actual function of Karma as you describe it doesn't bother me. I'll continue voting as usual. The anti-kibitzing option just hides the votes so I don't see them. For me, I hope "out of sight, out of mind" actually works for this problem.
I used to think this Karma Score stuff would be helpful to filter low-quality posts. But I see many people get downvoted for tribal reasons, and I also see many upvotes on posts that I have trouble deciphering (sockpuppets?). So usually, when I see a post downvoted to oblivion, I end up clicking on it anyway, which defeats the whole purpose of using the Karma Score to help me filter out bad posts. I also waste a bunch of cycles wondering about the votes (who are these people?).
TL;DR I have decided to try using Firefox to view LessWrong with the anti-kibitzing option turned on (see preferences).
Thanks, this is helpful
Does anyone know if any companies are applying NLP to software? Specifically, to the software ASTs (abstract syntax trees)?
I have been playing around with unfolding autoencoders and feeding them Python code, but if there are researchers or companies doing similar work I'd be interested in hearing about it.
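As context for the kind of preprocessing this involves, here is a minimal sketch of turning Python source into a linear sequence of AST node types using the standard `ast` module. The flattening scheme (one token per node type, breadth-first) is just an illustration of one way to feed tree-structured code to a sequence model, not a claim about how unfolding autoencoders are usually trained.

```python
import ast

def ast_node_types(source: str) -> list:
    """Parse a Python snippet and return its AST node-type names
    in breadth-first order (ast.walk's traversal order) -- one
    simple way to linearize a syntax tree for a sequence model."""
    tree = ast.parse(source)
    return [type(node).__name__ for node in ast.walk(tree)]

tokens = ast_node_types("def add(a, b):\n    return a + b")
print(tokens)  # starts with 'Module', 'FunctionDef', ...
```

A tree-aware model (like an unfolding autoencoder) would instead consume the tree structure directly via `ast.iter_child_nodes`, but even this flat token stream is enough to experiment with off-the-shelf sequence models.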
Robotics will get scary very soon. Quoted from the link:
The conference was open to civilians, but explicitly closed to the press. One attendee described it as an eye-opener. The officials played videos of low-cost drones firing semi-automatic weapons, revealed that Syrian rebels are importing consumer-grade drones to launch attacks, and flashed photos from an exercise that pitted $5,000 worth of drones against a convoy of armored vehicles. (The drones won.) But the most striking visual aid was on an exhibit table outside the auditorium, where a buffet of low-cost drones had been converted into simulated flying bombs. One quadcopter, strapped to 3 pounds of inert explosive, was a DJI Phantom 2, a newer version of the very drone that would land at the White House the next week.
My view is that the company knows what the job is worth, and the applicant does not...
Is this a problem nowadays with sites like Glassdoor? Or maybe some industries are not well represented.
When interviewing, if you can get multiple job offers then you can play them off each other (in some industries). I don't have any experience with government work, though.
I mean it in the non-flattering sense: rent-seeking.
I envision all sorts of arbitrary legal limits imposed on AIs. These limits will need people to dream them up, evangelize the need for even more limits, and enforce them (likely involving the creation of other 'enforcer' AIs). Some of the limits (early on) will be good ideas, but as time goes on they will become more arbitrary and exploitable. If you want examples, just think of what laws they will try in order to stop unfriendly AI and to stop individuals from using AI to do evil (say, with an advanced makerbot).
Once you have a role in the regulatory field, converting it to fun and profit is a straightforward exercise in politics. How many people are in this role is determined by how successful it is at limiting AIs.
Why not try to exploit the singularity for fun and profit? It's like having an opportunity to buy Apple stock dirt cheap.
Investment: own data-center stocks initially. I am not sure what you would transition to once the AI learns to make CPUs.
Regulatory: make the singularity pay you rent by being a gatekeeper. This will be a large industry worldwide. Probably the best bet.
At the very least you should be able to rule out bad investments (time or money).
Energy
Land
Jobs that will be automated
I would think most people change their minds on these topics but would simply lie about 1 and 2. There are several threads about religious people who turned atheist using this strategy.
I think the grand difficulty is that a change would require a large personal commitment if they wanted to be self-consistent. The difficulty is laziness: "I'd have to rethink everything," or even worse, "I'd be evil to think that."
Are you worried about his ethics, or is he making a mistake in logic?
The columnist says “This opinion is not immoral. Such choices are inevitable. They are made all the time.” Is that the part you disagree with?
It would depend on how bad travel was without cars yesterday. Historically, it was horses, which must have been really bad. I think if they had known back then about speeds, traffic, and conditions, they still would have done it. Parts of China and India have proved it quite recently (in the last 50 years).
Now if we had most people in high density housing, good transport (both public and private), and online ordering/delivery then maybe cars would be very restricted.
I mean unfriendly in the ordinary sense of the word. Maybe "uninviting" would be as good.
Perhaps a careful reading of that disclaimer would be friendly or neutral; I don't know. My quick reading of it was: by interacting with AI Impacts you could be waiving some sort of right. To be honest, I don't know what CC0 is.
I have nothing further to add to this.
The hassles of flying these days have made buses more popular. For a Seattle-to-Portland trip I would consider it if we didn't have a train.
If you take a look and have thoughts, we would love to hear them, either in the comments here or in our feedback form.
My comment is intended as helpful feedback. If it is not helpful I’d be happy to delete it.
I am not sure. A quick search on LessWrong only led me to Meet Up: Pittsburgh: Rationalization Game.
What I am proposing would be more of an exercise in argument structure. Either the ‘facts’ are irrelevant to the given argument or there are more ‘facts’ needed to support the conclusion.
In college, I had a professor ask us to pick any subject, make up any 'facts', and try to make a compelling argument. He then had us evaluate other people's essays. Let's just say I wasn't impressed with some of my fellow classmates' arguments.
Sometimes you see this in the courtroom as a failure to state a claim.
Would it be interesting to have an open thread where we try this out?
[pollid:814]
Looking at the very bottom of the AI Impacts home page, the disclaimer looks rather unfriendly.
I'd suggest petitioning to change it to the LessWrong variety.
Here is the text: To the extent possible under law, the person who associated CC0 with AI Impacts has waived all copyright and related or neighboring rights to AI Impacts. This work is published from: United States.
From a quality-of-life point of view, I would think that joint replacement (knee, hip, elbow) would be a huge improvement for many people. Outside of organ growing, is there any progress on growing joints?