M. Y. Zuo (Michael Y. Zuo)
Some reflections on the LW community after several months of active engagement
Google announces ‘Bard’ powered by LaMDA
Vehicle Platooning—a real world examination of the difficulties in coordination
Letter from leading Soviet Academicians to party and government leaders of the Soviet Union regarding signs of decline and structural problems of the economic-political system (1970)
[Question] When building an organization, there are lots of ways to prevent financial corruption of personnel. But what are the ways to prevent corruption via social status, political power, etc.?
Thanks for taking the time to write out these reflections.
I’m curious about your estimates for self-driving cars over the next 5 years. Would you take the same bet at 50:50 odds for a July 2028 date?
Can you lay out step by step, and argument by argument, why that should be the case in a real world legal system like the US?
It seems very far from currently accepted jurisprudence and legal philosophy.
Unless the post was edited afterwards, I think the last link:
DeepMind reportedly lost a yearslong bid to win more independence from Google
“One suggestion from DeepMind’s founders was apparently for the company to have the same legal structure as a nonprofit, ‘reasoning that the powerful artificial intelligence they were researching shouldn’t be controlled by a single corporate entity, according to people familiar with those plans.’ But Google wasn’t on board with this, telling DeepMind it didn’t make sense considering how much money the company has poured into DeepMind.”
is suggesting exactly that.
I hesitate to be the first to respond here, but there is a point so strange that someone else must have noticed it as well, so I hope it can be clarified. Namely, since the main problem your work is tackling is:
“how can we regulate a very complex and very smart system with unpredictable emergent properties using a very simple and dumb system whose properties once created are inflexible”
there must then be some hard constraint for AI work as well that involves some ‘simple and dumb system whose properties once created are inflexible’. But that does not seem to be inevitable.
Utility and objective functions don’t have to follow that kind of description; it is only assumed in certain projections.
If in the future some world compact decided, for example, that some hard-coded objective function must be continuously executed and also be very difficult to change, then it seems plausible, but that is by no means preordained, since there is no clear consensus that such a system can even be maintained perpetually.
Bingo, the root problem is pretending to have any quasi-judicial structure/authority at all.
People of roughly equal status issuing ‘judgements’ or ‘decisions’ on each other really doesn’t make sense for that reason; at best you can do so within a private club and its property lines.
A federation of private clubs may decide to do so, very rarely and only for the most serious cases, because, as mentioned in the OP, there’s always the risk of some clubs siding with the accused and then deciding to leave, splitting the federation.
Is there a compiled list of what the LTFF has accomplished and how that compares to past goals and promises, if any, made to previous donors?
I know some potential donors who would be more readily convinced if they could see such a comparison and reach out to past donors.
Yeah, this seems like a bit of a self-defeating exercise.
Who made the decision to go ahead with this method of collecting signatories?
They didn’t even have a verification hold on submitted names...
It does seem like a straightforward conclusion that Eliezer didn’t really understand what he was writing about then.
If so, publishing a revised version with the necessary changes seems like the sensible choice. Especially since the example is frequently referenced throughout.
One other thing I could never get them to do was to ask questions. Finally, a student explained it to me: “If I ask you a question during the lecture, afterwards everybody will be telling me, ‘What are you wasting our time for in the class? We’re trying to learn something. And you’re stopping him by asking a question.’” It was a kind of one-upmanship, where nobody knows what’s going on, and they’d put the other one down as if they did know. They all fake that they know, and if one student admits for a moment that something is confusing by asking a question, the others take a high-handed attitude, acting as if it’s not confusing at all, telling him that he’s wasting their time.
I explained how useful it was to work together, to discuss the questions, to talk it over, but they wouldn’t do that either, because they would be losing face if they had to ask someone else. It was pitiful! All the work they did, intelligent people, but they got themselves into this funny state of mind, this strange kind of self-propagating “education” which is meaningless, utterly meaningless! (Feynman)
An all too common folly.
Here are a few, unordered:
As We May Think, by Vannevar Bush
Politics and the English Language, by George Orwell
The Tyranny of Structurelessness, by Jo Freeman
Some Moral and Technical Consequences of Automation, by Norbert Wiener
Can We Survive Technology?, by John von Neumann (though I may be a bit biased here, as I’ve had personal interaction with one of his family members)
Analysis of GPT-4 competence in assessing complex legal language: Example of Bill C-11 of the Canadian Parliament - Part 1
Public assessments of existing generative AI systems. The Administration is announcing an independent commitment from leading AI developers, including Anthropic, Google, Hugging Face, Microsoft, NVIDIA, OpenAI, and Stability AI, to participate in a public evaluation of AI systems, consistent with responsible disclosure principles—on an evaluation platform developed by Scale AI—at the AI Village at DEFCON 31. This will allow these models to be evaluated thoroughly by thousands of community partners and AI experts to explore how the models align with the principles and practices outlined in the Biden-Harris Administration’s Blueprint for an AI Bill of Rights and AI Risk Management Framework.
I don’t know anything about the ‘evaluation platform developed by Scale AI—at the AI Village at DEFCON 31’.
Does anyone know if this is a credible method?
I’d be willing to take a bet that the U.S. will not respond with nuclear retaliation against Russia, regardless of what Russia or any of its governmental actors do, for a 1-year period, if you believe there’s any chance.
Does it matter for Villiam’s point whether 10x more Palestinian civilians are killed than Israeli, or 20x, or 30x?
For example, even if inflated reports make it perceived to be 30x and the real figure is 10x, that’s still 10x more civilians in body bags.
If we expect a Pareto distribution to apply, then the folks who will really move the needle 10x or more will likely need to be significantly smarter and more competent than the current leadership of MIRI. There is likely a fear factor, found in all organizations, of managers being afraid to hire subordinates who are noticeably better than them, since such subordinates could potentially replace the incumbents; see moral mazes.
This type of mediocrity scenario is usually only avoided if turnover is mandated by some external entity, or if management owns some stake, such as shares, that increases in value from a more competent overall organization.
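As a rough illustration of the Pareto point above (a hypothetical numerical sketch, not data about MIRI or any real organization; the shape parameter is an assumption chosen to approximate the classic 80/20 split):

```python
import random

random.seed(0)

# Hypothetical "impact" scores for 1000 contributors, drawn from a
# Pareto distribution. Shape alpha ~= 1.16 yields roughly the classic
# 80/20 concentration: a small top fraction dominates total impact.
alpha = 1.16
impacts = sorted((random.paretovariate(alpha) for _ in range(1000)), reverse=True)

total = sum(impacts)
top_20pct_share = sum(impacts[: len(impacts) // 5]) / total
print(f"Top 20% of contributors account for {top_20pct_share:.0%} of total impact")
```

Under such a distribution, replacing a median contributor changes little, while losing (or failing to hire) one of the tail outliers changes a lot, which is what makes the hiring dynamic described above costly.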
Or, of course, if the incumbent management are already the best at what they do. This doesn’t seem likely, as Eliezer himself mentioned encountering ‘sparkly vampires’ and so on who were noticeably more competent.
The other factor is that we are now looking at a group that at the very least could probably walk into a big tech company or a hedge fund like RenTech, D.E. Shaw, etc., and snag a multi-million-dollar compensation package without breaking a sweat, or who are currently doing so, or are likewise on the tenure track at a top-tier school.