The linked tool looks interesting; thanks for sharing!
I have not done more than skim the list of configuration options, so I don’t have any good feedback for you (though I can’t guarantee I could offer good feedback even after a complete review and testing ;-) ). A couple of the options do seem to touch on my question here, I think: the ones related to medicine and biotech. I take your approach to be that successful efforts in those areas change the future state of a realized AGI. My question is probably best viewed as sitting at the intersection of ongoing AI/ML work and work in those areas.
I was also trying to provide an example, but decided I should not give an off-the-cuff one; I want to write something out, then reread and probably rewrite it. That’s probably setting expectations way too high, but I did want to make sure I can clearly describe a scenario rather than just dump a stream-of-consciousness blob on you. Still, I did want to thank you for the link and the response.
Thanks. I was somewhat expecting the observation that humans have had the ability to pretty much end things for some time now, yet have not done so. I agree. I also agree that, in general, we have put preventative measures in place to ensure that those who might be, or are, willing to end the world don’t have the access or absolute ability to do so.
I think that intent might not be the only source: error and unintended consequences from using AI tools seem like part of the human risk profile as well. However, that seems so obvious that I assume you have it baked into your assessment and just don’t mention it, to keep the answer simple. I’m not sure how much that shifts the balance, though.
I did just have the realization that human-based risk and AI risk are best thought of differently than I initially framed the question in my own mind. AI risk is much more like the risk to some other species due to human actions than like the risk to humans due to human actions. That shift in view argues for the same assessment you offer.
I’m not sure what I think the relationship looks like between AI-enabled capabilities and the probability of some human-driven event, intentional or unintended. I suspect that the probability increases with AI functionality. But I also think that points to two types of response. One is slowing, or otherwise proceeding more cautiously with, AI research, which dovetails well with existing AI-risk efforts. The other is employing and extending existing social tools and institutions related to risk management, which would help reduce the risk while allowing research to proceed as is.
For instance, one reason I think we’ve not seen nuclear doomsday is that no one person actually has the ability to launch an all-out attack (that might not be true now with North Korea, but I know nothing about its nuclear protocols). Both structural checks and the underlying personal checks are present. Are there AI risk mitigation parallels? (I assume so, given I’ve seen some comments about AI mergers that seem to suggest merging gets an AI around constraints protecting humans, but I don’t really know if that is a fair or useful characterization of such efforts.)
Thanks for noting the terminology, useful to have in mind.
I have a follow-on comment and question in my response to Daniel to which I would be interested in your response/reaction.
I’ve not read much in this area and have not even tried to follow up on the references provided in the OP. I’m open to being told I should have, as some may directly address my comment.
I think the big question to ask here, if one is advocating expanding the “socialist firm” (quotes since that can refer to a number of somewhat different structures), is under what conditions those forms are superior to the normal corporate structure. In other words, given that all these forms exist in the current economic landscape, the claim seems to be that we’re out of equilibrium and that marginal adjustments would produce a better equilibrium position.
Worked for me a few minutes ago (~ 10:45 EST USA).
I’m wondering if one should not think of this as Bernie Madoff 2.0.
Thanks, and I clearly missed the target of your post. I sidetracked into the issue of how to choose one’s own preferred alternative when external constraints might be present, which amounts to choosing a lower-valued immediate return rather than a longer-term value.
I am a fan of teaching people to fish, but also of knowing when that can actually be done. The latter is clearly very important.
Maybe the question is not the best framing.
Maybe first ask yourself just what we might mean by independence. It seems to me that in the post you’re subtly shifting towards freedom from external constraints, which I don’t think is a fundamental aspect of independence.
Perhaps itemize your understanding of what criteria independence entails, and then view that through the lens of degrees of freedom as the number of relationships (external constraints of a type) increases. Developing the skills to navigate that problem space is one of the things I see children needing to learn as part of becoming independent.
In terms of social coordination, do you see rudeness and manners/courtesy as mirror/inverse tools? Is there some asymmetry between the two as social coordination mechanisms? Or are these really just two sides of the same coin, such that we can discuss coordination efficacy from either perspective?
If I follow that, another way of thinking about rudeness might be spending social/political capital?
I am tempted to downvote your response but have held off because I’m not able to get good confirmation or answers to the questions I have. That said, my concerns with the response are:
Just because private labs don’t have to report to the NIH doesn’t mean that none, or even few, of them do.
A quick search seems to suggest multiple federal agencies are involved with lab safety at various levels.
It’s not clear whether your complaint is really about a particular database (NIH’s) or about overall reporting of lab accidents. Or, put differently, whether it is about some consolidation of reporting databases.
Your hypothetical starts with the assumption that no reporting of accidents by private labs exists. It is not clear that is true.
Your reference to the Gates Foundation seems like it may be arguing from a special case and then attempting to generalize inappropriately.
Additionally, it appears the lead-off incident in The Intercept’s story is not actually a good example:
The needle pierced through both sets of gloves, but the student saw no blood, so she washed her hands, removed her safety equipment, and left the lab without telling anyone what had happened. Four days later, she ran a fever, and her body ached and convulsed in chills.
That is not a problem with reporting requirements (regardless of which authority they report to) but with a failure to follow them. More regulation does not solve that problem.
Note, none of this is to say improvements are not possible, or perhaps even needed. But starting from an incomplete map seems like a good way to run the ship aground.
How much of AI alignment and safety has been informed at all by economics?
Part of the background to my question relates to the paperclip maximizer story. I could be misunderstanding the problem it suggests (I have not read the original), but to me it largely screams economic system failure.
Wondering if another take might be that, for most people, life under Ukrainian rule and life under Russian rule are largely the same. That would certainly make it easy to take what one hears on TV (whichever source you’re being fed) as truth, and so adopt the views and values of whatever represents the “good guy” side.
While this doesn’t change the basic conclusion from the data, does it provide any information related to:
adults also injured in the accidents involving children 5 and under?
the age of the driver of the vehicle?
Have you tried keeping track (or a log/journal) of the things you felt were very poorly expressed, and then worked on coming up with 5 (or just some random X number of) ways to say each better? Then consider what context each of the alternative ways would best be suited for.
As others have said, just pausing for a moment to form the complete thought in your head before attempting to express it verbally will probably do a lot. It is not a skill/habit I think many are taught to develop. I know I was not, and I have the habit of talking more as a stream of consciousness than thinking, then speaking. So if you find things that work for you, I would be glad to see a follow-up post on your efforts and successes. Or perhaps a refutation of my thinking here.
I suspect one source of the challenge is that we anticipate what others are going to say. It is probably true that about 99% of the time we can correctly complete any simple statement someone is saying to us. However, the overall message is more than just a single statement. (This framing is perhaps not quite right, but maybe:) how complex does a communication need to be, in terms of the number of statements it requires, before the expected level of “hearing” drops into miscommunication? (Perhaps a case of missing the forest for the trees? A rough calculation is sketched at the end of this comment.)
Focus is perhaps another: we anticipate, and then, before the speaker has completed their thought, we’ve already started the response in our head, thereby not actually hearing the full claim and misunderstanding it.
I also think culture and communication style may come into play. Some build up slowly, so those who want the bottom line first will be frustrated and, perhaps, confused, because they are being asked to do a lot more work to hear than they are used to. I think this might be symmetrical, so giving the bottom line first and then putting the argument together may be hard to follow for those who like the slow, build-to-the-end approach of communicating.
Last might be recognizing the relevance. For both the person worried about offending and the terrible two-year-old, something that is being said isn’t adding up to the claim, be it “I’m offering you something you want (pizza)” or “I’m not offended.” In the offense case it may be that the speaker (and those he interacts with) would never actually acknowledge offense, and in fact might deny it even while setting out to say things that they view as having to flow from such a sentiment (not sure what was in the blah blah blah exchange).
I am pretty sure most of the above are failings I have had in communication at times, and in some cases (like anticipating and not having the patience to actually listen until the end) they might be called chronic bad habits that I keep working to improve.
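And the promised sketch. Both numbers here are my own loose assumptions (the 99% figure, and treating each statement as an independent trial, which is a strong simplification), but they show how quickly per-statement anticipation compounds:

```python
import math

P_SINGLE = 0.99  # assumed chance of correctly anticipating one simple statement

def p_heard_in_full(n_statements, p=P_SINGLE):
    """Probability an n-statement message is 'heard' correctly end to end,
    treating each statement as an independent trial (a strong simplification)."""
    return p ** n_statements

print(p_heard_in_full(10))                 # ~0.904: a 10-statement message
print(math.log(0.5) / math.log(P_SINGLE))  # ~69: where full "hearing" is a coin flip
```

So even under generous assumptions, full comprehension of a longish message becomes a coin flip at around 69 statements, which fits my sense that misunderstanding grows with message complexity.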
Is sexiness like beauty? In the eye of the beholder?
While I agree that most of what you’re suggesting will increase your attractiveness to some women, the bigger question you might want to explore is whether those are the women you are interested in.
I also have a sense that some of your view of what women find attractive comes from a male perspective and, as such, is a bit suspect. That said, I think you can follow your plan, but be sure that what you’re molding yourself into is actually who you are. Women are not going to find something that comes across as a front or facade sexy (well, some might, but do you want to date them?). You’ll get farther by being yourself, being comfortable with yourself, and being genuine about it.
Shifting a bit: there is lots of diet theory out there, and everyone’s metabolism is a bit different. But I found that when I shifted to a more “eastern” diet, with much more rice than I used to eat, I actually started losing weight. My thinking is that steamed rice (or at least rice-cooker rice) is actually very low calorie for its bulk, so I felt like I was eating a lot while not taking in as many calories as before. I also found that just using one of the health apps that help track caloric intake was helpful for my own understanding of just how much I was consuming on any given day.
With regard to exercise, I would suggest adding a low-level cardio portion to your routine, as that (from what I’ve heard/read) is better for actually burning fat than a true cardio workout. The heart rate target is age-dependent, but I would guess the low 130s is probably good for you; see the sketch below. You’ll probably only need about 15-20 minutes, 5 days a week.
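For what it’s worth, here is the rough arithmetic behind that guess, using the common 220-minus-age rule of thumb for max heart rate and the often-cited 60-70% band for the fat-burning zone (both are heuristics I’m assuming, not medical guidance):

```python
def fat_burn_zone(age):
    """Rough fat-burning heart-rate band (bpm), using the common
    220-minus-age estimate of max heart rate and the often-cited
    60-70% target band. Rules of thumb, not medical guidance."""
    max_hr = 220 - age
    return 0.60 * max_hr, 0.70 * max_hr

# e.g. a 25-year-old gets roughly (117.0, 136.5) bpm, consistent with "low 130s"
print(fat_burn_zone(25))
```

Plug in your own age; if the top of the band lands well away from the low 130s, trust the formula over my guess.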