Of the three examples, only “rationality” came to mind : (
I’m ESL, so that sorta ruins certain idiom-based examples for me.
I find that merge sort starts becoming useful for collections as small as a deck of cards. But bucket sort is probably better anyway.
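(For concreteness, here’s a minimal Python sketch of the bucket-sort-a-deck idea; the (suit, rank) representation and the suit ordering are just my assumptions, not a serious benchmark.)

```python
import random

# Assumed card representation: (suit, rank) tuples; suits listed in the target sort order.
SUITS = ["clubs", "diamonds", "hearts", "spades"]
RANKS = range(2, 15)  # 2..10, J=11, Q=12, K=13, A=14

def bucket_sort_deck(cards):
    # Pre-create one bucket per possible card, already in sorted key order.
    buckets = {(s, r): [] for s in SUITS for r in RANKS}
    for card in cards:
        buckets[card].append(card)        # scatter: one linear pass, no comparisons
    # Gather: dicts preserve insertion order, so concatenating buckets yields sorted output.
    return [c for bucket in buckets.values() for c in bucket]

deck = [(s, r) for s in SUITS for r in RANKS]
random.shuffle(deck)
assert bucket_sort_deck(deck) == sorted(deck)
```

With only 52 possible keys, sorting is just two linear passes (scatter, then gather), which is why it wins over merge sort at deck scale.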
I love your analysis. What do you think about this summary: the solution to this optimization problem is to be the kind of agent that chooses only one box.
I love how <3 has the dual interpretation of a heart and a fart.
Is there a particular reason for people to prefer eating animals that eat low-quality food? Health, nutrition, etc.?
It basically said “thanks I hate it” in response to that joke
One thing notably missing from this post is an analysis of severity and how different countermeasures can push the virus to evolve in different directions. E.g. if we stop people with fevers from travelling, that gives a no-fever variant a big advantage over a fever-causing variant.
I went ahead and took a look! I’m actually very new to the community and wasn’t aware of this at all.
I have some thoughts on this and I would love it if you would hop on a zoom call with me and help brainstorm a bit. You can find me at cedar.ren@gmail.com
Others are welcome too! I’m just a little lonely and a little lost, and would love to chat with people from lesswrong about these ideas
About footnote #5 on arranged vs love marriages...
While the distribution of compatibility when searching for a partner is long-tailed, if the distribution of marginal gains from investing in an existing relationship is also long-tailed, then the comparative advantage sorta cancels out and you’re not much better off searching harder than developing the relationships you already have.
This sorta fits my experience too: being vulnerable and putting in the work has occasionally given me large improvements in relationship satisfaction that I never would have expected from the median improvement I get per unit of work.
I recently found InfoTopo https://sites.google.com/view/pierrebaudot/menu/infotopo-software
And it was so weird and exciting that I immediately got scared of losing it and promptly archived it with the Wayback Machine. https://web.archive.org/web/20220328135016/https://sites.google.com/view/pierrebaudot/menu/infotopo-software
But what this package does is let you do cool and weird things visualizing the informational relationships between different variables, discrete or continuous.
I’m a bit overwhelmed in my life right now, but I would love it if somebody could take a look at it and comment, and maybe draw some connections to Judea Pearl’s work. I feel like I’m on the threshold of something great and awesome, but I’m kinda stuck trying to understand this and not knowing where to start.
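(I haven’t learned InfoTopo’s own API yet, so this is not it, but here’s a tiny numpy sketch of the most basic kind of “informational relationship” I have in mind: a plug-in estimate of pairwise mutual information between two sampled variables. As I understand it, the package generalizes this to higher-order, multivariate terms.)

```python
import numpy as np

def mutual_information(x, y, bins=8):
    """Plug-in estimate of I(X;Y) in bits from two 1-D samples, via histogram binning."""
    joint, _, _ = np.histogram2d(x, y, bins=bins)
    pxy = joint / joint.sum()
    px = pxy.sum(axis=1, keepdims=True)   # marginal of X, shape (bins, 1)
    py = pxy.sum(axis=0, keepdims=True)   # marginal of Y, shape (1, bins)
    nz = pxy > 0                          # skip empty cells to avoid log(0)
    return float(np.sum(pxy[nz] * np.log2(pxy[nz] / (px @ py)[nz])))

rng = np.random.default_rng(0)
x = rng.normal(size=5000)
y = x + rng.normal(scale=0.5, size=5000)  # strongly dependent on x
z = rng.normal(size=5000)                 # independent of x
print(mutual_information(x, y))  # clearly positive
print(mutual_information(x, z))  # near zero (small positive bias from binning)
```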
I think we’re talking about how the rules are generated and how they are interpreted and followed here. Basically, where the bugs come from.
I misread “organisms” as “organizations”.
And I feel like it actually does still apply somewhat, in the sense that the ideas passed down from team to team are the actual “DNA”, whereas the behaviors of the organization are determined by those ideas but don’t feed directly back into them.
From what it looks like so far, recommender systems are amazingly good at figuring out what we want in the short term and giving it to us. But that is often misaligned with what we want in the longer term. E.g. I have a YouTube Shorts addiction that’s ruining my productivity (yay!). So my answer for now is NOPE, unless we do something special.
I’m assuming that when you say “human values” you mean what we want for ourselves in the long term. But I would love it if you could elaborate on what exactly you meant by that.
Ty John for writing this up. This post and the comments really help me find my own place and direction in terms of doing what I want to do.
I’m currently in academia and VERY unhappy about the bullshit I have to ingest and create. But I’m still waiting on my social, political, and financial safety nets before I can do anything remotely brave, kinda like tryactions mentioned in his comment.
So the most I’ll do is probably just read and write and talk to people on the side.
Speaking of talking to people...
My current research involves (manually) using CPU architectural artifacts to break sandboxing and steal data. I’ve been wondering whether I could do something along the lines of “make a simple AI that tries to break out of sandboxes, then make an unbreakable sandbox to contain it”.
Do shoot me a message if you have any thoughts, or are just curious. I would love to chat.
I feel like there’ll be a better way to say this sentence once we figure out the answer to your first question.
It most definitely seems to make sense to say that systems can have goals, in an “if it looks like a duck, it makes sense to call it a duck” kind of way. But at the same time, not every single piece has been tweaked toward the same end goal as the system. Each piece is tweaked toward its own survival, and that’s only somewhat aligned with the system’s survival.
Something I wish there were more LessWrong posts about (or at least that I’d seen more LessWrong posts about) is alignment explored in the context of:
Organisms and their smaller replicator components (organisms < cells, cells < transposons & endoviruses & organelles)
Social thingies and their smaller sorta-replicator components (religions < religious ideas, companies < replicating management ideas)
If you have your favorite post that falls into the above genre or mentions something to that effect, please absolutely link me to it! I’d love to read more.
I’m only halfway through the A-Z sequence, so I’d also very much appreciate it if you could point to things in there to get me excited about progressing through it!