Does anyone remember a series of quick shorts posted by Scott Alexander (I think?) about a nation that incorporates prediction markets into its political process, where a dictator takes over by opening a market on whether he will become dictator in the next election cycle?
It’s been 15 years. Did you figure out how to be less scared?
The solution isn’t trying harder to be liked. It’s expanding your comfort with being disliked.
Social anxiety is an optimal response when there is a scarcity of other people to interact with. If you are meeting 100 new people every day, it doesn’t matter if 99 of them dislike you, so long as you get another 100 new people tomorrow; as long as you keep playing, you will continue to gather people who like you.
If your total pool of people to interact with is very small, then it suddenly becomes incredibly important not to be disliked, because you will quickly exhaust all your social prospects and end up disliked by everyone.
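A toy simulation of the asymmetry (a minimal sketch; the 100-per-day pool size and the 1% like-rate are illustrative assumptions, not claims):

```python
import random

def liked_people(pool_per_day: int, days: int, p_like: float = 0.01) -> int:
    """Count people who end up liking you when the pool refreshes daily."""
    return sum(
        1
        for _ in range(pool_per_day * days)
        if random.random() < p_like
    )

# With a refreshing pool, fans accumulate linearly even at a 1% hit rate...
print(liked_people(pool_per_day=100, days=30))  # ~30 people who like you

# ...but with a fixed pool of 20 people, expected fans = 20 * 0.01 ≈ 0,
# and every bad impression permanently shrinks your remaining prospects.
```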
I recently heard that thinking out loud is an important way for people to build trust (not just for LLMs) and this has helped me become more vocal. It has unfortunately not helped me become more correct, but I’m betting the tradeoff will be net positive in the long run.
Go find people who are better than you by a lot. One way to do this quickly is to join some sort of physical exercise class, e.g. running or climbing; there will be lots of people who are better than you, and you will feel smaller.
Or you could read research papers, or watch a movie with real-life actors who are really good at acting.
You will then figure out, as @Algon has mentioned in the comments, that the narcissism is load-bearing, and you will have to deal with that, which is a lot scarier.
Game-theory trust is built through the expectation of reward from future cooperative scenarios. It is difficult to build this when you ‘don’t actually know who or how many people you might be talking to’.
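One standard way to formalise this (a sketch using the iterated prisoner’s dilemma; the payoff numbers are textbook defaults, not anything from this conversation): cooperation is only rational when the probability of meeting again is high enough, and not knowing who or how many people you are talking to is precisely not knowing that probability.

```python
def cooperation_is_stable(T: float, R: float, P: float, delta: float) -> bool:
    """Grim-trigger folk-theorem condition for the iterated prisoner's
    dilemma: cooperating forever beats defecting once iff
    delta >= (T - R) / (T - P), where delta is the probability that
    you will interact with this person again."""
    return delta >= (T - R) / (T - P)

# Textbook payoffs: temptation T=5, mutual cooperation R=3, mutual defection P=1.
print(cooperation_is_stable(T=5, R=3, P=1, delta=0.9))  # True: long shared future
print(cooperation_is_stable(T=5, R=3, P=1, delta=0.2))  # False: strangers, so defect
```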
I did see the XKCD and I agree, haha; I just thought your phrasing implied ‘optimize everything (indiscriminately)’.
When I say caching, I mean retaining intermediate results and tools when the cost of doing so is near zero.
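A minimal sketch of what I mean, using Python’s built-in memoization as the degenerate near-free case (the function here is a made-up stand-in):

```python
from functools import lru_cache

# "Retain intermediate results if the cost to do so is near free":
# here the retention cost is one dict entry per distinct call.
@lru_cache(maxsize=None)
def expensive_analysis(document_id: str) -> str:
    # Stand-in for real work (parsing, an API call, a model run, ...).
    return document_id.upper()

expensive_analysis("report-2024")  # computed once
expensive_analysis("report-2024")  # free: served from the cache
```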
Nice. So something like grabbing a copy of the SWE-bench dataset, writing a pipeline that would solve those issues, then putting that on your CV?
I will say though that your value as an employee is not ‘producing software’ so much as solving business problems. How much conviction do you have that producing software marginally faster using AI will improve your value to your firm?
So you want to build a library containing all human writings, plus an AI librarian.
The ‘simulated planet Earth’ is overkill. Why not a plain-text chat interface, e.g. what ChatGPT is doing now?
Of those people who use ChatGPT over real-life libraries (of course not everyone), why don’t they ‘just consult the source material’? My hypothesis is that the source material is dense and there is a cost to extracting the desired material from it. Your AI librarian does not solve this.
I think what we have right now (“LLM assistants that are to-the-point” and “libraries containing source text”) serve distinct purposes and have distinct advantages and disadvantages.
LLM-assistants-that-are-to-the-point are great, but they
don’t exist in-the-world, and therefore sometimes hallucinate or provide false-seeming facts; for example, a statement like “K-Theanine is a rare form of theanine, structurally similar to L-Theanine, and is primarily found in tea leaves (Camellia sinensis)” is statistically probable (I pulled it out of GPT-4 just now) but factually incorrect, since K-theanine does not exist.
don’t exist in-the-world, leading to suboptimal retrieval; i.e. if you ask an AI assistant ‘how do I slice vegetables’ when your true question is ‘I’m hungry, I want food’, the AI has no way of knowing that, and the AI also doesn’t immediately know which vegetables you are slicing, which limits its utility.
Libraries containing source text partially solve the hallucination problem, because human authors of source text typically don’t hallucinate (except for every poorly written self-help book out there).
From what I gather, you are trying to solve the two problems above. Great. But doubling down on ‘the purity of full text’ and wrapping some fake grass around it is not the solution.
Here is my solution:
Atomize texts into conditional, contextually-absolute statements, then run retrieval on these statements (see the sketch after this list). For example, “You should not eat cheese” becomes “eating excessive amounts of typically processed cheese over the long run may lead to excess sodium and fat intake”.
Help AI assistants come into the world, while maintaining privacy.
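A minimal sketch of point 1. The atomize step is hand-written here where a real system would call an LLM, and the retrieval score is toy word overlap where a real system would use embeddings; both are stand-ins, not proposals:

```python
def atomize(text: str) -> list[str]:
    """Rewrite a text into conditional, contextually-absolute statements.
    In practice this would be an LLM call; a hand-written stand-in here,
    using the cheese example from the post."""
    return [
        "eating excessive amounts of typically processed cheese over the "
        "long run may lead to excess sodium and fat intake",
    ]

def score(query: str, statement: str) -> float:
    """Toy retrieval score: word overlap. A real system would use embeddings."""
    q, s = set(query.lower().split()), set(statement.lower().split())
    return len(q & s) / len(q | s)

statements = atomize("You should not eat cheese")
query = "is cheese bad for me long term"
best = max(statements, key=lambda st: score(query, st))
print(best)  # retrieval runs over the atomized statements, not the raw text
```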
Another consequence of this is that inviting your friend to zendo is not weird, but publicly inviting all your friends to zendo is.
‘Weirdness’ is not about being different from the group; it is about causing the ingroup pain, which happens to correlate with being distinct from the ingroup (weird). We should call these ingroup-pain-points.
Being loudly vegan spends ingroup-pain-points, because getting in someone’s face and criticising their behaviour causes them pain. Serving your friends tasty vegan food does not cause them pain and therefore incurs no ingroup-pain-points.
There is a third class of ingroup pain point that I will call the ‘cultural pain point’. My working definition of ‘culture’ is ‘suboptimal behaviours that signal ingroup membership’. If you refuse to partake in a suboptimal behaviour, this does not cause you pain, but since you are now in a better position than others in the ingroup, you have caused them pain. This is why you can be vilified for being vegan in certain ‘cultures’: you are being more optimal (healthier) relative to other people in a way that is (implicitly or explicitly) identified as a signalling-suboptimal-behaviour.
‘If some 3rd party brings that bird home to my boss instead of me, I’m going to be unwealthy and unemployed.’
Have you talked to your boss about this? I have; for me, the answer was some combination of:
“Oh but using AI would leak our code”
“AI is a net loss to productivity because it errors too much / has context length limitations / doesn’t care for our standards”
And that is not solvable by a third party, so my job is safe. What about you?
I recall a solution to the outer alignment problem along the lines of ‘minimise the number of options you deny to other agents in the world’, which is a more tractable version of ‘minimise net long-term changes to the world’. There is an article explaining this somewhere.
How would you define ‘continued social improvement’? What are some concrete examples?
What is society? What is a good society vs a bad society? Is social improvement something that can keep going up forever, or is it bounded?
Please write a reply if you are downvoting me. I want to hear from you; you seem to have something to add.
What does ‘greedy’ mean in your ‘in short’? My definition of greedy is the computational one, i.e. reaching for the low-hanging fruit first.
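For concreteness, the textbook instance of that definition (a toy sketch, greedy coin change: always take the largest coin that fits, with no lookahead):

```python
def greedy_change(amount: int, coins=(25, 10, 5, 1)) -> list[int]:
    """Always grab the largest coin that fits (the low-hanging fruit)."""
    picked = []
    for coin in coins:          # coins sorted largest-first
        while amount >= coin:
            picked.append(coin)
            amount -= coin
    return picked

print(greedy_change(41))  # [25, 10, 5, 1]: locally best picks at each step
```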
You also say ‘if (short term social improvements) become disempowered the continued improvement of society is likely to slow’, and ‘social changes that make it easier to continuously improve society will likely lead to continued social improvement’. This makes me believe you are advocating for compounding social improvements, which may cost more. Is this what you mean by greedy?
Also, have you heard of rolling wave planning?
Interesting. This implies a good deceiver has the power to determine another agent’s model and to signal in a way that is aligned with that model. I previously read an article on hostile telepaths (https://www.lesswrong.com/posts/5FAnfAStc7birapMx/the-hostile-telepaths-problem) which may be pertinent.
Superstimulus should be avoided because:
it increases perceived opportunity cost, leading to indecision;
it saturates one end of a complex system, which can cause downstream parts of the system to fail.
An outline is not a table of contents: an outline contains the full text of an article, nested and tucked away, expandable on demand; whereas a table of contents is a listing of titles that still requires you to navigate to the actual text.
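In data-structure terms, the distinction looks something like this (a toy sketch; the type and field names are mine):

```python
from dataclasses import dataclass, field

@dataclass
class TocEntry:
    """A table of contents points at text living elsewhere."""
    title: str
    page: int  # you still have to navigate there

@dataclass
class OutlineNode:
    """An outline carries the full text, nested and collapsed by default."""
    title: str
    body: str                       # the actual text, tucked away
    children: list["OutlineNode"] = field(default_factory=list)
    expanded: bool = False          # expandable on demand
```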
https://www.lesswrong.com/posts/n5TqCuizyJDfAPjkr/the-baby-eating-aliens-1-8
There’s an alien race in the above story called the baby-eaters, who eat their sentient offspring: the species is genetically unfortunate enough to produce thousands of immediately-sentient offspring at a time, and must therefore cull them.
More and more each day I think we are becoming like the baby-eaters. Except, of course, we don’t murder our offspring by directly consuming them.
What is murder? What is consumption? It is the reclaiming of a productive resource and the denial of future growth. When a poor person, having lived through years of giving society what little they must in order to survive, dies on the street, another person has been eaten by society.
We are eating each other, and often by the time someone has reached their preteens we know that they will be one of the ones that will be eaten. Therefore, in many ways, we are becoming baby-eaters.
However, unlike the baby-eaters, not all of our society has normalized baby-eating to the point that we cannot see it as anything other than unquestionably good.
What would we ask of the baby-eaters? How should we act today?