conspi-rationalist
Coafos
I think a useful concept would be the colour of bits. For example, a digital song can be bought on a CD or downloaded from the internet. The computer does not see a difference between them, because it just sees a number, but in the eye of the law, one of them is legal, the other is not.
The number on the CD is coloured “green”, the downloaded number is coloured “red”. Green numbers are legal, but red numbers are not. If you upload a song from a CD, it will be red, because you can only send red numbers. However, if the studio produces a new CD, it will have green numbers, because they have the copyright to the song.
Anyone can copy a digital artwork because it is just a number, but the copied number will be coloured “yellow”. With an NFT you do not buy a number, you buy the right to make this number “blue”. This right can be worth a lot of money if a blue number is worth more than a yellow one.
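A minimal sketch of the idea (the byte string and the colour labels are made up for illustration): the computer can only compare the bits, while the colour is external bookkeeping that is not a function of the bits at all.

```python
# A sketch of "coloured" numbers: the colour is not a function of the bits,
# it is provenance tracked outside of them. All names here are illustrative.

cd_copy = (b"RIFF....WAVE", "green")  # pressed by the rights holder
upload  = (b"RIFF....WAVE", "red")    # bit-for-bit identical, different history

bits_equal    = cd_copy[0] == upload[0]  # True: the computer sees one number
colours_equal = cd_copy[1] == upload[1]  # False: the law sees two objects
print(bits_equal, colours_equal)
```

No program that looks only at the first element of each tuple can ever recover the second; that is the whole point of the colour metaphor.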
I do not think that the prototypical scientific method loses its value in the long term.
In any experiment, there are lots of naturally varying parameters (current phase of the Moon, air pressure, amount of snow on the slope), and there are lots of naturally constant parameters (strength of gravity, room temperature, amount of hydroxyhypotethicol in the solution). There are base and derived parameters. The distances from the sun and the orbital periods vary between the planets, but (distance)^3/(orbital period)^2 is constant.
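The derived-parameter example can be checked numerically: for the planets, (distance)^3/(orbital period)^2 comes out (nearly) the same, even though distance and period each vary. The orbital data below are standard textbook values, quoted from memory, in AU and years.

```python
# Kepler's third law as a "derived constant parameter": a^3 / T^2 is the
# same for every planet, even though a and T vary between them.
# Semi-major axis in AU, orbital period in years (approximate values).

planets = {
    "Mercury": (0.387, 0.241),
    "Earth":   (1.000, 1.000),
    "Mars":    (1.524, 1.881),
    "Jupiter": (5.204, 11.862),
}

for name, (a, T) in planets.items():
    print(f"{name:8s} a^3/T^2 = {a**3 / T**2:.3f}")
# Every ratio comes out close to 1.0 in these units.
```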
In the experiment, you measure X and Y. If X varies but Y is constant, then they probably have no relation. Suppose we want to find out whether X is related to B or to C. We let B vary and hold C constant. If X varies, then it is not connected to C; if X is constant, then it is unrelated to B.
In the second scenario, you try to find the minimal set of base parameters related to X (growth rate). After some testing, we find that (growth rate) ~~ (initial age). Once we have found that connection, we can rule out the uncontrolled varying parameters, but there may still be a connection between X and an uncontrolled constant parameter. It is possible that (growth rate) ~~ (initial age) × (1 + (amount of hydroxyhypotethicol)), and the first scenario tests these kinds of connections.
It is not enough to find which parameters won’t affect the experiment. It is also important to find out which parameters could affect the experiment.
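The B/C test above can be sketched in a few lines. The hidden law X = 2·B is an assumption invented purely for this illustration; the point is the protocol, not the formula.

```python
# Sketch of the controlled experiment: X secretly depends on B but not on C.
# Varying B while clamping C makes X vary; clamping B while varying C
# leaves X constant. X = 2*B is a made-up hidden law for the illustration.
import random

def run_experiment(B, C):
    return 2 * B  # hidden law: X depends only on B

# Scenario A: vary B, hold C constant -> X varies.
xs_vary_B = {run_experiment(B=random.uniform(0, 1), C=5.0) for _ in range(100)}
print(len(xs_vary_B) > 1)   # True: the variation cannot come from C

# Scenario B: hold B constant, vary C -> X stays constant.
xs_vary_C = {run_experiment(B=0.5, C=random.uniform(0, 1)) for _ in range(100)}
print(len(xs_vary_C) == 1)  # True: X is unrelated to C
```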
Interesting post. Abstractions are the few stable-ish quantities that weren’t eaten by chaotic noise.
Exponentially growing errors are not always chaotic. Suppose you have around 1000 starting cells, with a 1% error in the population size, and the number of cells doubles each hour. Ten hours later, the absolute error of the population size can be 10.24 times larger than the initial population; however, the relative error is still 1%. (The billiard-ball example is still chaotic, but the tilde character does the heavy lifting: 31.4 with 10% error is an imprecise but usable measurement; sin(31.4 ± 10%) is garbage.)
If the relative error of a quantity remains bounded as the elements of a system interact, then this value could be a useful abstraction.
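Both halves of the example above can be checked numerically: under doubling, the relative error stays at 1% while the absolute error explodes, whereas a 10% band around 31.4 covers a full period of sin, so the output is spread over essentially all of [-1, 1].

```python
# Relative error is preserved under doubling; sin() destroys it entirely.
import math

pop, err = 1000.0, 10.0          # 1000 cells with 1% error
for _ in range(10):              # ten doublings (ten hours)
    pop, err = 2 * pop, 2 * err
print(err / pop)                 # still 0.01: relative error preserved
print(err / 1000.0)              # 10.24: absolute error vs initial population

# Sample sin across the 10% band around 31.4 (a span of ~2*pi radians).
x = 31.4
vals = [math.sin(x * (0.9 + 0.02 * k)) for k in range(11)]
print(min(vals), max(vals))      # roughly -1 .. 1: the measurement is garbage
```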
One of my favourite Gettier-like problems is about black holes.
Say you have a very dense star: so dense that the gravitational force on its surface can pull back even the particles of its light, leaving only a black hole in the sky. How large can it be for a given mass?
It’s an easy exercise using Newtonian mechanics. Take a light particle with mass m. Its gravitational energy at a distance r is −GMm/r, and its kinetic energy is (1/2)mc² at the start. If the total energy is negative, then the path of the light particle will stay within a boundary. Therefore the answer to the question is R = 2GM/c²: if the object is smaller than this, then it will be a black hole.
Of course, for objects that dense, Newtonian predictions break down. We should care about curved spacetime and use general relativity in our calculations. The answer (to my knowledge) is the Schwarzschild radius, which is r_s = 2GM/c², the exact same formula.
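As a sanity check on r_s = 2GM/c², plugging in rough constants for the Sun (standard values, quoted from memory) gives the well-known ~3 km figure:

```python
# Schwarzschild radius of a solar-mass object: r_s = 2GM/c^2.

G = 6.674e-11      # gravitational constant, m^3 kg^-1 s^-2
c = 2.998e8        # speed of light, m/s
M_sun = 1.989e30   # solar mass, kg

r_s = 2 * G * M_sun / c**2
print(r_s)         # about 2.95e3 m: squeeze the Sun below ~3 km and it's a black hole
```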
Have you heard about Infra-Bayesianism?
If I understand it correctly, the core idea is to “consider every possible scenario, and use a maximin policy while caring about counterfactual branches”, which is very similar to the idea presented in the linked post. The “Nirvana trick” in the other post is similar to just eliminating the branches/cells where the agent would take a different action from the predicted policy.
Non-Nashian Game Theory is Pareto optimal, Infra-Bayesianism implements Updateless Decision Theory. If the two are connected, that could mean that UDT and Pareto-optimality are connected too.
In section 1: as the word ‘affine’ is commonly used, the mixture coefficient can be any real number. When the coefficient lies in [0, 1], it is still an affine combination, but the more precise term would be ‘convex combination’.
However, as you work with function spaces, convexity in a function space is different from being a convex function, so maybe some new notation should be introduced.
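A tiny numerical illustration of why the coefficient range matters, using made-up functions f(x) = x² and g(x) = x⁴ (both convex) and a grid-based midpoint test: a convex combination of convex functions is again convex, while an affine combination with a coefficient outside [0, 1] need not be.

```python
# Convex vs affine combinations of two convex functions, f(x)=x^2, g(x)=x^4.

def combo(t, x):
    return t * x**2 + (1 - t) * x**4

def is_convex(h, lo=-2.0, hi=2.0, n=200):
    # Midpoint test on a grid: h((a+b)/2) <= (h(a)+h(b))/2 for all pairs.
    xs = [lo + (hi - lo) * i / n for i in range(n + 1)]
    return all(h((a + b) / 2) <= (h(a) + h(b)) / 2 + 1e-9
               for a in xs for b in xs)

print(is_convex(lambda x: combo(0.5, x)))  # True:  t in [0,1], convex combination
print(is_convex(lambda x: combo(2.0, x)))  # False: t = 2 gives 2x^2 - x^4
```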
Once every month (at least as far as I know).
Not like it’s gonna matter (<100), but if it did, I don’t want future me to do the funni.
Great story. I hadn’t thought you could cross steampunk and the singularity, but it kind of works.
(Epistemic status: vibes. I went to a few EA cons, and subscribed to the forum digest.)
I blame EA. They were simply too successful.
There are the following effects at play:

- Bad AI gonna kill us all :(
- Preparing for emergent threats is one of the most effective ways to help others.
- The best way to have good ideas is to have a lot of ideas; and the best way to have a lot of ideas is to have a lot of people.
- Large funnels were built for new AI Safety researchers.
- The largest discussions about the topic happened on LW and in rat circles.
The general advice I heard at EA conferences in late Feb/Mar (notice the spike! it’s March, before the big doompost. edit: it’s actually after the doompost, I misread the graphs) is that you should go to LW for AI-specific stuff.
What a coincidence that the flood of AI posts on LW and the cries about the drop in EA Forum quality happened at the same time. I think with the EA movement growing exponentially in numbers, both sites are getting eternal-Septembered.
I think the solution could be to create a new frontpage for AI-related discussions, with categories like “personal blog”, “LW frontpage”, “AI Safety frontpage”. Or go down the whole subforum route, with child boards and the like.
Political polarization is very high in the US, but this is not a global phenomenon: in other countries, polarization is currently decreasing.
What about weekends? There are currently 104 days in a year where you’re not supposed to work.
The big difference is that these days are uniformly distributed through the year, and aren’t in a one or two week block.
ITT: Millennials lamenting the decadence of youth :)
On a more serious note, awesome.
I like reading text, prefer transcripts of podcasts to the actual audio (I find it boring, even when sped up), and spend too much time looking at memes on reddit and facebook. Sometimes I yell at clouds.
Somehow, videos are missing from my media consumption. I attribute it to using third-party apps for everything: these apps create a barrier in the endless flow, and I have to choose content more intentionally.
I started watching the videos, and holy shoes, you found the right buttons. If your vids are any indication of what’s going on on these platforms, I’ll update towards TikTok being actively harmful for cognition. (Not a critique of you; but the weapons you showed are symmetric and powerful, so it’s possible there’s enemy action there.) I can imagine myself getting addicted to those; I guess I got lucky.
All in all, I think it’s a good project because I believe rational memes are good. If there’s a fentanyl crisis, and you’re selling heroin, then it’s better to have rats on heroin.
Thank you for writing this post; I think it’s a useful framing of the problem. For me personally, the doom game is fun: imho I have more motivation to do things, and I’ve become more self-confident (if it all ends, what worse could happen?). But that’s just me, with my socially isolated Math/ComSci/CosHo background.
For others, I don’t think it’s a good game. I had noticed the tons of psychotic breakdowns around the field and, like, that’s bad, but I couldn’t have articulated why.
And even for me, I might have overshot with the whole information-hazard share-or-not thinking. It’s better if you’re in charge of the game, and don’t let the doom game play you.
That’s the second filter, because “optimizing” has two parts: having a goal, and maximising (or minimising) it.
First, one has to acknowledge that solving alignment is a goal. Many people do not recognize that it’s a problem, because “smart robots will learn what love means and won’t hurt us”.
What you talked about in your post comes after this: when someone is walking towards the goalpost of alignment, they should realize that there might be multiple routes there, and choose the quickest one, because only winning matters.
Marketers, scammers and trolls are trying to control the internet bottom-up, joining the ranks of the users and going against internet institutions. While that’s a problem (possibly a big one), a worse situation is when the internet institutions themselves start using LLMs for top-down control. For a fictional example, see heaven banning.
It’s evening, the sun is set. A man walks up to a scholar:
“Scholar, the sun rose yesterday and today morning. Will it rise again tomorrow?”
“Man, I don’t know, it’s kinda dark right now. Have you heard about the no free lunch theorem?”
Have you heard that “the medium is the message”? It was written before the internet happened, and said that society becomes TV-like or radio-like if it consumes a lot of TV or radio.
It is interesting to see how this idea applies to the internet. I agree with you that we should not treat the internet as one block, because each site has its own artifacts. I think there should be more exploration of ideas, but on other sites. LW in its current form is suited for long, essay-type posts, which I think is good for its stated purpose: methodical discussion of ideas.
Thanks for the post, it’s an important update on the state of information warfare.
Privacy can be thought of as a shield. If you build a wall against small-arms spam, that’s OK; but if you try to build an underground bunker, that’s weird, because only Certified Good Guys have access to advanced weapons. Why are you trying to protect yourself against the Certified Good Guys?
What changed is that, thanks to AI advancements in the last few years, it has become possible to create homemade heat-seeking infomissiles. Suddenly, there are other arguments for building bunkers.
Hi!
I am a mathematics university student from Europe. I don’t comment often, and English isn’t my native language, so sorry for any mistakes in my tone or language. I’ve been reading this site since March, but I first heard about it a long time ago.
I was always interested in computers and AI, so I found LW and MIRI in 2015, but I didn’t stay at the time. I think my real entry point was when someone (around 2018, maybe?) recommended Unsong on reddit because it was weird and fun. I read a lot of stories on the rational fiction subreddit. (Somehow, I did not read HPMoR. Yet.) This March there was a national lockdown, I got bored, so I looked up SSC and this site again. Since then, I have read a lot of quality essays, for which I’m thankful.
This winter, I’m trying to participate more in online communities. I am interested in almost any topic, and I know a lot about mathematics and computers, so I might write something adjacent to those subjects.
Best,
CoafOS