manifold.markets/Sinclair
Sinclair Chen
I suspect this is less accurate at recommending personalized content compared to social media algorithms (like TikTok’s) that consider more data, yet it is also not much more transparent than those algorithms.
You could show the actual eigenkarma—but you’d have to accurately convey what that number means, make sure users don’t think it’s global like reddit/hn, and you can’t show it when logged out, in link previews, or in Google search. Compare this to the simplicity of showing global karma—it’s just a number and 2 tiny buttons that can be inline with the text. LW jams two karmas into each comment and it makes sense. The anime search website Anilist lets users vote on category/genre tags and similar shows on each show’s page, and it all fits.
I think “stuff liked by writers who wrote stuff I like” is less accurate than “stuff liked by people who liked content I like”. There are usually far fewer writers than likers.
I think it’s also less transparent than “stuff written by writers I subscribed to”
I’ve tried this. It only occurred to me after I moved to a dirty and ugly neighborhood. Before then I lived either in car-centric suburbia or in pleasant city neighborhoods with lots of nearby shops.
anyways it’s a good idea
can’t you wipe a hard drive’s data by redacting it?
This reminds me of the AP tests in America. These are tests administered by the College Board (the same company that runs the SAT) which give college credit for their subject. Many high schools teach AP classes for particular tests, but you could just study for them yourself.
This also reminds me of China’s gaokao—a giant standardized test that all high schoolers take for college placements. There was a large market for after-school tutoring for these tests, before the PRC banned the entire industry. I think Japan and Taiwan have similar systems.
Decoupling testing from teaching is just commonsense incentive design. It has been tried before and it works.
It’s not called an “exam-only university” because it gives out tests once a quarter out of rented facilities and has no campus, no dorms, no frats, and no clubs.
Manifold Markets community meetup
I’m trying to see if pol.is would be good for this, like so: https://pol.is/4fdjudd23d
pol.is is a tool for aggregating opinions on political subjects from large groups of people—it takes agree/disagree votes, clusters participants by the similarity of their votes, and ultimately tries to find consensus opinions. It was used in Taiwan to help write ride-share legislation.
I’m hoping I can misuse it here for operationalizing prediction market questions. If the “manifold users” like to bet on understandable questions, the “forecasters” like to bet on precise questions, and the “researcher” likes questions about day-to-day work, then perhaps by getting enough people from each “party” to weigh in, it will find “consensus” questions that are simultaneously useful, precise, and popular (and therefore more accurate).
I am unsure if pol.is will actually work better at the 10-100 people level compared to a normal forum. Let’s give it a try anyways!
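To make the idea concrete, here is a rough, hypothetical sketch of the general approach (not pol.is’s actual pipeline, and the data and cluster count are made up): represent each participant as a vector of agree/disagree/pass votes, cluster participants by vote similarity, and flag statements that every cluster leans agree on.

```python
# Rough illustration of opinion clustering on agree/disagree votes.
# NOT pol.is's actual algorithm, just the general idea: participants with
# similar voting patterns land in the same cluster, and statements that
# every cluster leans "agree" on are consensus candidates.
import numpy as np
from sklearn.cluster import KMeans

# rows = participants, columns = statements; +1 agree, -1 disagree, 0 pass (made-up data)
votes = np.array([
    [ 1,  1, -1,  0],
    [ 1,  1, -1, -1],
    [-1,  0,  1,  1],
    [-1, -1,  1,  1],
    [ 1,  1,  1,  0],
    [ 0,  1,  1,  1],
])

# group participants into opinion clusters ("parties")
labels = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(votes)

# flag statements that every cluster agrees with on average
for j in range(votes.shape[1]):
    cluster_means = [votes[labels == c, j].mean() for c in np.unique(labels)]
    if all(m > 0 for m in cluster_means):
        print(f"statement {j} looks like a consensus candidate")
```

The hope in the prediction-market case is that the statements surviving this filter would be question operationalizations that the bettors, forecasters, and researcher are all happy with.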
we update our classes very frequently since we use Tailwind and iterate on the styles all the time
I agree, but it’s literally illegal to have real-money prediction markets in the US on anything but finance and maybe elections. The only realistic paths are getting it legalized, building a crypto prediction market that’s actually nice to use and not scammy, or accepting legal risk the way early Uber did.
I will practice more empathy in my daily conversations.
I recommend you focus instead on getting people to laugh as your key metric. It’s much easier to tell whether you are doing well or poorly or merely ok, and succeeding at it gets you very far.
Maybe we should just let people bet on N/A, as Augur does (with some stronger norm of resolving N/A in ambiguous cases).
I think Metaculus’s level of verbosity in resolution criteria is bad in that it makes questions longer to write and longer to understand (because they take longer to read and because they’re more complex). Part of the goal of Manifold is to remove trivial inconveniences so that people actually forecast at all, and so that we get markets on literally everything.
I think the synthesis here is to have a subset of high quality markets (clear resolution criteria, clear meta-resolution norms) but still have a fat tail of medium-quality questions.
(Engineer at Manifold here.) I largely agree! Letting people make markets on anything means that many people will make poorly operationalized markets. Subjective resolution is good for bets on personal events among friends, which is an important use case, but it’s bad for questions with an audience bigger than that.
We need to do a better job on:
1. resolution reliability, e.g. by adding a reputation system or letting creators delegate resolution to more objective individuals / courts.
2. helping users turn their vague uncertainties into objective questions—crucial but less straightforward.
3. surfacing higher-quality content
I’ve changed the betting interface now to be easier to use, albeit with less information
Prediction markets meetup/coworking (hosted by Manifold Markets)
Inositol indeed.
I don’t know anyone else that’s tried this. I’d only bet 55-65% that it works for any given person. But it’s available over the counter and quite safe.
I should probably get around to setting up a more rigorous experiment one of these days...
Is this supposed to say ‘overestimate’?
Yes, corrected.
What info from the paper is the claim based on?
I don’t remember (I copied the points from my notes from months ago when I did the research).
I’ve also concluded from my own research that in-love epistemics are terrible. For instance, in this n=71 study where college students wrote about a time they rejected a romantic confession and a time they were rejected:
- suitors report that rejectors are mysterious, but rejectors do not report being mysterious
- suitors severely overestimate the probability of being liked back
- suitors report that rejections are very unclear; rejectors report the same, but to a much lesser degree/frequency.
I’ve also been overconfident of compatibility and of mutual affection in my own life (n=1)
However, I think there’s something to be said for having something (someone?) to protect.
Eliezer mentions in Inadequate Equilibria using extremely bright lights to treat his partner’s Seasonal Affective Disorder—which was not medical consensus, and only after Eliezer’s experiment are more “official” trials of this intervention being run.
Or in my case, I was so heartbroken over a bad ex that I researched romance science, learned about the similarity between limerence and OCD, and tried a supplement that cured my heartbreak. I wouldn’t normally try new drugs or browse Google Scholar, but I was really motivated.
For anyone else who wants to bet on this, here’s a market on manifold:
> the only punishments possible are a frown or a hand grenade
This is similar to the ultimatum game, which implies that, absent social coordination, a personal solution is for the victim to fine the medium-transgressor a certain amount in damages under threat of some probability of cancelling them, with the probability chosen such that the transgressor would be better off just paying the fine.
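As a toy check of the expected-value logic here (all numbers invented for illustration): the victim just needs the credible cancellation probability to exceed fine divided by the cost of being cancelled.

```python
# Toy numbers, purely illustrative, not from the post.
fine = 100          # damages the victim demands from the transgressor
cancel_cost = 5000  # what getting cancelled would cost the transgressor
p_cancel = 0.05     # probability the victim follows through if the fine isn't paid

# the transgressor pays whenever the fine is cheaper than the expected cost of refusing
expected_cost_of_refusing = p_cancel * cancel_cost   # 250
pays_fine = fine < expected_cost_of_refusing          # True

# equivalently, any credible p_cancel above fine / cancel_cost (here 0.02) works
min_credible_p = fine / cancel_cost
```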
my apartment-mate started reading hpmor today!
she actually couldn’t find it at first because the lw hosting has bad seo, ranked below some ssc post about My Immortal. she had to ask me the name of the fic.
I was surprised to see it here rather than the original site