CK, as used here, seems more transactional and situation-specific. Emotional Labor usually refers to a pattern over time, including things like checking for unknown unknowns and “making sure X gets done.” Both ideas are playing in a similar space.
Ericf
Bonus points in a dating context: by being specific and authentic you drive away people who won’t be compatible. In the egg example, even if the second party knows nothing about the topic, they can continue the conversation with “I can barely boil water, so I always take a frozen meal in to work” or “I don’t like eggs, but I keep pb&j at my desk” or just swipe left and move on to the next match.
Follow up question: is this a permanent gain or temporary optimization (eg without further intervention, what scores would the subject get in 6 months?)
We know for sure that eating well and getting a good night’s sleep dramatically improves performance on a wide array of mental tasks. It’s not a stretch to think other interventions could boost short term performance even higher.
For further study: Did the observed increase represent a repeatable gain, or an optimization? Within-subject studies show a full SD variation between test sessions for many subjects, so I would predict that “a set of interventions” could produce a “best possible score” for an individual but hit rapid diminishing returns.
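A minimal simulation sketch of that prediction (all numbers illustrative: a fixed true ability with roughly 1 SD of session-to-session noise, using an IQ-style mean of 100 and SD of 15): taking the best of n sessions inflates the apparent score, but each doubling of attempts adds less than the previous one.

```python
import random

def mean_best_of(n, trials=2000, mu=100, sd=15, seed=0):
    """Average of the best score across n noisy test sessions,
    assuming a stable true ability mu with per-session noise sd."""
    rng = random.Random(seed)
    return sum(max(rng.gauss(mu, sd) for _ in range(n))
               for _ in range(trials)) / trials

for n in (1, 2, 4, 8):
    print(n, round(mean_best_of(n), 1))
```

The best-of-1 mean sits near 100, and the marginal gain from each doubling shrinks: the diminishing-returns shape predicted above, without any change in underlying ability.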
Communication bandwidth: if you find that you’re struggling to understand what the person is saying or get on the same page as them, this is a bad sign about your ability to discuss nuanced topics in the future if you work together.
Just pulling this quote out to highlight the most critical bit. Everything else is about distinguishing between BS and ability to remember, understand, and communicate details of an event (note: this is a skill not often found at the 100 IQ level). That second thing isn’t necessarily a job requirement for all positions (eg sales, entry level positions), but being comfortable talking with your direct reports is always critical.
The described “next image” bot doesn’t have goals like that, though. Can you take the pre-trained bot and give it a drive to “make houses” and have it do that? When all the local wood is used up, will it know to move elsewhere, or plant trees?
If you have to give it a task, is it really an agent? Is there some other word for “system that comes up with its own tasks to do”?
Note that you have reduced the raw quantity of dust specks by “a lot” with that framing. Heat death of the universe is in “only” 10^106 years, so that would be no more than 2^(10^106) people (if we somehow double every year), compared to 3^^^3, which is 3^^(3^27): a power tower of threes 7,625,597,484,987 levels tall (a number too big to write down).
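To make the up-arrow notation concrete, here is a toy implementation of Knuth's arrows (the function name is mine; only tiny inputs are actually computable):

```python
def up(a, n, b):
    """Knuth up-arrow a ^(n) b: n=1 is plain exponentiation,
    and each higher level iterates the level below it."""
    if n == 1:
        return a ** b
    if b == 0:
        return 1
    return up(a, n - 1, up(a, n, b - 1))

print(up(3, 1, 3))  # 3^3 = 27
print(up(3, 2, 3))  # 3^^3 = 3^(3^3) = 7625597484987
# up(3, 3, 3) is 3^^^3: a tower of 3^^3 = 7,625,597,484,987 threes.
# For scale, 2^(10^106) is already smaller than a tower of threes
# just five levels tall (3^^5).
```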
200 years ago was 1824. So compared to buying land or company stocks (the London and NY stock exchanges were well established by then) or government bonds.
Narrator: gold has been a poor bet for 90% of the last 200 years.
(Don’t quote me on that, but it is true that gold was a good bet for about 10 years in recent memory, and a bad bet for most post-industrial time)
I can’t tie up cash in any sort of escrow, but I’d take that bet on a handshake.
Mr. Pero got fewer votes than either major party candidate. Not a ringing endorsement. And I didn’t say the chances were quite low; I said they were zero*. That is at least 5 orders of magnitude difference from “quite low,” so I don’t think we agree about his chances.
*technically odds can’t be zero, but I consider anything less likely than “we are in a simulation that is subject to intervention from outside” to be zero for all decision making purposes.
There is an actual 0% chance that anyone other than the Democratic or Republican nominee (or their replacement in the event of death etc.) becomes president. Voting for/supporting any other candidate has, historically, done nothing to support that candidate’s platform in the short or long term. If you find both options without merit, you should vote for your preferred enemy:
Who will be most receptive to your message, either in a compromise or an argument? And/or:
So sorry about your number 1 issue, neither party cares. What’s your number 2 issue, maybe there is a difference there?
Do you have a link to the study validating that the LLM responses actually match the responses given by humans in that category?
Note one weakness of this technique. An LLM is going to provide what the average generic written account would be. But messages are intended for a specific audience, sometimes a specific person, and that audience is never “generic median internet writer.” Beware WEIRDness. And note that visual/audio cues are 50-90% of communication, and 0% of LLM experience.
How does buying “none of the above” work as you add more entries? If someone buys NOTA today, and the winning entry is #13, does everyone who bought NOTA before it was posted also win?
Agree that closer to reality would be one advisor, who has a secret goal, and player A just has to muddle through against an equal-skill bot, deciding how much advice to take. And playing like 10 games in a row, so results can be accurately evaluated against the expected value of 5 wins.
Plausible goals to decide randomly between:
Player wins
Player loses
Game is a draw
Player loses their Queen (ie opponent still has their queen after all immediate trades and forcing moves are completed)
Player loses on time
Player wins, delivering checkmate with a bishop or knight move
Maximum number of promotions (for both sides combined)
Player wins after having a board with only pawns
Etc...
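The 10-game benchmark above can be made precise. A hedged sketch (assuming the bot really is equal-skill, so each game is modeled as a fair coin flip with no draws, which undersells real chess): the chance of each win count follows the binomial distribution, which shows how far from 5/10 a result must be before it means anything.

```python
from math import comb

def p_wins(k, n=10, p=0.5):
    """Probability of exactly k wins in n independent games at win rate p."""
    return comb(n, k) * p**k * (1 - p)**(n - k)

for k in range(11):
    print(k, round(p_wins(k), 3))

# Even against a perfectly matched opponent, a result of 8+ wins or
# 2-or-fewer wins happens about 11% of the time, so a 10-game sample
# is a fairly noisy benchmark for the advisor's influence.
```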
Arguing against A doesn’t support Not A, but arguing against Not Not A is arguing against A (while still not arguing in favor of Not A) - albeit less strongly than arguing against A directly. No back translation is needed, because arguments are made up of actual facts and logic chains. We abstract it to “not A” but even in pure Mathematics, there is some “thing” that is actually being argued (eg, my grass example).
Arguing at a meta level can be thought of as putting the object level debate on hold and starting a new debate about the rules that do/should govern the object level domain.
Alice: grass is green → grass isn’t not green
Bob: the grass is teal → the grass is provably teal
Alice: your spectrometer is miscalibrated → your spectrometer isn’t not miscalibrated
...
I’m having trouble with the statement {...and has some argument against C’}. The point of the double negative translation is that any argument against not not A is necessarily an argument against A (even though some arguments against A would not apply to not not A). And the same applies to the other translation—Alice is steelmanning Bob’s argument, so there shouldn’t be any drift of topic.
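The double-negative claim is mechanical in classical two-valued logic; a trivial exhaustive check (illustrative only, since the interesting cases in the thread are the informal ones):

```python
# Exhaustively verify: on every classical valuation, "not not A" has the
# same truth value as A, so an argument refuting not-not-A necessarily
# refutes A, and vice versa.
for A in (True, False):
    assert (not (not A)) == A

print("not not A is equivalent to A on all classical valuations")
```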
Why not both? It’s a minority of people who have the ability and inclination to learn how to conform to a different milieu than their natural state.