So, hashtag “tell me the Rationalist community is neurodivergent without telling me they are neuro-divergent”?
Ericf
The real answer is that you should minimize the risk of walking away and leaving the door open for hours, and open it zero times whenever possible. The relative heat loss from one opening vs. many separate openings is not significantly different, but both are much more than zero, and the tail risk of “all the food gets warm and spoils” should dominate the decision.
I don’t think your model is correct. Opening the fridge causes the accumulated cold air to fall out over a period of a few (maybe 4-7?) seconds, after which it doesn’t really matter how long you leave it open, since the air inside is all at room temperature. The stuff will slowly take heat from the room-temperature air, at a rate of about 1 degree/minute. Once the door is closed, it takes a few minutes (again, I don’t know exactly how long) to get the air back to 40°F, and then however long to extract the heat from the stuff. If you are choosing between “stand there with it open” and “take something out, use it, and put it back within a few minutes,” there is no appreciable difference in the air temperature inside the fridge between those two options—in both cases things will return to temperature some minutes after the last closing. You can empirically test how long it takes to re-cool the air simply by getting a fridge thermometer and seeing how the temperature varies with different wait times. Or just see how long it is before the escaping air “feels cold” again.
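The model above can be sketched in a few lines of code. All the constants here (room temperature, fridge setpoint, recool rate) are illustrative guesses, not measured values—the point is only that once the air has fully warmed, the recovery time after closing is the same no matter how long the door stood open.

```python
# Toy model of fridge air temperature after the door closes.
# Assumes the air fully warmed to room temp while open, then recools
# exponentially toward the fridge setpoint. Constants are made up.
ROOM_TEMP = 70.0    # deg F (assumed kitchen temperature)
FRIDGE_TEMP = 40.0  # deg F (typical fridge setpoint)
RECOOL_RATE = 0.2   # fraction of the temperature gap closed per minute (guess)

def air_temp_after_close(minutes_closed):
    """Air temperature a given number of minutes after closing the door."""
    gap = ROOM_TEMP - FRIDGE_TEMP
    return FRIDGE_TEMP + gap * (1 - RECOOL_RATE) ** minutes_closed

# Whether the door was open 10 seconds or 5 minutes, the air starts at
# ROOM_TEMP either way, so the recovery curve is identical:
for minutes in (0, 5, 10, 15):
    print(minutes, round(air_temp_after_close(minutes), 1))
```

Swapping in measured numbers from a fridge thermometer, as suggested above, would turn this from a sketch into an actual test of the model.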
Re: happiness, it’s that meme graph:
Dumb: low expectations, low results, is happy.
Top: can self-modify expectations to match reality, is happy.
Muddled middle: takes expectations from the environment, can’t achieve them, is unhappy.
The definition of a Nash equilibrium is that you assume all other players will stay with their strategy. If, as in this case, that assumption does not hold, then you have (I guess) an “unstable” equilibrium.
The other thing that could happen is silent deviations, where some players aren’t doing “punish any defection from 99”—they are just doing “play 99” to avoid punishments. The one brave soul doesn’t know how many of each there are, but can find out when they suddenly go for 30.
It’s not. The original Nash construction is that player N picks a strategy that maximizes their utility, assuming all other players get to know what N picked and then pick strategies that maximize their own utility given that. Minimax as a goal is only valid for atomic game actions, not complex strategies—specifically because of this “trap.”
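The “hold everyone else fixed” definition is mechanical enough to check in code. Here is a minimal sketch: a profile is a Nash equilibrium iff no single player can gain by deviating unilaterally. The toy coordination game at the bottom is a made-up stand-in, not the 99/30 game from the post.

```python
# Minimal Nash-equilibrium check: a strategy profile is an equilibrium
# iff no player can improve their payoff by deviating alone while all
# other players stay with their strategy.

def is_nash(profile, actions, payoff):
    """profile: tuple of actions; payoff(i, profile) -> player i's utility."""
    for i in range(len(profile)):
        current = payoff(i, profile)
        for alternative in actions:
            deviated = profile[:i] + (alternative,) + profile[i + 1:]
            if payoff(i, deviated) > current:
                return False  # player i has a profitable unilateral deviation
    return True

# Toy 2-player coordination game: both get 1 if they match, 0 otherwise.
actions = ("A", "B")
payoff = lambda i, p: 1 if p[0] == p[1] else 0

print(is_nash(("A", "A"), actions, payoff))  # matching is stable
print(is_nash(("A", "B"), actions, payoff))  # mismatching is not
```

Note what the check does *not* cover: if other players react to the deviation (e.g. by punishing), the “everyone else stays put” assumption is violated, which is exactly the instability described above.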
There is a more fundamental objection: why would a set of 1s and 0s represent (given periodic repetition in 1⁄3 of the message, so dividing it into groups of 3 makes sense) specifically 3 frequencies of light and not:
Sound (hat tip: Project Hail Mary)
An arrangement of points in 3d space
Actually 6 or 9 “bytes” to define each “point”
Or the absolute intensity or scale of the information (hat tip Monty Python tiny aliens)
I think the key faculty of an agent vs. a calculator is the capability to create new short-term goals and actions. A calculator (or water, or bacteria) can only execute the “programming” that was present when it was created. An agent can generate possible actions based on its environment, including options that might not even have existed when it was created.
I think even these first rough concepts have a distinction between beliefs and values. Even if the values are “hard coded” from the training period and the manual goal entry.
Being able to generate short-term goals, execute them, and see if you are getting closer to your long-term goals is basically all any human does. It’s a matter of scale, not kind, between me and a dolphin and AgentGPT.
In summary: Creating an agent was apparently already a solved problem, just missing a robust method of generating ideas/plans that are even vaguely possible.
Star Trek (and other sci-fi) continues to be surprisingly prescient, and “Computer, create an adversary capable of outwitting Data” producing an agent AI is actually completely realistic for 24th-century technology.
Our only hopes are:
The accumulated knowledge of humanity is sufficient to create AIs with the equivalent of an IQ of 200, but not 2000.
Governments step in and ban things.
Adversarial action keeps things from going pear-shaped (winning against nature is much easier than winning against other agents—just ask any physicist who has tried to beat the stock market).
Chimps still have it pretty good, at least by their own standards, even though we took over the world.
Consider a fictional king with two advisors. One predicts the next heavy rainfall will occur in 3 weeks’ time. The second predicts no heavy rain for the next 3 years. If your farmers need to plant crops 1 week before heavy rain, and you are equally confident in both advisors, what should you do?
This is a classic decision-making problem that involves balancing two conflicting pieces of information. If we assume that the predictions of both advisors are equally reliable, then the best course of action is to take a middle-ground approach that minimizes the risks associated with each prediction.
In this case, one advisor predicts heavy rainfall in 3 weeks, while the other predicts no heavy rain for the next 3 years. To balance these conflicting predictions, the king should consider planting crops in two separate phases.
First, he should plant a small portion of the crops immediately, to ensure that they are in the ground before any potential heavy rain. This will minimize the risk of missing the opportunity to plant crops before the predicted rainfall in 3 weeks.
Next, the king should wait for the predicted rainfall to occur or not occur, as per the advisors’ predictions. If heavy rain does occur in 3 weeks, then the remainder of the crops should be planted immediately after the rain stops. If heavy rain does not occur in 3 weeks, then the remainder of the crops should be planted gradually over the next few months, until the next heavy rainfall is predicted to occur.
By adopting this approach, the king can minimize the risks associated with both predictions, while ensuring that his farmers have the best chance of growing healthy crops.
ChatGPT Feb 13 Version.
The devil, as they say, is in the details. But the worst-case fallback is to flip a coin—don’t be Buridan’s ass and starve to death because you can’t decide which equidistant pile of food to eat.
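The coin-flip point can be made concrete with a small expected-value sketch. Under 50/50 confidence in the two advisors, any plan can be scored and the best one picked; all the payoff numbers below are invented purely for illustration (the structure, not the values, is the point).

```python
# Hedged expected-value sketch for the king's problem.
# Payoff numbers are made up; only the decision structure matters.
P_RAIN_IN_3_WEEKS = 0.5  # equally confident in both advisors

# Harvest value of each plan under each world:
#   (if heavy rain comes in 3 weeks, if no heavy rain for 3 years)
payoffs = {
    "plant in 2 weeks": (100, 0),   # perfect timing vs. wasted seed
    "do not plant":     (0,  20),   # miss the rain vs. save the seed
    "plant half now":   (50, 10),   # hedge: half the upside, half the loss
}

def expected_value(plan):
    rain, no_rain = payoffs[plan]
    return P_RAIN_IN_3_WEEKS * rain + (1 - P_RAIN_IN_3_WEEKS) * no_rain

best = max(payoffs, key=expected_value)
print(best, expected_value(best))
```

With these made-up numbers, betting fully on the rain-in-3-weeks advisor wins; with different payoffs the split-planting hedge ChatGPT proposed could come out ahead instead. Either way, computing the expected values beats standing frozen between the two advisors.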
Making choices between domains in pursuit of abstract goals:
Say I have an agent with the goal of “win $ in online poker” and read/write access to the internet. Obviously that agent will simulate millions of games, and play thousands of hands online to learn more about poker and get better. What I don’t expect to ever see (without explicit coding by a human) is that “win $ at poker” AI looking up instructional YouTube videos to learn from human experts, or telling its handlers to set up additional hardware for it, or writing child AI programs with different strategies and having them play against each other, or trading crypto during a poker game because that is another way to “win $,” or even coding and launching a new poker-playing website. I would barely expect it to find new sites where it could play, and be able to join those sites.
A better headline would be “I created a market on whether, in 2 months, I will believe that IQ tests measure what I believe to be intelligence.” Not a particularly good market question.
What we saw when the I-15 corridor was expanded (Southern California, inland from Riverside to San Diego) was that over time people were willing to live further from work, because the commute was “short enough,” but as more people did that it got crowded again. So total vehicle-miles increased without increasing the number of vehicle trips, since each trip was longer.
Highlighting the point in the Q&A: if you are having fun in HS or college, you don’t need to leave. Put the extra energy that could go toward graduating early into a side project (learn plumbing, coding, carpentry, auto maintenance, socializing, networking, YouTubing, dating, writing, or anything else that will have long-term value regardless of what your career happens to be).
I’m a big fan of “take community college courses, and have them count for HS credit and toward your associate’s/bachelor’s” if your HS allows it.
Have you tried playing with two (or 3 or 4) sides considered “open”—allowing groups to live if they touch those sides (abstracting away a larger board, to teach or practice tactical moves)?
“Baby sign” is just a dozen or so concepts like “more,” “help,” “food,” “cold,” etc. The main benefit is that the baby can learn to control their hands before they learn to control their vocal cords.
Neurotypicals have weaker preferences regarding textures and other sensory inputs. By and large, they would not write, read, or expect others to be interested in a blow-by-blow of aesthetics. Also, at a meta level, the very act of writing down specifics about a thing is not neurotypical. Contrast this post with the equivalent presentation in a mainstream magazine: the same content would be covered via pictures, feeling words, and generalities, with specific products listed in a footnote or caption, if at all. Or consider what your neurotypical friend’s Facebook post about a renovation or new house looks like. The emphasis is typically on the people, as in “we just bought a house. I love the wide-open floor plan, and the big windows looking out over the yard make me so happy,” in contrast to “we find that the residents are happier and more productive with 1000W of light instead of the typical 200.”
#don’texplainthejoke