Increasing IQ by 10 Points is Possible

A while ago I wrote about how I managed to add 13 points to my IQ (as measured by the mean of 4 different tests).

I had 3 “self-experimenters” follow my instructions in San Francisco. One of them dropped out, since, surprise surprise, the intervention is hard.

The other two saw increases of 11 and 10 IQ points respectively (using the “fluid” components of each test), or increases of 9 and 7 points respectively if we include verbal IQ.

A total of 7 people acted as controls and were given advantages on the tests compared to the intervention group, in order to exacerbate the effects of memorization and motivation; only 1 of them scored on par with the intervention group. We get a very good p-value, considering the small n, both when comparing the % change in control vs intervention (0.04) and when comparing the before/after values within the intervention group (0.006).

Working Hypothesis

My working hypothesis for this was simple:

If I can increase blood flow to the brain in a safe way (e.g. via specific exercises, specific supplements, and photostimulation in the NUV and NIR range)

And I can make people think “out of the box” (e.g. via specific games, specific “supplements”, specific meditations)

And prod people to think about how they can improve in whatever areas they want (e.g. via journaling, talking, and meditating)

Then you get this amazing cocktail of spare cognitive capacity suddenly getting used.

As per the last article, I can’t exactly give a step-by-step guide for how to do this, given that a lot of it is quite specific. I was rather lucky that 2 of my subjects were very athletic and “got it” quite fast in terms of the exercises they had to do.

The Rub

At this point, I’m confident all the “common sense” distillation of what people were experimenting with has been done, and the intervention still takes quite a while.

Dedicating 4 hours a day to something for 2 weeks is one thing, but given that we’re engaging in a form of training for the mind, the participants need to be not only present but actively engaged.

A core component of my approach is the idea that people can (often non-conceptually) reason through their shortcomings if given enough spare capacity, and reach a more holistic form of thinking.

I’m hardly the first to propose or observe this, though I’d like to think my approach is better-proven, entirely secular, and faster. Still, the main bottleneck remains convincing people to spend the time on it.

What’s next

My goal when I started thinking about this was to prove to myself that the brain and the mind are more malleable than we think, that relatively silly and easy things, to the tune of:

A few supplements and 3-4 hours of effort a day for 2 weeks, can change things that degrade with aging and are taken as impossible to reverse

Over the last two months, I have become quite convinced there is something here… I don’t quite understand its shape yet, but I want to pursue it.

At present, I am considering putting together a team of specialists (which is to say neuroscientists and “bodyworkers”), refining this intervention with them, and selling it to people as a 2-week retreat.

But there’s also a bunch of cool hardware coming out of doing this.

As well as a much better understanding of the way some drugs and supplements work… an understanding I could package together with the insanely long test-and-iterate decision tree for using these substances optimally (more on this soon).


There was some discussion and interest expressed by the Lighthaven team in the previous comment section about replicating this, and now that I have data from more people, I hope that follows through. It would be high-quality data from a trustworthy first party, and I’m well aware that at this point this should still trip the “quack” meter for most people.

I’m also independently looking for:

  1. People to help me get better psychometrics. The variance in my dataset is huge, and my tests mostly stop working at 3 standard deviations of IQ. I’d love to have one or two more comprehensive tests that remain sensitive up to 5 standard deviations.

  2. People to run independent analyses on the data, in whatever way they see fit. If you are a professor or otherwise system-recognized expert in the area, this would be especially useful. I think the analysis here is quite trivial and “just look at the numbers” is sufficient, but external validation also helps.

For now, I’m pretty happy to explain to anyone who wants to do this intervention themselves what it involved for me (for free; I want the data). My disclaimers are as follows:

I am not a doctor, and anything that I suggest might be unsafe; you do it at your own risk. I guarantee neither the results nor the safety profile of what I did.

I prefer to work with groups of 2 or 3 people.

I can’t be physically present to help you, but we can have a Zoom call every couple of days.

I expect you to bring 3 to 5 controls along for the ride; without them the data is much weaker. The more similar the controls are to you (in terms of environment and genetics), the better.

My current approach involves dedicating at least 3 to 4 hours of your day to this, wholeheartedly: in a way that’s consistent, involved, and enthusiastic.

The specialists you’ll need to hire and the hardware you’ll need to buy might well drive you past the $10k mark (for a group of 3 people) if you do this properly, and you might need a week of scouting to find the right people to work with you.

That being said, since a lot of people were excited to follow through with this last time, I am now putting this offer out there.

.

.

.

Confounder elimination

There are a few confounders in a self-experiment like this:

You are just taking people who are not supplementing or eating properly and making them use common-sense meals/supplements

You are taking people who don’t exercise and making them exercise; because exercise is magic, this will result in a positive change, but it’s boring (because exercise is hard)

You are making a tradeoff to increase performance on the IQ test (e.g. giving them caffeine and/or Adderall)

You are not taking into account memorization happening on the IQ tests

The subjects are “more motivated” to perform when redoing the tests

I have addressed all of these:

The subjects kept the same diet and the same supplement stack they used before; I only added 6 things on top. They are both pretty high up the food chain of supplement optimization: one ran 2 healthcare companies and worked with half a dozen more, and the other is his partner.

The subjects are both semi-professional athletes, exercising for > 2 hrs a day and able to run marathons and Ironmans.

The subjects’ HR and BP were monitored and no changes happened; no supplements whatsoever were taken for > 24 hrs before re-taking the IQ tests.

I had controls, and 2 of my controls took the tests 24 hours apart, to “maximize” memorization effects

I had controls that were being paid sums between $40 and $100 (adjusted to be ~2x their hourly pay rate) for every point of IQ gained upon retaking the tests.


So how do the numbers look after controlling for these?

Intervention mean increases: (11.2 [9%], 9.6 [8%], 12.6 [10%]) (mean of means: 11.1) - Average increase: 9.3%
Control mean increase: (14.2 [12%], 4.4 [3%], 8.8 [7%], 7.6 [6%], 5.2 [4%], 5.6 [5%], 3.2 [2%]) (mean of means: 7.0) - Average increase: 5.9%

Controlled mean increase: 4.1 (intervention mean of means minus control mean of means: 11.1 - 7.0)
    
Related T-test between the before/after means for the intervention: -12.846 (p=0.006)
Related T-test between the before/after means for the control: -5.015 (p=0.002)
Independent T-test between the before/after difference between intervention and control: -2.46 (p=0.04)
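For anyone who wants to sanity-check these, the comparisons above are just standard paired and independent t-tests. Below is a minimal sketch in Python using scipy; the before/after scores in it are made-up placeholders standing in for the real per-subject data (which isn’t published here), so the outputs will be close to, but not exactly, the reported statistics:

```python
# A minimal sketch of the comparisons reported above, not the author's actual script.
# The before/after scores below are hypothetical placeholders chosen only so that the
# per-subject increases roughly match the ones listed; the real per-test data is not
# published in this post, so the outputs will not match the reported values exactly.
from scipy import stats

# Hypothetical mean IQ scores per subject, before and after the 2-week intervention
intervention_before = [124, 120, 126]
intervention_after  = [135, 130, 139]   # increases of roughly 11, 10, 13 points

control_before = [118, 131, 126, 122, 129, 117, 133]
control_after  = [132, 135, 135, 130, 134, 123, 136]  # increases of roughly 14, 4, 9, 8, 5, 6, 3

# Related (paired) t-test: before vs. after within each group
print(stats.ttest_rel(intervention_before, intervention_after))
print(stats.ttest_rel(control_before, control_after))

# Independent t-test: per-subject change, control vs. intervention
intervention_delta = [a - b for a, b in zip(intervention_after, intervention_before)]
control_delta = [a - b for a, b in zip(control_after, control_before)]
print(stats.ttest_ind(control_delta, intervention_delta))
```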

I’d say pretty damn nice, given that the controls are going above and beyond in taking the tests under better conditions and with more incentives than the intervention. I am testing a worst-case scenario here, and even in that worst-case scenario roughly 1/3 of the finding holds.


My speculation is that most of the control data is just memorization or incentives. For one, the variance between controls is huge (and the p-values reflect this).

Second, let’s look at verbal IQ:

Intervention mean increases: (0.0 [0%], 5.0 [4%], -16.0 [-14%]) (mean of means: -3.7) - Average increase: -3.4%
Control mean increase: (18.0 [16%], 25.0 [25%], 14.0 [13%], 13.0 [10%], 2.0 [1%], 10.0 [8%], -5.0 [-4%]) (mean of means: 11.0) - Average increase: 10.2%

Controlled mean increase: -14.7 (intervention mean of means minus control mean of means: -3.7 - 11.0)
    
Related T-test between the before/after means for the intervention: 0.579 (p=0.621)
Related T-test between the before/after means for the control: -2.92 (p=0.027)
Independent T-test between the before/after difference between intervention and control: 2.032 (p=0.115)

So the fluid component has a +4.1 diff, and the verbal component (which we expect to be stable) has a −14.7 diff. That to me indicates the controls are “trying harder” or “memorizing better” in a way that the intervention group isn’t.


Overall this doesn’t matter; the finding is significant and of an unexpected magnitude either way.

But I do feel it’s important to stress that I am controlling for the worst-case scenario, and still getting an unambiguously positive result. This approach is not typical in science, where the control and intervention are equally matched, as opposed to the control being optimized to eliminate any and all potential confounders.