N-of-1 Blinded Experiment: Will 500mg of Inositol improve my energy levels?
Resolved NO (Jan 25)

I decided to try this experiment after reading this blog post by @ElizabethVanNostrand, which suggests that people with vague digestive and mood issues may benefit from supplementing inositol.

I will be undertaking a blinded trial with inositol using the following protocol:

1. Take 500mg inositol or a placebo (blinded) before starting work, only on regular work days.

2. At the end of my work day, give a subjective measurement of my energy levels throughout the day between 0-10.

3. Repeat for 20 work days.

(Protocol taken from here, which has sources: https://n1.tools/experiments/energy/inositol)

Resolution criteria: Resolves YES if a naive model (i.e. one that doesn't adjust for confounders etc.) suggests at least an 80% probability that consuming inositol increases my energy levels.

In other words: "Compared to placebo, a difference in energy between 0 and xx is at least 80% likely".

For reference, my previous market on whether L-Tyrosine improves focus used 20 data points and resulted in mean focus (placebo) of 7.4 and mean focus (L-Tyrosine) of 7.0. The model suggested that "Compared to placebo, a difference in focus between 0% and 30% is 19.4% likely". Image from that experiment below.

Extra notes:
- Starting this experiment today.

- I've been to a doctor twice for weird pains in my gut, both times they said I'm fine. I feel like I'm also quite lethargic day-to-day, but I'm not sure how that compares to the general population.
- I won't bet on this market.
- Doing this on work days only so that the environment, routine, etc. stay consistent.


----------------------------------
Model details (for those interested)

I'll model the posterior distributions of energy values under the placebo and under inositol, and then combine those to create a posterior distribution for the absolute difference in energy between the two. I'll be using a Bayesian model for this, which requires a prior to update. That prior will be centred on the mean energy for the placebo and inositol groups from the data itself (which is informative, and means the data will essentially determine the posterior distribution).

Priors:

Mean Inositol ~ Normal(mean_data, std_data)

Mean Placebo ~ Normal(mean_data, std_data)

StdDev Inositol ~ HalfNormal(5)

StdDev Placebo ~ HalfNormal(5)

Likelihoods:

Data Inositol ~ Normal(Mean Inositol, StdDev Inositol)

Data Placebo ~ Normal(Mean Placebo, StdDev Placebo)

Deterministic Transformation for Absolute Difference:

Absolute Difference = Mean Inositol - Mean Placebo

Posterior Inference:

Probability(Absolute Difference ≥ 0) ≥ 0.80
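As a rough sketch, the model above can be approximated in plain numpy by treating each group's standard deviation as known (fixed to the sample value) instead of giving it the HalfNormal(5) prior; then each group mean's posterior is approximately Normal(sample mean, sample std / √n). The energy scores below are made-up placeholders, not my actual data:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical daily energy scores (0-10), 10 blinded observations per arm
placebo = np.array([6.0, 7.0, 6.5, 7.5, 6.0, 7.0, 6.5, 6.0, 7.0, 6.5])
inositol = np.array([7.0, 7.5, 8.0, 6.5, 7.5, 8.0, 7.0, 7.5, 8.0, 7.0])

def posterior_mean_samples(data, n_draws=100_000):
    """Approximate posterior draws of the group mean, treating the sample
    standard deviation as known (a simplification of the full model,
    which puts a HalfNormal(5) prior on each group's standard deviation)."""
    m, s, n = data.mean(), data.std(ddof=1), len(data)
    return rng.normal(m, s / np.sqrt(n), n_draws)

# Posterior for the absolute difference, and the resolution check
diff = posterior_mean_samples(inositol) - posterior_mean_samples(placebo)
prob_increase = (diff > 0).mean()
print(f"P(inositol mean > placebo mean) ~ {prob_increase:.3f}")
print("Resolves YES" if prob_increase >= 0.80 else "Resolves NO")
```

The full model in the description would instead be fit with MCMC (e.g. in PyMC), but this known-variance approximation gives very similar answers at these sample sizes.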


Perhaps a slight increase in energy levels, but not enough to resolve this market "YES" (and I didn't really notice much difference).

Currently adding an Oura ring integration to n1.tools, hope to do an experiment on whether melatonin decreases my time taken to fall asleep (using Oura's latency metric) next!

I missed some days over the holidays, so extending the market by a week.

bought Ṁ100 of NO

20 days is not long enough for much power.

I ran a simulation with a frequentist t-test to calculate the probability of rejecting the null at a significance level of 0.2. For a Cohen's d of 0.25 (small), the probability of rejecting the null is about 0.25.

import numpy as np
from scipy.stats import ttest_ind

# Parameters
n_per_group = 10       # samples per arm (20 days total)
std_dev = 2            # standard deviation of daily scores
true_diff = 0.5        # true difference in means (Cohen's d = 0.25)
alpha = 0.2            # significance level
n_simulations = 10000  # number of simulated experiments

reject_null_count = 0
for _ in range(n_simulations):
    # Generate samples for each arm
    placebo = np.random.normal(0, std_dev, n_per_group)
    treatment = np.random.normal(true_diff, std_dev, n_per_group)

    # Two-sample t-test
    t_stat, p_value = ttest_ind(placebo, treatment)

    if p_value < alpha:
        reject_null_count += 1

# Estimated power: probability of rejecting the null
print(reject_null_count / n_simulations)

@XTXinverseXTY Hey, yeah that's a totally valid concern. N-of-1 studies can be inherently underpowered, since you're limited by time (how long you can keep the experiment going) rather than being able to, for example, recruit more participants to reach sufficient power.

Since ultimately I'm just making a decision on whether or not to continue taking the supplement, and I'm happy working with probabilities rather than requiring the experiment to reject the null hypothesis, I'm taking the approach outlined in the model details. I'll resolve the market YES if there's an 80% probability (by the naive model) of any increase in my energy levels due to the supplement after 20 observations. I'll probably use a similar criterion to decide whether I keep or stop taking inositol, or whether I want to gather more data.

@XTXinverseXTY (Separately, simulating the t-test for a given effect size was appreciated!)

predicted NO

@LuisCostigan

I actually don’t mean to criticize your design! In the spirit of making play money on Manifold, I just ran this simulation as a shorthand to figure out what the market price ought to be, given what I thought was a prior belief about the effect size. Then I shared my thesis to pull the consensus towards my belief.

I’ve expressed a similar sentiment in a Reddit thread, but I’m a huge fan of this project! I feel very strongly that barriers and inconveniences to performing N=1 experiments ought to be minimal. The Bayesian test design would even allow experimenters to pool information from past experiments, and achieve greater power.

@XTXinverseXTY Makes sense! Thanks for the comments and encouragement, I'll keep pushing on :)