There’s a nice conventional categorisation of behaviour modification programmes that goes like this:
Fixed-ratio: a reward is given after a fixed number of nonreinforced responses (e.g. an M&M after every pomodoro, or even every fifth pomodoro).
Fixed-interval: a reward is given after a fixed interval of time (e.g. you might always set the pomodoro for 25 minutes as per convention).
Variable-ratio: a reward is given after a variable number of nonreinforced responses (e.g. you flip a coin after every pomodoro to decide whether you get an M&M).
Variable-interval: a reward is given after a variable time interval (i.e. you find some way to determine how long to set the pomodoro, perhaps with a lower bound).
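To make the four categories concrete, here's a minimal sketch of each schedule's "do I get an M&M?" rule. All the function names and the particular numbers (the ratio of 5, the 25-minute interval, the coin-flip probability, the 15-45 minute bounds) are my own illustrative choices, not anything from the original scheme.

```python
import random

def fixed_ratio(completed: int, n: int = 5) -> bool:
    """FR: reward after every nth completed pomodoro."""
    return completed % n == 0

def fixed_interval(minutes_elapsed: int, interval: int = 25) -> bool:
    """FI: reward once a fixed amount of time has passed."""
    return minutes_elapsed >= interval

def variable_ratio(p: float = 0.5) -> bool:
    """VR: reward after a variable number of responses -- here, a coin flip."""
    return random.random() < p

def variable_interval(lower: int = 15, upper: int = 45) -> int:
    """VI: pick a randomised pomodoro length, with a lower bound."""
    return random.randint(lower, upper)
```

The key structural difference is visible in the signatures: the ratio schedules depend on a count of responses, the interval schedules on elapsed (or chosen) time, and the variable variants replace a constant with a draw from some distribution.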
The schedule of reinforcement you're using is left a bit vague. It looks like you're following an FR schedule, but you could also be on an FI or VI schedule. For the purposes of offering advice to people who might want to try something like this, I'll assume you're using either FR or FI.
Psychologists categorise schedules in that way because they want to study the effects of differences in reinforcement. In particular they've been interested in the effects of changes in schedules on the extinction of a behaviour. One major result from the literature (reported in most psych textbooks that include a chapter on learning theories) is that variable schedules (using either ratios of responses or time intervals) are much more resistant to extinction than fixed schedules. As an example, consider a slot machine at a casino: it doesn't pay out one reward for every fixed nth play. Instead, the number of attempts between rewarded attempts varies, taking advantage of the much stronger reinforcement effect of variable schedules.
So my first piece of advice is: do not use fixed schedules. Varying the rate of reinforcement (either as a function of time or of completed pomodoros) will help make the good habit you're trying to build stick if your pomodoro use is ever disrupted (because you're busy, you somehow forget, or whatever).
Another result from the literature is that ratio schedules produce higher response rates. This occurs because faster responding increases the likelihood of being rewarded sooner, since ratio schedules depend not on time but on the number of attempts. In many situations you might want to take advantage of this and opt for a VR schedule (say, if you wanted to encourage a child to behave). In this case, though, it would probably only lead to extinction or abuse. Extinction because if your time intervals are somewhat long (say around 30-60 minutes), then the rewards might be given too infrequently to build your motivation and give you energy. Abuse because the big gaps between rewards might encourage you to cheat the system and eat some M&Ms anyhow because you want the energy.
That leads me to my second bit of advice: don’t use a VR schedule; instead vary the time interval. I suggest finding some way to randomise the selection (like rolling dice, throwing darts, or having an algorithm spit out a number) and putting a lower bound on the time intervals (to give yourself enough time to build some flow and focus).
All done.