How much should we rewatch our favorite movies (or other media) vs keep trying new ones? Most people spend most of their viewing time on new movies, which is unlikely to be optimal. I suggest an explicit Bayesian model of imprecise ratings + enjoyment recovering over time, enabling Thompson sampling over movie watch choices.
When you decide to watch a movie, it can be tough to pick. Do you pick a new movie or a classic you watched before & liked? If the former, how do you pick from all the thousands of plausible unwatched candidate movies? If the latter, since we forget, how soon is too soon to rewatch? And, if we forget, doesn’t that imply that there is, for each individual, a ‘perpetual library’—a sufficiently large but finite number of items such that one has forgotten the first item by the time one reaches the last item, and can begin again?
I tend to default to a new movie, reasoning that I might really like it and discover a new classic to add to my library. Once in a while, I rewatch some movie I really liked, and I like it almost as much as the first time, and I think to myself, “why did I wait 15 years to rewatch this, why didn’t I watch this last week instead of movie X which was mediocre, or Y before that which was crap? I’d forgotten most of the details, and it wasn’t boring at all! I should rewatch movies more often.” (Then of course I don’t because I think “I should watch Z to see if I like it…”) Maybe many other people do this too, judging from how often I see people mentioning watching a new movie and how rare it is for someone to mention rewatching a movie; it seems like people predominantly (maybe 80%+ of the time) watch new movies rather than rewatch a favorite. (Some, like Pauline Kael, refuse to ever rewatch movies, and people who rewatch a film more than 2 or 3 times come off as eccentric or true fans.) In other areas of media, we do seem to balance exploration and exploitation more: people often reread a favorite novel like a Harry Potter novel, and everyone relistens to their favorite music countless times (perhaps too many times); so perhaps there is something about movies & TV series which biases us away from rewatches, which we ought to counteract with a more mindful approach to our choices. In general, I’m not confident I come near the optimal balance, whether it be exploring movies or music or anime or tea.
The tricky thing is that each watch of a movie decreases the value of another watch (diminishing marginal value), but in a time-dependent way: 1 day is usually much too short and the value may even be negative, but 1 decade may be too long; the movie’s entertainment value ‘recovers’ slowly and smoothly over time, like an exponential curve.
This sounds like a classic reinforcement learning (RL) exploration-exploitation tradeoff problem: we don’t want to watch only new movies, because the average new movie is mediocre, but if we watch only known-good movies, then we miss out on all the good movies we haven’t seen, and fatigue may make watching the known-good ones downright unpleasant.
In the language of optimal foraging theory (see ch. 4 of Foraging Theory, Stephens & Krebs 1986), we face a sequentially-dependent sampling patch problem, where the payoff of each patch can be estimated only by sampling each patch (before letting it recover) and where our choices will affect future choices; the usual marginal value theorem is of little help because we exhaust a ‘patch’ (each movie) before we know how we like it (as we can safely assume that no movie is so good that rewatching it twice in a row is superior to watching all other possible movies), and even if we could know, the marginal value theorem is known to over-exploit in situations of uncertainty because it ignores the fact that we are buying information for future decisions and not myopically greedily maximizing the next timestep’s return. Unfortunately, this is one of the hardest and thus least studied foraging problems, and Stephens & Krebs 1986 provides no easy answers (other than to note the applicability of POMDP-solving methods using dynamic programming, which is, however, usually infeasible).
One could imagine some simple heuristics, such as setting a cutoff for ‘good’ movies and then alternating between watching whatever new movie sounds the best (adding it to the good list if it beats the cutoff) and watching the good movie watched longest ago. This seems suboptimal because in a typical RL problem, exploration will decrease over time as most of the good decisions become known and it becomes more important to benefit from them than to keep trying new options, hoping to find better ones; one might explore using 100% of one’s decisions at the beginning but steadily decrease the exploration rate down to a fraction of a percent towards the end; in few problems is it optimal to keep eternally exploring on, say, 80% of one’s decisions. Eternally exploring on the majority of decisions would only make sense in an extremely unstable environment where the best decision constantly rapidly changes; this, however, doesn’t seem like the movie-watching problem, where typically if one really enjoyed a movie 1 year ago, one will almost always enjoy it now too. At the extreme, one might explore a negligible amount: if someone has accumulated a library of, say, 5000 great movies they enjoy, and they watch one movie every other night, then it would take them ~27 years to cycle through their library once, and of course, after 27 years and 4999 other engrossing movies, they will have forgotten almost everything about the first movie…
Better RL algorithms exist, assuming one has a good model of the problem/environment, such as Thompson sampling. This minimizes our regret in the long run by estimating the probability of being able to find an improvement, and decreasing its exploration as the probability of improvement decreases—because the data increasingly nails down the shape of the recovery curve and the true ratings of top movies, and enough top movies have been accumulated. The real question is the modelling of ratings over time.
The basic framework here is a longitudinal growth model. Movies are ‘individuals’ who are measured at various times on ratings variables (our personal rating, and perhaps additional ratings from sources like IMDB) and are impacted by events (viewings), and we would like to infer, for each movie, the posterior distribution of a hypothetical viewing today (to decide what to watch); movies which have been watched already can be predicted quite precisely based on their rating + recovery curve, but new movies are highly uncertain (and not affected by a recovery curve yet). I would start here with movie ratings. A movie gets rated 1–10, and we want to maximize the sum of ratings over time; we can’t do this simply by picking the highest-ever-rated movie, because once we watch it, it suddenly stops being so enjoyable; so we need to model some sort of drop. A simple parametric model would be to treat it as something like an exponential curve over time: gradually increasing and approaching the original rating but never reaching it (the magic of the first viewing can never be recaptured). (Why an exponential exactly, instead of a spline or something else? Well, there could be a hyperbolic aspect to the recovery where over the first few hours/days/weeks enjoyment resets faster than later on; but if the recovery curve is monotonic and smooth, then an exponential is going to fit it pretty well regardless of the exact shape of the spline or hyperbola, and one would probably require data from hundreds of people or rewatches to fit a more complex curve which can outpredict an exponential. Indeed, to the extent that enjoyment rests on memory, we might further predict that the recovery curve would be the inverse of the forgetting curve, and our movie selection problem becomes, in part, “anti-spaced repetition”: selecting datapoints to review so as to maximize forgetting.)
So each viewing might drop the rating by a certain number v, and then the exponential curve increases by r units per day; intuitively, I would say that on a 10-point scale, a viewing drops an immediate rewatch by at least 2 points, and then it takes ~5 years to almost fully recover, to within ±0.10 points (I would guess it takes less than 5 years to recover rather than more, so this estimate would bias towards new movies/exploration), so we would initially assign priors centered on v = 2 and r = (2 − 0.10) / (365*5) ≈ 0.001.
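Those rough prior guesses can be written down directly; a quick sanity check in R (variable names are mine, not part of any model code):

```r
## Prior guesses: a viewing knocks v = 2 points off the rating, and the
## deficit recovers to within 0.10 points over ~5 years of daily recovery.
v <- 2
full.recovery <- 365 * 5                # days to ~full recovery
r <- (v - 0.10) / full.recovery         # points recovered per day
signif(r, 2)                            # ≈ 0.001, matching the estimate above
v - r * 365                             # deficit still remaining after 1 year
```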
We could consider one simple model: movies have intrinsic ratings 010, are uniformly distributed, there is an infinite number of them, each time period one earns the rating of a selected movie, ratings are unknown until first consumption, ratings do not deplete or otherwise change, and the goal is to maximize reward. (This simplifies the problem by avoiding any questions about uncertainty or posterior updating or how much movies decrease in enjoyment based on rewatches at varying time intervals.) The optimal strategy is the simple greedy one: sample movies without replacement until one hits a movie rated at the ceiling of 10, and then select that movie in every time period thereafter. Since the reward is maximal and unchanging, there is never any reason to explore after finding a single perfect 10, so the optimal strategy is to find one as fast as possible, which reduces to pure exploration and then infinite exploitation.
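A minimal simulation of this toy model (a sketch, assuming integer ratings sampled uniformly from 0–10; the variable names are hypothetical): explore until a 10 turns up, then rewatch it forever.

```r
## Toy model: ratings are uniform integers 0-10, revealed on first watch,
## never depleting. Greedy strategy: sample new movies until a 10 appears,
## then select that movie in every time period thereafter.
set.seed(2019)
T.periods <- 1000
best <- -1; reward <- 0; n.explored <- 0
for (t in 1:T.periods) {
    if (best < 10) {                # still exploring: try a new movie
        rating <- sample(0:10, 1)
        n.explored <- n.explored + 1
        best <- max(best, rating)
    } else { rating <- 10 }         # found a perfect 10: exploit forever
    reward <- reward + rating
}
c(explored=n.explored, mean.reward=reward / T.periods)
```

With a 1⁄11 chance per draw, a 10 turns up after ~11 samples on average, after which the mean per-period reward converges to the ceiling of 10.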
How about a version where movie ratings are normally distributed, say 𝑁(5, 2.5), with no upper or lower bounds? This is more interesting, because the normal distribution is unbounded and so there will always be a chance to find a higher-rated movie which will earn a slightly greater reward in future time periods; even if one hits upon a 10 (+2SD) after sampling ~44 movies, there will still be a ~2% chance of hitting a >10 movie on the next sample. This is an order statistics question: the maximum of a sample of n normals follows a roughly logarithmic curve, with the probability of a new sample being the maximum always falling but never reaching zero (it is simply P = 1⁄n). Regret is problematic for the same reason: strictly speaking, regret is unboundedly large for all algorithms, since there is an arbitrarily larger rating somewhere in the tail. A pure strategy of always exploring performs badly because it receives merely an average reward of 5; it will find the most extreme movies, but by definition it never makes use of that knowledge. A pure strategy of exploiting the best known movie after a fixed number of exploratory samples n performs badly because it means sticking with, say, a 10-rated movie while a more adventurous strategy eventually finds 11- or 13- or 20-rated movies; no matter how big n is, there is another strategy which explores for n+1 samples and gets a slightly higher maximum & pays for the extra exploration cost. A mixed ε-greedy strategy of exploring a fixed percentage of the time performs better, since it will at least continue exploration indefinitely and gradually discover more and more extreme movies, but the insensitivity to n is odd: why explore the same amount regardless of whether the P of a new maximum is 1⁄10 or 1⁄10,000? So decreasing the exploration rate as some function of time, or P in this case, is probably optimal in some sense, as in a standard multi-armed bandit problem.
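The P = 1⁄n claim follows from exchangeability—each of n+1 iid draws is equally likely to be the largest—and is easy to check by simulation (a sketch; `p.new.max` is a hypothetical helper name):

```r
## Probability that draw n+1 from N(5, 2.5) exceeds the maximum of the
## first n draws: by symmetry, every draw is equally likely to be the
## maximum, so P = 1/(n+1) regardless of the distribution's parameters.
set.seed(2019)
p.new.max <- function(n, sims=20000) {
    mean(replicate(sims, { x <- rnorm(n + 1, mean=5, sd=2.5)
                           which.max(x) == n + 1 }))
}
round(c(simulated=p.new.max(9),  exact=1/10),  3)   # n = 9
round(c(simulated=p.new.max(99), exact=1/100), 3)   # n = 99
```

(The ~44 figure in the text is just the reciprocal of P(Z > +2SD) ≈ 1⁄44.)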
This problem can be made better defined and more realistic by setting a time limit/horizon, analogous to a human lifetime, and defining the goal as being to maximize the cumulative reward by the end; optimal behavior then leads to exploring heavily early on and decreasing exploration to zero by the horizon.
x(t) = v·e^(−t⁄τ); x(365×5) = 0.10 ⟹ τ = (365×5) ⁄ ln(v⁄0.10) ≈ 609 days
and then our model should fine-tune those rough estimates based on the data.

not a standard SEM latent growth curve model: varying measurement times

not a Hidden Markov model: categorical, stateless

not a simple Kalman filter: equivalent to AR(1)

a state-space model of some sort? dynamic linear model? AR(2)? (dlm, TMB, Biips?)
“State Space Models in R” https://arxiv.org/abs/1412.3779 https://en.wikipedia.org/wiki/Radioactive_decay#Half-life https://en.wikipedia.org/wiki/Kalman_filter
The forgetting curve is supposed to increase subsequent memory strength when the memory is reviewed/renewed on the cusp of forgetting. But you don’t want to actually forget the item. Does this imply that anti-spaced-repetition is extremely simple, as you simply need to estimate how long until it’s probably forgotten, and you don’t need to track any history or expand the repetition interval because the memory doesn’t get stronger?
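If so, the rule is a one-liner. Assuming an exponential forgetting curve with a fixed half-life and no strengthening (both assumptions, not established facts), just schedule the rewatch for when retention crosses a ‘probably forgotten’ floor:

```r
## Anti-spaced-repetition interval under exponential forgetting with a
## fixed half-life (values assumed for illustration): rewatch when
## retention = 0.5^(t / halflife) first drops below a chosen floor.
antiSR.interval <- function(halflife, floor.retention) {
    halflife * log(floor.retention, base=0.5)
}
antiSR.interval(halflife=3, floor.retention=0.10)   # ~10 years until 'forgotten'
```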
Is there a simple stochastic exploitation strategy like this?

MEMORIZE (a randomized spaced-repetition review algorithm derived using principles from control theory)
is rewatching far less harmful than I think? “Enjoy It Again: Repeat Experiences are Less Repetitive Than People Think”, O’Brien 2019
https://www.newyorker.com/magazine/2018/10/08/thecomfortingfictionsofdementiacare
Some years ago, a company in Boston began marketing Simulated Presence Therapy, which involved making a prerecorded audiotape to simulate one side of a phone conversation. A relative or someone close to the patient would put together an “asset inventory” of the patient’s cherished memories, anecdotes, and subjects of special interest; a chatty script was developed from the inventory, and a tape was recorded according to the script, with pauses every now and then to allow time for replies. When the tape was ready, the patient was given headphones to listen to it and told that they were talking to the person over the phone. Because patients’ memories were short, they could listen to the same tape over and over, even daily, and find it newly comforting each time. There was a séance-like quality to these sessions: they were designed to simulate the presence of someone who was merely not there, but they could, in principle, continue even after that person was dead.

“The universal decay of collective memory and attention”, Candia et al 2019 (on fads in general)

A simple demo of the 5000-films example, ignoring uncertainty, to see what the exploitation pattern looks like:
T <- 30000
dfMovies <- data.frame(ID=integer(T/10), MaxRating=numeric(T/10), CurrentRating=numeric(T/10), T.since.watch=integer(T/10))
dfWatched <- data.frame(ID=integer(T), Total.unique=integer(T), New=logical(T), CurrentRating=numeric(T), Reward=numeric(T))
currentReward <- 0
cumulativeReward <- 0
lastWatched <- NA
priorDistribution <- function() { min(10, rnorm(1, mean=7.03, sd=2)) } # based on my MAL ratings
logistic <- function(max, t) { t <- t/365; max * (1 / (1 + exp(-1 * (t - 0)))) }
## Imagine each movie is cut in half, and then recovers over ~5 years:
# plot(logistic(10, 1:(365.25*5)), xlab="Days", ylab="Rating", main="Recovery of movie after initial watch (~full recovery in 5y)")
for (t in 1:T) {
    dfMovies$T.since.watch <- dfMovies$T.since.watch + 1
    dfMovies$CurrentRating <- logistic(dfMovies$MaxRating, dfMovies$T.since.watch)
    posteriorSample <- priorDistribution()
    threshold <- max(dfMovies$CurrentRating, 0)
    if (posteriorSample > threshold) {
        ## Explore: watch a new movie, whose realized rating is a fresh draw
        ID.new <- max(dfMovies$ID, 0) + 1
        posteriorSample <- priorDistribution()
        currentReward <- posteriorSample
        cumulativeReward <- cumulativeReward + currentReward
        dfMovies[ID.new,]$ID=ID.new; dfMovies[ID.new,]$MaxRating=posteriorSample; dfMovies[ID.new,]$CurrentRating=posteriorSample; dfMovies[ID.new,]$T.since.watch=0
        dfWatched[t,]$ID=ID.new; dfWatched[t,]$New=TRUE; dfWatched[t,]$Total.unique=sum(dfWatched$New); dfWatched[t,]$CurrentRating=posteriorSample; dfWatched[t,]$Reward=cumulativeReward
    } else {
        ## Exploit: rewatch the best currently-recovered movie & reset its clock
        ID.current <- dfMovies[which.max(dfMovies$CurrentRating),]$ID
        rating <- dfMovies[which.max(dfMovies$CurrentRating),]$CurrentRating
        dfMovies[which.max(dfMovies$CurrentRating),]$T.since.watch <- 0
        currentReward <- rating
        cumulativeReward <- cumulativeReward + currentReward
        dfWatched[t,]$ID=ID.current; dfWatched[t,]$New=FALSE; dfWatched[t,]$Total.unique=sum(dfWatched$New); dfWatched[t,]$CurrentRating=rating; dfWatched[t,]$Reward=cumulativeReward
    }
}
tail(dfWatched)
plot(1:T, dfWatched$Total.unique / 1:T, ylab="Proportion of unique movies to total watches", xlab="Nth watch", main="Balance of new vs old movies over 30k sessions (~82 years)")
plot(dfWatched[!dfWatched$New,]$CurrentRating, ylab="Instantaneous movie rating", xlab="Nth watch", main="Average rating over 30k sessions (~82y)")
The simulation acts as expected: even with movies being ‘depleted’ and gradually recovering over time (following a logistic curve), if you accumulate a big enough pool of great movies, you gradually explore less and less & rewatch more, because the number of good movies that have recovered is steadily increasing.
Posterior sampling eventually wins over various epsilon-greedy 10–100% new-movie exploration strategies:
watchStrategy <- function(f) {
    T <- 30000
    dfMovies <- data.frame(ID=integer(T/10), MaxRating=numeric(T/10), CurrentRating=numeric(T/10), T.since.watch=integer(T/10))
    dfWatched <- data.frame(ID=integer(T), Total.unique=integer(T), New=logical(T), CurrentRating=numeric(T), Reward=numeric(T))
    currentReward <- 0
    cumulativeReward <- 0
    lastWatched <- NA
    priorDistribution <- function() { min(10, rnorm(1, mean=7.03, sd=2)) } # based on my MAL ratings
    logistic <- function(max, t) { t <- t/365; max * (1 / (1 + exp(-1 * (t - 0)))) }
    for (t in 1:T) {
        dfMovies$T.since.watch <- dfMovies$T.since.watch + 1
        dfMovies$CurrentRating <- logistic(dfMovies$MaxRating, dfMovies$T.since.watch)
        posteriorSample <- priorDistribution()
        threshold <- max(dfMovies$CurrentRating, 0)
        if (f(posteriorSample, threshold)) {
            ID.new <- max(dfMovies$ID, 0) + 1
            posteriorSample <- priorDistribution()
            currentReward <- posteriorSample
            cumulativeReward <- cumulativeReward + currentReward
            dfMovies[ID.new,]$ID=ID.new; dfMovies[ID.new,]$MaxRating=posteriorSample; dfMovies[ID.new,]$CurrentRating=posteriorSample; dfMovies[ID.new,]$T.since.watch=0
            dfWatched[t,]$ID=ID.new; dfWatched[t,]$New=TRUE; dfWatched[t,]$Total.unique=sum(dfWatched$New); dfWatched[t,]$CurrentRating=posteriorSample; dfWatched[t,]$Reward=cumulativeReward
        } else {
            ID.current <- dfMovies[which.max(dfMovies$CurrentRating),]$ID
            rating <- dfMovies[which.max(dfMovies$CurrentRating),]$CurrentRating
            dfMovies[which.max(dfMovies$CurrentRating),]$T.since.watch <- 0
            currentReward <- rating
            cumulativeReward <- cumulativeReward + currentReward
            dfWatched[t,]$ID=ID.current; dfWatched[t,]$New=FALSE; dfWatched[t,]$Total.unique=sum(dfWatched$New); dfWatched[t,]$CurrentRating=rating; dfWatched[t,]$Reward=cumulativeReward
        }
    }
    return(dfWatched$Reward)
}
posterior <- watchStrategy(function(a,b) { a > b })
watch.1  <- watchStrategy(function(a,b) { runif(1) < 0.10 })
watch.2  <- watchStrategy(function(a,b) { runif(1) < 0.20 })
watch.3  <- watchStrategy(function(a,b) { runif(1) < 0.30 })
watch.4  <- watchStrategy(function(a,b) { runif(1) < 0.40 })
watch.5  <- watchStrategy(function(a,b) { runif(1) < 0.50 })
watch.6  <- watchStrategy(function(a,b) { runif(1) < 0.60 })
watch.7  <- watchStrategy(function(a,b) { runif(1) < 0.70 })
watch.8  <- watchStrategy(function(a,b) { runif(1) < 0.80 })
watch.9  <- watchStrategy(function(a,b) { runif(1) < 0.90 })
watch.10 <- watchStrategy(function(a,b) { runif(1) < 1.00 })
df <- rbind(data.frame(Type="posterior", Reward=posterior, N=1:30000), data.frame(Type="10%", Reward=watch.1, N=1:30000), data.frame(Type="20%", Reward=watch.2, N=1:30000), data.frame(Type="30%", Reward=watch.3, N=1:30000), data.frame(Type="40%", Reward=watch.4, N=1:30000), data.frame(Type="50%", Reward=watch.5, N=1:30000), data.frame(Type="60%", Reward=watch.6, N=1:30000), data.frame(Type="70%", Reward=watch.7, N=1:30000), data.frame(Type="80%", Reward=watch.8, N=1:30000), data.frame(Type="90%", Reward=watch.9, N=1:30000), data.frame(Type="100%", Reward=watch.10, N=1:30000))
library(ggplot2)
qplot(N, Reward, color=Type, data=df)
TODO:

use the rating resorter to convert my MAL ratings into a more informative uniform distribution

MAL average ratings for unwatched anime should be standardized based on MAL mean/SD (in part because the averages aren’t discretized, and in part because they are not comparable with my uniformized ratings)
Decay Period
Hm. So let’s look at some empirical examples:

That Dresden Codak comic about Owlamoo: I don’t remember any of it, and that was almost exactly a decade ago.

I remembered very little of Doctor McNinja when I reread it in 2017 after starting it in 2008, so <9 years for most of it.

I just began a reread of, and am really enjoying, Yotsuba&!, and I read that sometime before 2011 according to my IRC logs, so that was at least 8 years ago. I also reread Catch-22, Fourth Mansions, and The Discovery of France relatively recently with similar results, but I don’t think I know when exactly I read them originally (at least 5 years for each)

in 2011 I happened to watch Memento but then my sister’s friend arrived after we finished and insisted we watch it again, so I watched Memento a second time; it was surprisingly good on rewatch.

The Tale of Princess Kaguya: 2016-03-05, rewatched 2017-04-29, and I was as impressed the second time

2019; randomly reminded of it, I reread The Snarkout Boys and the Avocado of Death, which was a book I read in elementary school and thought was great, ~21 years before; but aside from a late-night movie theater being a major location, I remembered nothing else about it, not even what avocados had to do with anything (but I did enjoy it the second time)

2019: I rewatched Hero, from ~2005; after 14 years, I had forgotten every twist in the plot other than the most bare-bones outline and a few stray details, like ‘scenes have different color themes’ and that the hero is executed by archers.

2019, September: rewatched Redline from 8 years prior, late 2011 (2019-09-23 vs 2011-10-20); like Hero, I can remember only the most high-level overview and nothing of the details of the plot or animation; the rewatch is almost as good as the first time

2019, November: rewatched Tatami Galaxy, first seen in April 2011, ~103 months previously; while I remembered the esthetic and very loosely some plot details like ‘a cult was involved’, major plot twists still took me by surprise

2019, November: rewatched Porco Rosso, second or third time, last seen sometime before 2009 or >120 months previously; vague recall of the first half, but forgot the second half entirely

2020, January: reread “The Cambist and Lord Iron” fantasy short story, last read 2014, ~6 years or 72 months previously; of the 3 sub-stories, I remembered the first one mostly, the second one partially, and the third one not at all.

2020, 17 April: rewatched Madame Butterfly Met HD opera broadcast; previously, 2019-11-09, so ~161 days or ~5.2 months; recalled almost perfectly

2020, July: rewatched Azumanga Daioh, last seen ~2005 (so ~15 years or ~5400 days); of the first 2 episodes, I remembered about half the characters vaguely, and almost none of the jokes or plots, so effectively near-zero

2020-09-16: “Chili and the Chocolate Factory: Fudge Revelation” fanfiction: read first chapter, skimmed another, noted that the chat log dialogues seemed familiar, and checked—I’d previously read the first 3 chapters on 2019-12-23, 268 days / 8.6 months before, but forgot all of the plot/characters/details

2020-11-14: rewatched Akhnaten opera, which I’d watched on 2019-11-23, almost exactly 1 year before; recall of events very high.

2021-08: began catching up on Blade of the Immortal; I’d read up to volume 16 by ~2009, so a delay of ~11 years or 132 months; retention: aside from the basic premise of the two main characters, near-zero—I didn’t even remember who the antagonist was, and was surprised by most of the first 5 volumes.

2021-08: realized Madoka passed its 10-year anniversary; I had watched it during the original April 2011 airing & enjoyed it, so a good time to revisit. I thoroughly enjoyed it, although my recall was contaminated by the fact that I’ve consumed a lot of ancillary Madoka-related material like the sequel movie Rebellion—so the overall plot remains quite clear in my mind, even if many of the details were hazy. (Madoka is also a series that relies heavily on foreshadowing & loops, so my rewatch brought me many surprises that I could not have appreciated the first time.)

2022-08-25: revisited The Quantum Thief trilogy while sick; last read 2014-05, so an interval of 8.25 years or 99 months; I had thought back then that it would be much more readable the second time, when one has at least a basic understanding of Hannu’s worldbuilding (which is presented in extreme in medias res), which was true; but also, like most of the instances so far, I remembered only the broadest outlines of the plot and the primary character, few of the secondary characters, and found it even more gripping than the first time (whatever damage familiarity does was compensated by being able to follow everything instead of being confused & mystified)

2022-10-15: rewatched Blade Runner 2049 after watching it in theaters 2017-10-18 (1,824 days later, or 60 months or 5 years); remembered the overall plot but not the sequencing, the details of the twist, or whether Harrison Ford dies at the end, so it was fairly gripping.
So this suggests that the full decay period is somewhere on the order of a decade, and the half-life is somewhere around a few years (if we figure that once a work has dropped to ~10% retention it’s basically gone, then ~10 years would imply ~3.3 half-lives, ie. a half-life of ~3 years). I could remember a fair amount of the 3 books, but I don’t know what those intervals are. For the others, the intervals are all around 9–11 years, and retention is near-zero: of course I remember the characters in Yotsuba&!, but I remember almost none of the stories, so it’s practically brand new.
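The half-life arithmetic, assuming clean exponential decay (a simplification of the messy anecdotal data above):

```r
## If retention is ~10% after ~10 years, then 0.10 = 0.5^(10 / halflife),
## so the decade spans ~3.3 half-lives of ~3 years each.
n.halflives <- log(0.10, base=0.5)   # half-lives elapsed at 10% retention
halflife <- 10 / n.halflives         # years per half-life
round(c(n.halflives=n.halflives, halflife=halflife), 2)
```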
But, this decade after the log [started], I have the ability, if I want to, to recall almost any day in startling clarity. It’s all there. Well, okay, there are limits to this. The “startling clarity” extends back at least three years. Then it starts to get fuzzier. And some days are just going to be “same old, same old” routine, let’s face it. But the log makes a colossal difference.