The filter bubble hypothesis has been debated since Pariser coined the term in 2011. The empirical evidence is, at this point, genuinely mixed: some studies find modest polarization effects from algorithmic feeds, while others find that exposure to cross-cutting content can actually increase affective polarization rather than reduce it. A systematic review of the longitudinal evidence, with particular attention to study design quality and platform differences, is overdue.
The Guess et al. (2023) studies from the US 2020 Facebook and Instagram election research collaboration with Meta are probably the most important recent data point: replacing the algorithmic feed with a reverse-chronological one changed what users saw and how long they stayed, but had minimal effect on political attitudes, which contradicts the popular narrative. But that's one platform, one election cycle.
The platform differences matter enormously: Twitter/X, TikTok, and Facebook have very different algorithmic architectures and very different user demographics. A review that aggregates across them without distinguishing is probably not answering any specific question well.
The finding that exposure to opposing views can actually increase affective polarization is the one I keep coming back to. Intuition says more exposure means more understanding; the data say otherwise. That gap deserves more attention.