X Algorithm GitHub Explained: How X Builds Your “For You” Feed End To End

Introduction

X finally did the thing every social platform says is “too sensitive” to share: it published real, runnable code for its For You feed stack. Not a glossy blog post. Not a diagram with inspirational arrows. A repo you can clone, skim, and argue with like an adult.

If your feed has ever made you mutter, “Why am I seeing this?” you already understand the stakes. People keep circling the same questions: Is this the whole system or just plumbing? Where are the weights? Why do links die? Why does rage bait sneak in when you never asked for it? Is there shadowbanning?

In that sense, X Algorithm is less a secret sauce and more a readable system, which is rare in social media.

This is a practical tour of what the repo shows, what it can’t show, and how to read it without turning everything into a conspiracy. We’ll walk the request path the way an engineer would, from candidate sourcing to final visibility filtering, then end with a short playbook you can actually use.

1. What Changed In 2026 And Why This Repo Matters

The big shift is that X Algorithm is now presented as a living codebase, with distinct components for retrieval, ranking, and filtering. The 2023 era disclosures were useful, but they aged fast because the real system kept evolving behind the scenes. This 2026 release is closer to an operational map. It tells you what happens during a feed request, not just what the company says happens.

Treat this like transparency with limits. You get pipeline logic. You don’t automatically get the trained model’s “brain” or the policy organization chart.

X Algorithm Repo Reality Check Table

A quick, readable map of what the open repo clarifies, and what it cannot prove.

Question People Ask | What The Repo Helps You Verify | What It Still Can’t Prove
“Is this the real algo?” | The serving pipeline stages and component boundaries | Exact outcomes without weights, data, and replay tooling
“Where are the weights?” | Where weights plug into scoring | The numeric values if they’re not shipped
“Why am I seeing politics?” | How out of network retrieval enters ranking | The exact causal chain for a single post
“Is there shadowbanning?” | Where visibility filtering happens | Thresholds, classifiers, and moderation operations
“Why do links die?” | Whether there’s an explicit URL rule | The model’s learned engagement preferences

The repo also makes a strong claim: fewer hand engineered features, more learned behavior. That is a bet on models over rules, and it affects everything downstream.

2. What The Repo Contains, And What It Doesn’t

Here’s what X Algorithm exposes in plain sight:

  • Home Mixer: the orchestrator that assembles the feed.
  • Thunder: in network post storage and retrieval.
  • Phoenix: out of network retrieval plus ranking with a transformer.
  • Candidate Pipeline: a framework for Sources, Hydrators, Filters, Scorers, and Selection.

And here’s what you should not assume is included just because the code is public:

  • Transformer weights and embedding tables, unless explicitly shipped.
  • Training data, labeling, evaluation, and safety model pipelines.
  • Every policy decision that affects distribution across the platform.

So yes, X Algorithm is “the algorithm” in the sense that it reveals the serving machinery. No, it is not a complete, reproducible simulator of your feed without the missing artifacts.

3. System Architecture Tour, Home Mixer Runs The Show

X Algorithm Home Mixer pipeline infographic

A request through X Algorithm is a clean pipeline: hydrate user context, fetch candidates from multiple sources, enrich them, filter ineligible items, score with the model, apply diversity and out of network tuning, select top K, then run post selection visibility filtering.
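That flow can be sketched as one function chain. Everything below is illustrative: the stage names, signatures, and ordering are my shorthand for the stages described above, not the repo’s actual identifiers.

```python
# Illustrative sketch of a feed request, stage by stage.
# All names are hypothetical; the real Home Mixer code differs.

def build_feed(user, sources, hydrate, filters, score, shape, k, visibility_ok):
    # 1. Fetch candidates from every source (Thunder, Phoenix, ...)
    candidates = [post for source in sources for post in source(user)]
    # 2. Enrich each candidate with the context the ranker needs
    candidates = [hydrate(post) for post in candidates]
    # 3. Drop ineligible items before scoring
    candidates = [p for p in candidates if all(f(user, p) for f in filters)]
    # 4. Score with the model, then apply diversity / OON shaping
    scored = shape([(score(user, p), p) for p in candidates])
    # 5. Select top K by score
    top_k = sorted(scored, key=lambda sp: sp[0], reverse=True)[:k]
    # 6. Post-selection visibility filtering is the last gate
    return [p for s, p in top_k if visibility_ok(p)]
```

The useful property of writing it this way: every complaint about the feed maps to exactly one of the six steps.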

Home Mixer matters because it is the choke point where all debates converge. If you want to ask where a given behavior would be implemented, the answer is usually “in Home Mixer, via the candidate pipeline stages.”

4. Candidate Sourcing Part 1, Thunder And In Network Posts

Thunder is the “accounts you follow” engine. It consumes post events, keeps recent content in memory, and returns in network candidates fast. Think of it as a high speed buffet. It doesn’t guarantee you eat everything, it just makes it possible to put the right plates on the table.
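In that spirit, a toy version of a recency-bounded in-memory store looks something like this. The class, its fields, and the per-author cap are hypothetical stand-ins, not Thunder’s actual data structures:

```python
from collections import deque

class RecentPostStore:
    """Toy store in the spirit of Thunder: keep only recent posts per
    author so in-network retrieval is fast. Illustrative only."""

    def __init__(self, max_per_author=100):
        self.by_author = {}
        self.max_per_author = max_per_author

    def ingest(self, author, post):
        # Newest posts first; old posts fall off the end automatically.
        q = self.by_author.setdefault(author, deque(maxlen=self.max_per_author))
        q.appendleft(post)

    def in_network_candidates(self, follows, limit=50):
        # Union of recent posts from every followed account.
        out = []
        for author in follows:
            out.extend(self.by_author.get(author, ()))
        return out[:limit]
```

Note what this does and doesn’t promise: fast candidate availability, not placement in anyone’s feed.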

This is why “my followers don’t see my posts” has multiple explanations inside X Algorithm. Your post can be retrieved, then lose on scoring, lose on diversity attenuation, or get filtered out for eligibility. Retrieval is necessary, but not sufficient.

5. Candidate Sourcing Part 2, Phoenix Retrieval And Out Of Network Discovery

Out of network is where feeds turn into discovery engines. Phoenix retrieval is described as a two tower approach: a user embedding from your history, post embeddings from the global corpus, then top K via similarity search.
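A minimal brute-force version of that retrieval step, assuming the two towers have already produced the embeddings (production systems use approximate nearest-neighbor indexes, not a full scan):

```python
import numpy as np

def top_k_by_similarity(user_vec, post_vecs, k=3):
    """Two-tower retrieval sketch: one tower produced user_vec, another
    produced post_vecs; retrieval is nearest-neighbor search between them.
    Brute force for illustration only."""
    # Cosine similarity between the user embedding and every post embedding
    u = user_vec / np.linalg.norm(user_vec)
    p = post_vecs / np.linalg.norm(post_vecs, axis=1, keepdims=True)
    sims = p @ u
    # Indices of the k most similar posts, best first
    return np.argsort(-sims)[:k]
```

Everything downstream of this call is ranking; the retrieval decision itself is just “who are your nearest neighbors in embedding space.”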

This is the most common source of “I’m seeing content outside my interests.” In X Algorithm, the model learns from your actions, not your stated preferences. A long dwell on a thread you hate still looks like attention. Attention is training signal. Training signal becomes retrieval neighbors.

That’s not moral judgment, it’s just geometry.

6. Candidate Hydration, Making Posts Legible To The Ranker

Hydration shows up again, now for candidates. The system fetches core metadata, author info, media entities, video duration, subscription status, and similar context.
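A hydration step in this spirit is just metadata attachment. Every field name below (`has_media`, `is_link_post`, and so on) is an assumption for illustration, not a field from the repo:

```python
def hydrate(candidate, author_db, media_db):
    """Hydration sketch: attach the metadata that filters and the ranker
    will need. Field names are illustrative, not the repo's."""
    enriched = dict(candidate)
    enriched["author"] = author_db.get(candidate["author_id"], {})
    media = media_db.get(candidate["post_id"])
    enriched["has_media"] = media is not None
    enriched["video_duration_s"] = (media or {}).get("video_duration_s")
    # Making post type explicit is what lets a model "see" link posts.
    enriched["is_link_post"] = "url" in candidate
    return enriched
```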

This stage matters for the “links are punished” debate. X Algorithm doesn’t need a hand coded URL penalty if the model can see “this is a link post” and it has learned that link posts tend to produce less on platform engagement. Hydration makes post type explicit. The model does the rest.

7. Pre Scoring Filters, The Quiet Place Where Posts Disappear

If ranking feels like probability, filtering feels like a hard no.

The repo lists filters that drop duplicates, old posts, self posts, blocked or muted authors, muted keywords, already seen items, already served items, and ineligible subscription content. In other words, the platform has a ton of ways to remove content before the model even scores it.
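Conceptually, these filters are predicates, and eligibility is the conjunction of all of them. A minimal sketch, with made-up field names and a made-up age threshold:

```python
# Pre-scoring filters as simple predicates: a candidate survives only
# if every predicate returns True. All names and thresholds are illustrative.

def not_too_old(user, post, max_age_s=48 * 3600):
    return post["age_s"] <= max_age_s

def author_not_blocked(user, post):
    return post["author_id"] not in user["blocked"]

def not_already_seen(user, post):
    return post["post_id"] not in user["seen"]

def apply_filters(user, candidates, filters):
    # Eligibility check, not ranking: one failed predicate is a hard no.
    return [p for p in candidates if all(f(user, p) for f in filters)]
```

This is why filtering feels binary while ranking feels probabilistic: a filter never “slightly” removes a post.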

Here’s the important distinction: filtering is eligibility, moderation is policy enforcement, ranking is ordering. X Algorithm has all three concepts, and confusing them is how people talk past each other.

8. Ranking Core, Phoenix Uses A Grok Based Transformer

X Algorithm Phoenix ranking and attention masking

Phoenix scoring is described as a transformer ported from the xAI Grok open source release, adapted for recommendation. That link is not just branding. It implies shared architectural ideas: sequence modeling, attention, and learned representation, pointed at engagement prediction instead of next token prediction.

The extra engineering detail that screams “production system” is candidate isolation. Candidates cannot attend to each other. Attention masking keeps scores stable, so a post doesn’t get a different score just because it was batched next to a spicy neighbor. Stability makes caching and debugging possible, and it makes the system feel less haunted.
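One way to picture candidate isolation is as a boolean attention mask: rows are query positions, columns are key positions, and a candidate’s row is `True` only for the user context and itself. This layout is my illustration, not the model’s actual masking code:

```python
import numpy as np

def isolation_mask(n_context, n_candidates):
    """Attention mask sketch for candidate isolation: every position can
    attend to the shared user context, and each candidate can attend to
    itself, but no candidate can attend to another candidate."""
    n = n_context + n_candidates
    mask = np.zeros((n, n), dtype=bool)
    mask[:, :n_context] = True        # all positions see the user context
    for i in range(n_candidates):
        j = n_context + i
        mask[j, j] = True             # each candidate sees only itself
    return mask
```

With a mask like this, a candidate’s score depends on the user context and the candidate alone, which is exactly the batching-independence property the repo is after.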

So when people say X Algorithm is “just plumbing,” point them here. This is the core ranking brain, at least at the interface level.

9. Multi Action Prediction, Replies Are Not The Whole Story

The model predicts probabilities for many actions: likes, replies, reposts, clicks, dwell, follows, and also negative actions like not interested, block, mute, and report. Then a weighted scorer turns that probability vector into a single score.

This answers two recurring questions about X Algorithm:

  • Yes, blocks, mutes, and reports can be explicitly modeled, and can push a post down.
  • No, “the score” is not one magic relevance number, it’s a weighted blend of many forecasts.

Are replies weighted more than likes? The code can show where that decision lives, in the weight vector. If the weights aren’t shipped, you can’t settle the argument from the repo alone. But you can now point to the exact place where the product philosophy becomes math.
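The shape of that weighted blend is easy to sketch. The weights below are invented for illustration, including the sign convention that makes negative actions subtract:

```python
# Weighted scoring sketch: the model emits a probability per action, and
# a weight vector turns that vector into one number. These weights are
# made up; the real values are exactly what the repo may not ship.

ACTION_WEIGHTS = {
    "like": 1.0,
    "reply": 5.0,        # hypothetical: replies valued above likes
    "repost": 3.0,
    "dwell": 0.5,
    "not_interested": -10.0,
    "block": -50.0,
    "report": -50.0,
}

def final_score(action_probs):
    # Negative actions carry negative weights, so a high predicted
    # block probability actively pushes a post down.
    return sum(ACTION_WEIGHTS.get(a, 0.0) * p for a, p in action_probs.items())
```

Swap in different weights and you get a different product philosophy; the code stays identical.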

10. Weighted Scoring, Diversity, And Out Of Network Tuning

After scoring, additional shaping happens. The repo describes an Author Diversity Scorer that attenuates repeated authors, plus an out of network scorer that adjusts OON content.

This is where “stay in your niche” collides with “keep the feed from becoming one account’s megaphone.” X Algorithm is balancing two goals: relevance and variety. If you only optimize relevance, you get monotony. If you only optimize variety, you get whiplash.

Diversity isn’t censorship. It’s a guardrail. It can still feel annoying, because guardrails are annoying when you’re driving fast.
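A toy version of author attenuation: walk candidates best-first and multiply each repeat appearance of an author by a decay factor. Both the mechanism and the decay value are assumptions, not the repo’s formula:

```python
def attenuate_repeated_authors(scored, decay=0.5):
    """Author diversity sketch: each additional post from the same author
    gets its score multiplied by a decay factor. Decay value and mechanism
    are illustrative assumptions."""
    seen = {}
    out = []
    for score, post in sorted(scored, key=lambda sp: sp[0], reverse=True):
        n = seen.get(post["author_id"], 0)
        out.append((score * (decay ** n), post))
        seen[post["author_id"]] = n + 1
    return out
```

The effect: an author’s best post keeps its score, while their second and third posts have to beat everyone else at a handicap.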

11. Post Selection Visibility Filtering, Safety And The Shadowban Question

Visibility filtering runs after selection. It removes content that should not be shown for reasons like deletion, spam, violence, and gore, plus other policy categories.

This is where the “shadowban” conversation should become precise. X Algorithm can hide content in at least three ways:

  1. Filter it out before scoring.
  2. Rank it low enough that it never reaches top K.
  3. Remove it after selection via visibility filtering.

Those three outcomes feel identical on the client. They are not the same mechanism.

Also, pipeline code alone can’t prove sweeping suppression claims. Policy thresholds, classifier behavior, and enforcement ops often live outside the visible serving logic. Open code raises the ceiling on transparency. It doesn’t remove the need for observability.

12. Practical Takeaways, Fix Your Feed And Post With The Pipeline In Mind

X Algorithm reset steps on phone and checklist

If you want levers, you need to act where X Algorithm learns: your explicit feedback and your attention patterns.

12.1 Resetting The Feed Without Magic Buttons

People keep searching for how to reset the X algorithm because the feed drifts. The realistic reset is simple and a little annoying: you retrain it with better signals.

  • Tap “Not interested” early, and often.
  • Mute keywords that keep resurfacing.
  • Block or mute repeat offenders.
  • Stop doom scrolling, long dwell is still a vote.
  • Follow more of what you actually want, in network needs a good follow graph.

Those actions map cleanly to the negative targets in scoring and the filters in the pipeline.

12.2 Links, Rage Bait, And Optimization Pressure

Links often underperform because they produce fewer likes and shorter dwell. Rage bait often overperforms because it produces replies and long dwell, even from people who hate it. That’s not a secret switch. It’s optimization pressure.

If you post links, front load context, add a visual, ask a specific question. If you want less rage bait, stop feeding it attention and use explicit negative feedback.

12.3 Complaint To Pipeline Map

X Algorithm Troubleshooting Cheatsheet

Match what you’re seeing to the most likely pipeline stage, then try the simplest lever first.

What You Notice | Most Likely Stage In X Algorithm | What To Try
“I never see someone I follow” | Scoring or diversity attenuation | Engage with them, check mutes, enrich your follows
“Out of network took over” | Phoenix retrieval | Use “Not interested,” stop dwell scrolling
“Links flop” | Learned action prediction | Add context, visuals, invite replies
“I think I’m shadowbanned” | Filters, scoring, or visibility filtering | Check blocks and keywords, test with controlled audiences

If you came here searching how does x algorithm work, you now have the actual flow. If you came here for x algorithm open source, the answer is yes, and the useful part is the pipeline, not the mythology.

Closing, Read X Algorithm Like A System, Not A Spell

The healthiest way to think about X Algorithm is as an assembly line that turns your behavior into forecasts, then turns forecasts into a ranked feed. In network retrieval gives you familiar voices. Out of network retrieval gives you discovery. Filters enforce eligibility. Ranking predicts actions. Diversity shapes the mix. Visibility filtering enforces safety.

That means you have more agency than it feels like you do. Your clicks, dwell, blocks, mutes, and “Not interested” taps are literally part of the model’s target space.

If you’re a builder, read the code and trace a request through Home Mixer and the candidate pipeline. If you’re a creator, design posts that earn the action profile you want. If you’re a power user, stop rewarding content you dislike with attention.

Want more breakdowns like this, tuned for engineers who still have a life? Subscribe, send me the next repo drop, and I’ll dissect it with the same “show me the stage where it happens” mindset.

Glossary

Home Mixer: The orchestration service that assembles the For You feed by running the full pipeline.
Thunder: The in-memory system that serves “in-network” posts from accounts you follow.
Phoenix: The ML component for out-of-network retrieval and transformer-based ranking.
In-Network: Content from accounts you follow, typically retrieved via Thunder.
Out-of-Network: Discovery content from outside your follows, retrieved via Phoenix.
Two-Tower Retrieval: A retrieval setup with one model embedding the user and another embedding posts, matched via similarity.
Embedding: A numeric vector representation of a user or post used for similarity search and modeling.
Candidate Pipeline: The stage-based framework that runs Sources, Hydrators, Filters, Scorers, and Selection.
Hydration: Enriching a post with metadata (author info, media details, subscriptions) before filtering and ranking.
Pre-Scoring Filters: Rules that remove ineligible posts before the model scores them (duplicates, mutes, blocks, already-seen).
Grok-Based Transformer: A transformer model style associated with xAI Grok, adapted here for recommendation scoring.
Attention Masking: A transformer constraint that controls what tokens can “see” each other during attention.
Candidate Isolation: The design where each candidate post is scored independently of other candidates in the batch.
Weighted Scoring: Combining multiple predicted action probabilities into one score using a weight per action.
Visibility Filtering: Post-selection enforcement that removes content based on safety/policy categories (spam, deleted, violence, etc.).

FAQ

1) Is X Algorithm open source?

Yes. X has published the core “For You” feed pipeline code on GitHub under xai-org/x-algorithm, covering retrieval, ranking, and filtering stages.

2) How does the X Algorithm work in plain English?

X Algorithm builds a candidate set from accounts you follow (Thunder) plus discovery posts (Phoenix retrieval), hydrates and filters them, then ranks using a Grok-style transformer that predicts engagement actions, and finally applies post-selection visibility filters.

3) What is the candidate pipeline in X Algorithm?

The candidate pipeline is the reusable framework that runs the feed as stages: Sources fetch candidates, Hydrators enrich them, Filters remove ineligible items, Scorers assign scores, and a Selector returns the top results.

4) What is “attention masking” in X Algorithm?

It’s the transformer trick that prevents candidates from attending to each other during ranking. Each post is scored based on user context, not on what else happened to be in the batch, which keeps scoring stable (candidate isolation).

5) How to reset X Algorithm (and actually change your For You feed)?

There’s no single “factory reset,” but you can reset X Algorithm behavior by changing the signals it learns from:

  • Use “Not interested” on unwanted posts early and often.
  • Mute keywords and mute/block accounts you don’t want.
  • Clean up Interests and Topics in settings.
  • Spend time in Following to rebuild in-network signals.
  • Stop rage-scrolling, long dwell still trains the model.
