Conditional Memory And DeepSeek Engram: When Lookup Beats More Compute


Bigger models keep winning, but the reason is not always "more intelligence." Sometimes it is just less wasted work. The Engram paper makes an almost irritatingly sensible point. Transformers do two jobs at once: they remember stable patterns, and …
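The lookup-beats-compute tradeoff the teaser gestures at can be shown with plain memoization. This is a generic sketch of the general idea, not the Engram mechanism itself: once a stable result is cached, every later request is a table lookup instead of repeated compute.

```python
from functools import lru_cache

def slow_square(n: int) -> int:
    # Stand-in for an expensive but stable computation:
    # the answer never changes, so recomputing it is wasted work.
    return sum(n for _ in range(n))  # deliberately O(n)

@lru_cache(maxsize=None)
def cached_square(n: int) -> int:
    # Same result, but after the first call it is a dict lookup.
    return slow_square(n)

# First call pays the compute cost; every later call is a lookup.
print(cached_square(10_000))                # 100000000
print(cached_square.cache_info().hits)      # 0 (only a miss so far)
cached_square(10_000)
print(cached_square.cache_info().hits)      # 1 (served from the cache)
```

`lru_cache` is just the standard-library way to make the pattern concrete; the point is that the second call does no arithmetic at all.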

Deep Delta Learning Explained: The Delta Residual Block That Turns ResNet Shortcuts Into Learnable Geometry


ResNet’s big trick is almost insultingly simple. Add an identity shortcut, stop gradients from dying, train deeper models. The shortcut is a safety rail, and in deep residual learning for image recognition it turned “very deep” from a heroic gamble into a default option. That …
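The identity shortcut the teaser describes fits in a few lines. This is a minimal NumPy sketch of y = x + F(x), not the paper's full block with convolutions and batch norm; the weight shapes and zero initialization are illustrative assumptions.

```python
import numpy as np

def residual_block(x: np.ndarray, w1: np.ndarray, w2: np.ndarray) -> np.ndarray:
    """y = x + F(x): the layers only have to learn the residual F,
    while the identity shortcut lets gradients flow straight through."""
    h = np.maximum(x @ w1, 0.0)  # first linear layer + ReLU
    fx = h @ w2                  # second linear layer: the residual F(x)
    return x + fx                # identity shortcut

d = 8
rng = np.random.default_rng(0)
x = rng.normal(size=(1, d))

# With zero-initialized residual weights, the block is exactly the identity,
# which is why stacking many of them starts out harmless rather than harmful.
w1 = np.zeros((d, d))
w2 = np.zeros((d, d))
assert np.allclose(residual_block(x, w1, w2), x)
```

The assertion makes the "safety rail" point concrete: a block that has learned nothing yet passes its input through unchanged.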

AI in scientific research: Stop Calling It “Slop”, The Data Says Discovery Is Getting Democratized


A funny thing happened after ChatGPT shipped. The average paper got easier to read. Not “more correct,” not “more insightful.” Just smoother. The kind of prose that used to signal a careful researcher suddenly became the default setting. That’s why the “AI …