90/30 Club (ML reading) #50: LeJEPA: Provable and Scalable Self-Supervised Learning Without the Heuristics
Hosted on Luma
Tuesday, April 28, 2026
AI
Event Type
in person
57
Participants
5
Est. Projects
Organizers
Alex Johnson
alex@example.org
Jamie Rivera
jamie@example.org
Sam Chen
sam@example.org
Quality Score
72/100
High confidence
Organiser 16/20
Event Maturity 14/20
Sponsors 18/25
Participants 12/20
Week 50: LeJEPA: Provable and Scalable Self-Supervised Learning Without the Heuristics
Paper Link
Most self-supervised learning methods work by carefully balancing instability: remove the stop-gradients, momentum encoders, or augmentation tricks, and they collapse. We have strong empirical results, but not a clean understanding of why they work. LeJEPA pushes in the opposite direction: instead of stabilizing training with heuristics, it builds a system where the objective itself prevents collapse.
The core idea is simple but sharp: learn representations by predicting latent structure across views, while designing the objective so that trivial solutions (e.g., constant embeddings) are provably suboptimal. This removes the need for the asymmetric updates of methods like BYOL, and for the contrastive negatives of SimCLR.
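To make the anti-collapse idea concrete before the session, here is a minimal numpy sketch of a JEPA-style objective: a latent prediction term plus a penalty that pushes the batch covariance of the embeddings toward the identity. This is an illustrative toy, not the paper's exact objective (LeJEPA's actual regularizer and encoder are more involved); the function names, the linear "encoder", and the penalty weight are assumptions made up for this example.

```python
import numpy as np

rng = np.random.default_rng(0)

def embed(x, W):
    # toy encoder: a linear map standing in for a deep network
    return x @ W

def jepa_style_loss(z1, z2, lam=1.0):
    """Latent prediction loss plus an isotropy penalty (toy version).

    The first term asks the embedding of view 1 to predict the
    embedding of view 2; the second pushes the batch covariance
    toward the identity, so a constant (collapsed) embedding --
    whose covariance is all zeros -- scores strictly worse than
    an embedding that spreads out in latent space.
    """
    pred = np.mean((z1 - z2) ** 2)                    # latent prediction error
    zc = z1 - z1.mean(axis=0, keepdims=True)
    cov = zc.T @ zc / (len(z1) - 1)                   # batch covariance
    iso = np.mean((cov - np.eye(cov.shape[0])) ** 2)  # distance from isotropy
    return pred + lam * iso

# two noisy "views" of the same underlying data
x = rng.normal(size=(256, 8))
W = rng.normal(size=(8, 4)) / np.sqrt(8)
z1 = embed(x + 0.01 * rng.normal(size=x.shape), W)
z2 = embed(x + 0.01 * rng.normal(size=x.shape), W)

collapsed = np.zeros_like(z1)  # the trivial constant solution
# collapse zeroes the prediction term but pays the full isotropy penalty
assert jepa_style_loss(collapsed, collapsed) > 0
```

The point of the sketch: no stop-gradient or momentum encoder is needed, because the regularizer alone makes the collapsed solution suboptimal by construction.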
Join us at Mox to explore:
What fundamentally causes collapse in self-supervised learning, and why most existing methods need architectural asymmetry to avoid it.
Why predicting in latent space changes the game: moving away from pixel reconstruction or contrastive alignment toward structured representation prediction.
🔎Analyzed Papers
Discussion at 20:00; (optional) quiet reading from 19:00.
Operations 12/15
Why this score
Strong organiser track record
Returning event
Well-sponsored
Missing data
Prize details
Code of conduct