MLOps Newsletter

Sitemap - 2021 - MLOps Newsletter

Gopher (280B Parameter Language Model)

Mixture of Experts in Training

Graph Neural Networks

Composable Models, Model Ensembles

Reddit Serving Infrastructure

Kaggle Survey of Data Scientists

Differentially Private Clustering

Stateof.ai, Collaborative Training

Sustainable.ai, Parallelformers, Model Optimization

Graph Neural Networks, Neural Machine Translation

BERT Busters

Tesla AI Day

Wide and Deep Learning Recommender Systems

Graph Neural Networks

How to CI/CD ML Models

Distributed Training and Hyperparameter Optimization

Compositional Object Classifier

CVPR and ICLR Keynote videos

Neural Network Quantization and TRILL

Do Wide and Deep Networks Learn the Same Things?

Attention is not all you need

Jax, PyTorchVideo and FeatureStore

DINO, SEER and Accelerate

Factorized Layers for Compression

Video Encoding through machine learning

Open Domain Question Answering

How to lead paper reading sessions

3D Scene Understanding

Generalization of Deep Learning Models

Sparsification and TVM AutoScheduler

Multi-Modal Neurons

Taming Transformers

Unlearning, S4TF and Model Search

A call to new notation

Neural Based Decompilers

MLOps::Relational Inductive Biases

Can language models understand language?

Controllable Text Generation

Learned Indices and Software 2.0

Transformers is all you need
