Experimentation Platform at Netflix
Orbit, Elliott, Intelligent Automation Platform and Interpretability
Libraries
Orbit, open-sourced by Uber, is a Python package for Bayesian time series modeling and inference. It provides a familiar and intuitive initialize-fit-predict interface for time series tasks, while using probabilistic programming languages under the hood.
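Below is a minimal sketch of that initialize-fit-predict pattern on a synthetic weekly series. The column names and the DLT model class are illustrative; exact class names and import paths may differ across Orbit versions, so check the docs for your installed release.

```python
# A minimal sketch of Orbit's initialize-fit-predict pattern (not from the post).
import numpy as np
import pandas as pd
from orbit.models import DLT  # Damped Local Trend model; name may vary by version

# Synthetic weekly series with trend, yearly seasonality, and noise
dates = pd.date_range("2018-01-01", periods=200, freq="W")
y = (np.linspace(10, 30, 200)
     + 5 * np.sin(np.arange(200) * 2 * np.pi / 52)
     + np.random.normal(0, 1, 200))
df = pd.DataFrame({"week": dates, "response": y})
train, test = df.iloc[:180], df.iloc[180:]

# Initialize
dlt = DLT(response_col="response", date_col="week", seasonality=52, seed=8888)
# Fit (inference runs in a probabilistic programming backend under the hood)
dlt.fit(df=train)
# Predict
predicted = dlt.predict(df=test)
print(predicted.head())
```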
Elliott is a comprehensive recommender system framework that lets you build recommendation models by combining a number of different modules.
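The framework is config-driven: a YAML file declares the dataset, splitting strategy, models, and evaluation metrics, and a single entry point runs the experiment. A minimal sketch is below; the config path is a placeholder, and the exact YAML schema depends on the version you install.

```python
# A minimal sketch of running an Elliott experiment from a YAML configuration.
# The path below is illustrative; the config file is where you wire together
# the data, splitting, model, and metric modules the library provides.
from elliot.run import run_experiment

run_experiment("config_files/basic_configuration.yml")
```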
Inference Policies is an MLCommons repository that collects the rules for training and inference benchmarks, intended to set standards for ML training and inference tasks.
Articles
Netflix wrote about how important A/B testing and experimentation overall are to Netflix in this blog post. The post follows earlier entries in the series, which covered the basics of A/B tests (Part 1 and Part 2), core statistical concepts (Part 3 and Part 4), and how to build confidence in decisions based on A/B test results (Part 5).
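As a reminder of the kind of statistical machinery the series covers, here is a textbook two-proportion z-test for a conversion-rate A/B test. This is a generic example with made-up counts, not Netflix's internal methodology.

```python
# Generic two-proportion z-test for an A/B test (illustrative numbers only).
import numpy as np
from statsmodels.stats.proportion import proportions_ztest

conversions = np.array([1260, 1320])   # successes in control, treatment
samples = np.array([10000, 10000])     # users exposed to each cell

stat, p_value = proportions_ztest(count=conversions, nobs=samples)
print(f"z = {stat:.3f}, p = {p_value:.4f}")
# Reject the null of equal conversion rates at alpha = 0.05 if p < 0.05.
```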
PAIR from Google wrote a nice overview post on whether differential privacy and fairness can be applied together. They look at various tradeoffs, such as model accuracy and efficiency, when combining the two.
Google wrote about an ML pipeline and how it can be implemented with Vertex AI on GCP in the following blog post.
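For context, Vertex AI Pipelines accepts pipelines compiled with the Kubeflow Pipelines (kfp) SDK. The sketch below is a bare-bones example, not the pipeline from the post; the component bodies are stand-ins and the exact imports vary slightly across kfp versions.

```python
# A minimal kfp pipeline that can be compiled and submitted to Vertex AI Pipelines.
from kfp import dsl, compiler


@dsl.component
def preprocess(message: str) -> str:
    # Stand-in for a real data-preparation step.
    return message.upper()


@dsl.component
def train(data: str) -> str:
    # Stand-in for a real training step.
    return f"model trained on: {data}"


@dsl.pipeline(name="demo-training-pipeline")
def pipeline(message: str = "hello vertex"):
    prep_task = preprocess(message=message)
    train(data=prep_task.output)


if __name__ == "__main__":
    # Compile to a spec that can be submitted as a Vertex AI PipelineJob
    # (e.g. via google.cloud.aiplatform.PipelineJob).
    compiler.Compiler().compile(pipeline_func=pipeline, package_path="pipeline.json")
```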
Airbnb wrote about the system architecture of its Intelligent Automation Platform, describing how workflows are built with no code through FlowBuilder and then deployed. The post does not go into detail on ML models and training flows, but it is a good explanation of the serving and inference side of the models.
Amazon wrote about how they use a ConvNet and an attention mechanism to ensure that product images match the product titles they accompany.
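To make the idea concrete, here is a generic PyTorch sketch of image-title matching with a ConvNet encoder and cross-attention. This is not Amazon's architecture; every module and dimension choice below is an assumption made purely for illustration.

```python
# Generic image-title matching sketch: CNN region features + title query attention.
import torch
import torch.nn as nn
import torchvision.models as models


class ImageTitleMatcher(nn.Module):
    def __init__(self, vocab_size: int, embed_dim: int = 256):
        super().__init__()
        # ConvNet backbone producing a grid of region features.
        backbone = models.resnet18(weights=None)
        self.cnn = nn.Sequential(*list(backbone.children())[:-2])  # (B, 512, 7, 7)
        self.img_proj = nn.Linear(512, embed_dim)
        # Title encoder: token embeddings mean-pooled into a single query vector.
        self.token_embed = nn.Embedding(vocab_size, embed_dim)
        # Cross-attention: the title query attends over image regions.
        self.attn = nn.MultiheadAttention(embed_dim, num_heads=4, batch_first=True)
        self.classifier = nn.Linear(embed_dim, 1)  # match / mismatch logit

    def forward(self, images: torch.Tensor, title_tokens: torch.Tensor) -> torch.Tensor:
        feats = self.cnn(images)                          # (B, 512, 7, 7)
        regions = feats.flatten(2).transpose(1, 2)        # (B, 49, 512)
        regions = self.img_proj(regions)                  # (B, 49, D)
        query = self.token_embed(title_tokens).mean(1, keepdim=True)  # (B, 1, D)
        attended, _ = self.attn(query, regions, regions)  # (B, 1, D)
        return self.classifier(attended.squeeze(1)).squeeze(-1)       # (B,)


# Usage with dummy data:
model = ImageTitleMatcher(vocab_size=10000)
logits = model(torch.randn(2, 3, 224, 224), torch.randint(0, 10000, (2, 12)))
```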
The Gradient published a great article on interpretability, covering a number of areas.
Papers
LabML has a collection of papers organized by conference on this page.