Description:
Test-Time Training is a general approach for improving the performance of predictive models when training and test data come from different distributions. A single unlabeled test sample is turned into a self-supervised learning problem, on which the model parameters are updated before making a prediction. This also extends naturally to data in an online stream. This simple approach leads to improvements on diverse image classification benchmarks aimed at evaluating robustness to distribution shifts.
In this video, I talk about the following: What is Test-Time Training? How does Test-Time Training perform?
For more details, please look at https://proceedings.mlr.press/v119/sun20b/sun20b.pdf
Sun, Yu, Xiaolong Wang, Zhuang Liu, John Miller, Alexei Efros, and Moritz Hardt. "Test-time training with self-supervision for generalization under distribution shifts." In International conference on machine learning, pp. 9229-9248. PMLR, 2020.
Thanks for watching!
LinkedIn: http://aka.ms/manishgupta
HomePage: https://sites.google.com/view/manishg/