Here, I kept a copy of my model and didn't train it at all for several months. We can see from both the histogram and the line plot that the model shows signs of degradation after June 2021. A previous data analysis suggested that the model only had to be re-trained every year, but this anomaly analysis suggests it should instead happen every 4 months.

Handling anomalies

Anomalies can happen, among multiple reasons, because:
- There is an anomaly in the data processing pipeline
- The model is unstable or has to be re-trained
- There is an external factor

For instance, if there is an economic crisis, most observations will have a sharp increase in their default score because their health has declined. Anomalous does not necessarily mean incoherent.

In my case, the data processing pipeline wasn't finalized yet, and even if the model's re-training frequency had already been inferred, I wanted to monitor it on current data. Therefore, CLT-based anomaly detection was necessary mainly for the first two cases.
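As a minimal sketch of this kind of CLT-based monitoring (the column names, the monthly grouping, and the z = 3 threshold are illustrative assumptions, not the exact setup used here), one can flag any month whose mean default score drifts outside the confidence band implied by the central limit theorem around a baseline mean:

```python
import numpy as np
import pandas as pd

def clt_anomaly_flags(current: pd.DataFrame,
                      baseline_scores: pd.Series,
                      score_col: str = "default_score",
                      date_col: str = "date",
                      z: float = 3.0) -> pd.DataFrame:
    """Flag months whose mean score leaves the CLT confidence band."""
    # Baseline statistics, estimated on a period where the model
    # was known to behave well (e.g. right after training).
    mu = baseline_scores.mean()
    sigma = baseline_scores.std()

    # Per-month sample mean and sample size on the monitored data.
    monthly = (
        current.assign(**{date_col: pd.to_datetime(current[date_col])})
               .set_index(date_col)
               .groupby(pd.Grouper(freq="MS"))[score_col]
               .agg(["mean", "count"])
    )

    # CLT: the mean of n i.i.d. scores should stay within
    # mu +/- z * sigma / sqrt(n) if the score distribution is unchanged.
    half_width = z * sigma / np.sqrt(monthly["count"])
    monthly["anomalous"] = (monthly["mean"] - mu).abs() > half_width
    return monthly
```

A month flagged this way is only a signal, not a verdict: it would then be checked against the three causes above (pipeline issue, model degradation, external factor) before deciding whether to re-train.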