Explaining Financial Anomalies
- Initial paper: https://arxiv.org/pdf/2209.10658.pdf
- Code: https://github.com/topics/denoising-autoencoders
- Kaggle example: Kaggle notebook
- Use case: Bundesbank (2023) paper
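For orientation, here is a minimal PyTorch sketch of the denoising-autoencoder idea behind the paper (layer sizes, noise level, and training loop are illustrative assumptions, not the paper's exact setup): corrupt the input, train the network to reconstruct the clean version, and read large per-feature reconstruction errors as an explanation of what makes a row anomalous.

```python
import torch
import torch.nn as nn

class DenoisingAutoencoder(nn.Module):
    """Toy denoising autoencoder for tabular data (illustrative sizes)."""

    def __init__(self, n_features: int, hidden: int = 16):
        super().__init__()
        self.encoder = nn.Sequential(nn.Linear(n_features, hidden), nn.ReLU())
        self.decoder = nn.Linear(hidden, n_features)

    def forward(self, x):
        return self.decoder(self.encoder(x))

n_features = 8
model = DenoisingAutoencoder(n_features)
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.MSELoss()

x = torch.randn(256, n_features)           # placeholder training batch
for _ in range(100):
    noisy = x + 0.1 * torch.randn_like(x)  # Gaussian corruption (assumed scheme)
    optimizer.zero_grad()
    loss = loss_fn(model(noisy), x)        # reconstruct the *clean* input
    loss.backward()
    optimizer.step()

# Per-feature reconstruction error: large values point to the features
# that make an observation anomalous.
with torch.no_grad():
    errors = (model(x) - x).pow(2)         # shape (batch, n_features)
anomaly_score = errors.mean(dim=1)         # one score per row
```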
TabTransformer
The main idea in the paper is that the performance of a regular Multi-Layer Perceptron (MLP) can be significantly improved by using Transformers to transform regular categorical embeddings into contextual ones.
The TabTransformer is built upon self-attention-based Transformers. The Transformer layers transform the embeddings of categorical features into robust contextual embeddings to achieve higher prediction accuracy.
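A minimal PyTorch sketch of this idea (embedding dimension, depth, head count, and the MLP head are assumptions for illustration, not the paper's exact configuration): each categorical column is embedded, the embeddings are contextualized by Transformer layers, and the result is concatenated with the continuous features before the MLP head.

```python
import torch
import torch.nn as nn

class TabTransformerSketch(nn.Module):
    """Illustrative TabTransformer-style model (assumed sizes)."""

    def __init__(self, cardinalities, n_continuous, dim=32, n_heads=4, n_layers=2):
        super().__init__()
        # One embedding table per categorical column
        self.embeddings = nn.ModuleList(
            nn.Embedding(card, dim) for card in cardinalities
        )
        encoder_layer = nn.TransformerEncoderLayer(
            d_model=dim, nhead=n_heads, batch_first=True
        )
        self.transformer = nn.TransformerEncoder(encoder_layer, num_layers=n_layers)
        self.mlp = nn.Sequential(
            nn.Linear(dim * len(cardinalities) + n_continuous, 64),
            nn.ReLU(),
            nn.Linear(64, 1),
        )

    def forward(self, x_cat, x_cont):
        # Embed each categorical column: (batch, n_cat, dim)
        tokens = torch.stack(
            [emb(x_cat[:, i]) for i, emb in enumerate(self.embeddings)], dim=1
        )
        contextual = self.transformer(tokens)   # contextual embeddings
        flat = contextual.flatten(start_dim=1)  # (batch, n_cat * dim)
        return self.mlp(torch.cat([flat, x_cont], dim=1))

model = TabTransformerSketch(cardinalities=[10, 5, 7], n_continuous=4)
x_cat = torch.randint(0, 5, (8, 3))   # toy batch: 3 categorical columns
x_cont = torch.randn(8, 4)            # toy batch: 4 continuous columns
logits = model(x_cat, x_cont)         # (8, 1) prediction scores
```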
Are deep learning models superior?
Notebook examples on automated EDA (Exploratory Data Analysis) and AutoML
Python module to perform under-sampling and over-sampling with various techniques (see the example after the link)
Library: https://imbalanced-learn.org/
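A short example using imbalanced-learn's documented API: `SMOTE` for over-sampling the minority class and `RandomUnderSampler` for under-sampling the majority class, demonstrated on a synthetic imbalanced dataset.

```python
from collections import Counter

from sklearn.datasets import make_classification
from imblearn.over_sampling import SMOTE
from imblearn.under_sampling import RandomUnderSampler

# Synthetic binary dataset with a 90/10 class imbalance
X, y = make_classification(n_samples=1000, weights=[0.9, 0.1], random_state=0)
print(Counter(y))                  # imbalanced class counts

# Over-sampling: synthesize new minority-class points with SMOTE
X_over, y_over = SMOTE(random_state=0).fit_resample(X, y)
print(Counter(y_over))             # classes balanced by adding samples

# Under-sampling: randomly drop majority-class points
X_under, y_under = RandomUnderSampler(random_state=0).fit_resample(X, y)
print(Counter(y_under))            # classes balanced by removing samples
```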