Category Encoders

A set of scikit-learn-style transformers for encoding categorical variables into numeric form using a variety of techniques.

Category Encoders is a Python library for encoding categorical variables for machine learning tasks. It is available on contrib.scikit-learn.org and extends the capabilities of scikit-learn’s preprocessing module.

The library provides several powerful encoding techniques for dealing with categorical data, including:

  • Ordinal encoding: maps categorical variables to integer values based on their order of appearance
  • One-hot encoding: creates a binary feature for each category in a variable
  • Binary encoding: maps each category to a binary code
  • Target encoding: encodes each category with the mean target value for that category
  • Hashing encoding: applies a hash function to map each category into a fixed number of output columns (the hashing trick)

Category Encoders also supports a range of advanced features, such as handling missing values, combining multiple encoders, and applying encoders to specific subsets of features.
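As a quick illustration, the sketch below applies two of the encoders listed above to a small toy DataFrame. The data and column names are invented for the example; TargetEncoder and BinaryEncoder are used through the library's scikit-learn-style fit/transform interface.

    import pandas as pd
    import category_encoders as ce

    # Toy data: two categorical columns and a binary target (illustrative only)
    X = pd.DataFrame({"color": ["red", "green", "blue", "green"],
                      "size":  ["S", "M", "L", "M"]})
    y = pd.Series([1, 0, 1, 0])

    # Target encoding: each category is replaced by a (smoothed) mean of y
    target_enc = ce.TargetEncoder(cols=["color", "size"])
    X_target = target_enc.fit_transform(X, y)

    # Binary encoding: each category's ordinal code is spread across binary columns
    X_binary = ce.BinaryEncoder(cols=["color"]).fit_transform(X)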

Overall, Category Encoders is a useful tool for preprocessing categorical data and improving the accuracy and performance of machine learning models.

Yellowbrick: Machine Learning Visualization

https://www.scikit-yb.org/en/latest/

Feature Visualization

Classification Visualization

Regression Visualization

Clustering Visualization

Model Selection Visualization

Target Visualization

  • Balanced Binning Reference: generate a histogram with vertical lines showing the recommended split points for binning the data into evenly distributed bins
  • Class Balance: see how the distribution of classes affects the model (a usage sketch follows this list)
  • Feature Correlation: display the correlation between features and the dependent variable
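For example, the Class Balance visualizer can be used roughly as follows; the synthetic dataset and class labels are made up for the sketch.

    from sklearn.datasets import make_classification
    from yellowbrick.target import ClassBalance

    # Synthetic, imbalanced 3-class problem (illustrative only)
    X, y = make_classification(n_samples=500, n_classes=3, n_informative=4,
                               weights=[0.6, 0.3, 0.1], random_state=0)

    # Bar chart of class frequencies to reveal imbalance in the target
    visualizer = ClassBalance(labels=["class_0", "class_1", "class_2"])
    visualizer.fit(y)
    visualizer.show()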

Text Visualization

TabTransformer: Tabular Data Modeling Using Contextual Embeddings

The main idea in the paper is that the performance of a regular multi-layer perceptron (MLP) can be significantly improved if Transformers are used to transform regular categorical embeddings into contextual ones.

The TabTransformer is built upon self-attention-based Transformers. The Transformer layers transform the embeddings of categorical features into robust contextual embeddings to achieve higher prediction accuracy.
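The following PyTorch-style sketch illustrates this idea at a high level. It is not the paper's exact architecture; the layer sizes and names are invented for the example, and details such as column-specific embeddings for missing values are omitted. Each categorical column gets its own embedding table, a Transformer encoder turns those embeddings into contextual ones, and the result is concatenated with the continuous features and fed to an MLP.

    import torch
    import torch.nn as nn

    class TabTransformerSketch(nn.Module):
        def __init__(self, cardinalities, num_continuous, dim=32, depth=2, heads=4):
            super().__init__()
            # One embedding table per categorical column
            self.embeddings = nn.ModuleList(
                [nn.Embedding(card, dim) for card in cardinalities]
            )
            encoder_layer = nn.TransformerEncoderLayer(
                d_model=dim, nhead=heads, batch_first=True
            )
            # Transformer layers turn per-column embeddings into contextual ones
            self.transformer = nn.TransformerEncoder(encoder_layer, num_layers=depth)
            mlp_in = dim * len(cardinalities) + num_continuous
            self.mlp = nn.Sequential(
                nn.Linear(mlp_in, 64), nn.ReLU(), nn.Linear(64, 1)
            )

        def forward(self, x_categ, x_cont):
            # x_categ: (batch, n_categ) integer codes; x_cont: (batch, n_cont) floats
            tokens = torch.stack(
                [emb(x_categ[:, i]) for i, emb in enumerate(self.embeddings)], dim=1
            )                                    # (batch, n_categ, dim)
            contextual = self.transformer(tokens)
            flat = contextual.flatten(start_dim=1)
            return self.mlp(torch.cat([flat, x_cont], dim=1))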