---
license: cc-by-4.0
configs:
- config_name: client_split
data_files: "client_split.tar.gz"
- config_name: detail
data_files: "detail.tar.gz"
- config_name: ptls
data_files: "ptls.tar.gz"
- config_name: targets
data_files: "targets.tar.gz"
---
# Intro
Predicting a customer's propensity to purchase a product is an important task for many companies. It helps to:
- assess the customer's needs and build their product profile;
- improve the quality of recommendations, form package offers, and tailor individual conditions;
- shape the communication strategy with the customer;
- estimate the income the customer may bring to the company in the future, based on the profitability of the products they are interested in (customer lifetime value, CLTV).

Various kinds of customer data are usually used to solve such problems:
- customer profile;
- history of previous purchases and communications;
- transactional activity;
- geo-information about places of permanent or temporary residence;
- etc.

Data characterizing patterns of client behavior (chains of events) is especially important, as it helps to uncover regularities in the client's actions and to assess how their behavior changes over time. Combining behavioral data from several sources describes the client more fully for the purpose of predicting their needs, which in turn poses the problem of combining different modalities most effectively to improve the performance and quality of the resulting model.
# Data
The dataset consists of anonymized historical data for a subset of the Bank's clients over 12 months. It contains transactional activity (transactions), dialog embeddings (dialogs), and geo-activity (geostream).

Objective: for each client, predict whether each of the four products will be taken within a month after the reporting date. The historical labels are provided in `targets`.
```
client_split                          Desc: Splitting clients into folds
|-- client_id: str                    Desc: Client id
|-- fold: int

detail
|-- dialog                            Desc: Dialogue embeddings
|   |-- client_id: str                Desc: Client id
|   |-- event_time: timestamp         Desc: Dialog's date
|   |-- embedding: array[float]       Desc: Dialog's embedding
|-- geo                               Desc: Geo activity
|   |-- client_id: str                Desc: Client id
|   |-- event_time: timestamp         Desc: Event datetime
|   |-- fold: int
|   |-- geohash_4: int                Desc: Geohash level 4
|   |-- geohash_5: int                Desc: Geohash level 5
|   |-- geohash_6: int                Desc: Geohash level 6
|-- trx                               Desc: Transactional activity
|   |-- client_id: str                Desc: Client id
|   |-- event_time: timestamp         Desc: Transaction's date
|   |-- amount: float                 Desc: Transaction's amount
|   |-- fold: int
|   |-- event_type: int               Desc: Transaction's type
|   |-- event_subtype: int            Desc: Clarifying the transaction type
|   |-- currency: int                 Desc: Currency
|   |-- src_type11: int               Desc: Feature 1 for sender
|   |-- src_type12: int               Desc: Clarifying feature 1 for sender
|   |-- dst_type11: int               Desc: Feature 1 for contractor
|   |-- dst_type12: int               Desc: Clarifying feature 1 for contractor
|   |-- src_type21: int               Desc: Feature 2 for sender
|   |-- src_type22: int               Desc: Clarifying feature 2 for sender
|   |-- src_type31: int               Desc: Feature 3 for sender
|   |-- src_type32: int               Desc: Clarifying feature 3 for sender

ptls                                  Desc: Same data as detail, but in pytorch-lifestream format (https://github.com/dllllb/pytorch-lifestream)
|-- dialog                            Desc: Dialogue embeddings
|   |-- client_id: str                Desc: Client id
|   |-- event_time: Array[timestamp]  Desc: Dialog's date
|   |-- embedding: Array[float]       Desc: Dialog's embedding
|-- geo                               Desc: Geo activity
|   |-- client_id: str                Desc: Client id
|   |-- event_time: Array[timestamp]  Desc: Event datetime
|   |-- fold: int
|   |-- geohash_4: Array[int]         Desc: Geohash level 4
|   |-- geohash_5: Array[int]         Desc: Geohash level 5
|   |-- geohash_6: Array[int]         Desc: Geohash level 6
|-- trx                               Desc: Transactional activity
|   |-- client_id: str                Desc: Client id
|   |-- event_time: Array[timestamp]  Desc: Transaction's date
|   |-- amount: Array[float]          Desc: Transaction's amount
|   |-- fold: int
|   |-- event_type: Array[int]        Desc: Transaction's type
|   |-- event_subtype: Array[int]     Desc: Clarifying the transaction type
|   |-- currency: Array[int]          Desc: Currency
|   |-- src_type11: Array[int]        Desc: Feature 1 for sender
|   |-- src_type12: Array[int]        Desc: Clarifying feature 1 for sender
|   |-- dst_type11: Array[int]        Desc: Feature 1 for contractor
|   |-- dst_type12: Array[int]        Desc: Clarifying feature 1 for contractor
|   |-- src_type21: Array[int]        Desc: Feature 2 for sender
|   |-- src_type22: Array[int]        Desc: Clarifying feature 2 for sender
|   |-- src_type31: Array[int]        Desc: Feature 3 for sender
|   |-- src_type32: Array[int]        Desc: Clarifying feature 3 for sender

targets
|-- mon: str                          Desc: Reporting month
|-- target_1: int                     Desc: Mark of product issuance in the first reporting month
|-- target_2: int                     Desc: Mark of product issuance in the second reporting month
|-- target_3: int                     Desc: Mark of product issuance in the third reporting month
|-- target_4: int                     Desc: Mark of product issuance in the fourth reporting month
|-- trans_count: int                  Desc: Number of transactions
|-- diff_trans_date: int              Desc: Time difference between transactions
|-- client_id: str                    Desc: Client id
```
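The relation between `detail` and `ptls` is essentially a group-by over `client_id`: one row per event versus one row per client with events aggregated into arrays. As a rough illustration only (not the official conversion; `pytorch-lifestream` ships its own preprocessing utilities), a flat `trx`-style table could be collapsed into the per-client array layout like this, using a toy pandas frame:

```python
import pandas as pd

# Toy table in the detail/trx layout: one row per transaction (values are illustrative)
trx = pd.DataFrame({
    "client_id":  ["a", "a", "b"],
    "event_time": pd.to_datetime(["2022-01-01", "2022-01-05", "2022-01-02"]),
    "amount":     [10.0, 25.5, 7.0],
    "event_type": [1, 2, 1],
})

# Collapse to one row per client, with events ordered by time and stored as arrays,
# which is the shape used by the ptls config
ptls_like = (
    trx.sort_values("event_time")
       .groupby("client_id")
       .agg(list)
       .reset_index()
)
print(ptls_like)
```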
# Load dataset
## Download a single file
Download a single file with `datasets`:
```python
from datasets import load_dataset
dataset = load_dataset("ai-lab/MBD", 'client_split')
```
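The loaded object is a regular 🤗 `datasets` `DatasetDict`; by default its only split is `"train"`. A quick way to check that the fields match the schema above:

```python
# Inspect the loaded config
ds = dataset["train"]
print(ds)       # number of rows and column names
print(ds[0])    # first record; fields should follow the schema above
```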
Download a single file with `huggingface_hub`:
```python
from huggingface_hub import hf_hub_download
hf_hub_download(repo_id="ai-lab/MBD", filename="client_split.tar.gz", repo_type="dataset")
# By default the file is cached in '~/.cache/huggingface/hub/datasets--ai-lab--MBD/snapshots/<hash>/'
# To change this behavior, pass the local_dir argument
```
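The file fetched by `hf_hub_download` is still a tar.gz archive. A minimal sketch for unpacking it with the standard library (the target directory below is illustrative):

```python
import tarfile
from huggingface_hub import hf_hub_download

path = hf_hub_download(repo_id="ai-lab/MBD", filename="client_split.tar.gz", repo_type="dataset")

# Inspect and extract the archive
with tarfile.open(path, "r:gz") as tar:
    print(tar.getnames())               # files packed inside the archive
    tar.extractall("MBD/client_split")  # illustrative target directory
```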
## Download the entire repository
Download the entire repository with `datasets`:
```python
from datasets import load_dataset
dataset = load_dataset("ai-lab/MBD")
```
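Since the repository declares several configs (`client_split`, `detail`, `ptls`, `targets`), you can also fetch them one by one; a sketch:

```python
from datasets import get_dataset_config_names, load_dataset

# Enumerate the configs declared in the dataset card and load each of them
configs = get_dataset_config_names("ai-lab/MBD")
mbd = {name: load_dataset("ai-lab/MBD", name) for name in configs}
```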
Download the entire repository with `huggingface_hub`:
```python
from huggingface_hub import snapshot_download
snapshot_download(repo_id="ai-lab/MBD", repo_type="dataset")
# By default the dataset is saved in '~/.cache/huggingface/hub/datasets--ai-lab--MBD/snapshots/<hash>/'
# To change this behavior, pass the local_dir argument (see the example below)
```
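For example, to place the files into a directory of your choice (the path below is illustrative):

```python
from huggingface_hub import snapshot_download

# Download all files of the dataset repo into ./MBD instead of the HF cache
snapshot_download(repo_id="ai-lab/MBD", repo_type="dataset", local_dir="./MBD")
```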
# Citation
If you use this dataset, please cite our [paper](https://arxiv.org/abs/2002.08232):
```
@inproceedings{Babaev_2022,
  title     = {CoLES: Contrastive Learning for Event Sequences with Self-Supervision},
  author    = {Babaev, Dmitrii and Ovsov, Nikita and Kireev, Ivan and Ivanova, Maria and Gusev, Gleb and Nazarov, Ivan and Tuzhilin, Alexander},
  booktitle = {Proceedings of the 2022 International Conference on Management of Data},
  series    = {SIGMOD/PODS '22},
  publisher = {ACM},
  year      = {2022},
  month     = jun,
  url       = {http://dx.doi.org/10.1145/3514221.3526129},
  doi       = {10.1145/3514221.3526129}
}
```