---
license: cc-by-nc-nd-4.0
configs:
- config_name: default
  data_files:
  - split: train
    path: data/train-*
  - split: test
    path: data/test-*
  - split: val
    path: data/val-*
dataset_info:
  features:
  - name: input_ids
    dtype: string
  - name: cell_type
    dtype: string
  splits:
  - name: train
    num_bytes: 2314316937
    num_examples: 218732
  - name: test
    num_bytes: 288846799
    num_examples: 27388
  - name: val
    num_bytes: 289505418
    num_examples: 27382
  download_size: 2322876358
  dataset_size: 2892669154
task_categories:
- text-generation
- question-answering
language:
- en
tags:
- biology
- pytorch
- causal-lm
size_categories:
- 100K<n<1M
---


# Overview

Cell2Sentence is a novel method for adapting large language models to single-cell transcriptomics. 
We transform single-cell RNA sequencing data into sequences of gene names ordered by expression level, termed "cell sentences". 
This dataset was constructed from the immune tissue dataset of [Domínguez Conde et al.](https://www.science.org/doi/10.1126/science.abl5197), 
and it was used to fine-tune a [Pythia-160m model](https://huggingface.co/EleutherAI/pythia-160m) capable of generating complete cells, as described in our paper. 
Details about the Cell2Sentence transformation and preprocessing pipeline can be found in the paper and GitHub repository linked below.
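
As a minimal sketch of the cell-sentence idea (the exact preprocessing pipeline, including normalization, gene filtering, and the number of genes kept, is defined in the paper and GitHub repo), a cell sentence can be formed by ranking a cell's genes by expression and joining their names:

```python
import numpy as np

def cell_to_sentence(expression: np.ndarray, gene_names: list[str], top_k: int = 100) -> str:
    """Convert one cell's expression vector into a 'cell sentence':
    gene names ordered by decreasing expression, with unexpressed genes dropped."""
    order = np.argsort(-expression)                       # highest-expressed genes first
    ranked = [gene_names[i] for i in order if expression[i] > 0]
    return " ".join(ranked[:top_k])                       # keep the top_k most expressed genes

# Toy example with made-up genes and counts (for illustration only)
genes = ["CD3D", "MS4A1", "LYZ", "NKG7"]
counts = np.array([5.0, 0.0, 12.0, 3.0])
print(cell_to_sentence(counts, genes))                    # -> "LYZ CD3D NKG7"
```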

GitHub: <https://github.com/vandijklab/cell2sentence-ft>  
Paper: <https://www.biorxiv.org/content/10.1101/2023.09.11.557287v3>  
Model Card: <https://huggingface.co/vandijklab/pythia-160m-c2s>
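
The splits can be loaded with the 🤗 `datasets` library; note that the repository ID below is a placeholder and should be replaced with this dataset's actual ID. Each example stores the cell sentence as a string in `input_ids` along with its annotated `cell_type`.

```python
from datasets import load_dataset

# NOTE: "vandijklab/<this-dataset>" is a placeholder; substitute this dataset's repo ID.
ds = load_dataset("vandijklab/<this-dataset>")

train, val, test = ds["train"], ds["val"], ds["test"]
print(train[0]["cell_type"])          # annotated cell type for the first example
print(train[0]["input_ids"][:200])    # beginning of the cell sentence (stored as a string)
```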