
When the number of cell clusters is large, the memory may be insufficient for calculating flux. Are there any solutions to this problem? #15

Open
MOSAIFENG opened this issue Sep 19, 2024 · 1 comment

Comments


MOSAIFENG commented Sep 19, 2024

The following is the code I used.

library(Seurat)
library(METAFlux)

# Medium data shipped with METAFlux (used as the `medium` argument below)
data("human_blood")

# Load the previously saved Seurat object
# (saved earlier with saveRDS(seurat_obj, file = "XXXX.rds"))
seurat_obj <- readRDS("/home/saifeng/cross-talk/METAFlux/scanpy4_seurat_obj.rds")

# Bootstrapped mean expression per cell type
mean_exp <- calculate_avg_exp(myseurat = seurat_obj, myident = "cell_type",
                              n_bootstrap = 3, seed = 1)

# Metabolic reaction activity scores
scores <- calculate_reaction_score(data = mean_exp)

# Fraction of each of the 24 cell clusters among all cells
cell_fractions <- round(table(seurat_obj$cell_type) / nrow(seurat_obj@meta.data), 3)
flux <- compute_sc_flux(num_cell = 24, fraction = cell_fractions,
                        fluxscore = scores, medium = human_blood)
Preparing for TME S matrix.....
Error: cannot allocate vector of size 742.2 Gb
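
One way to shrink the allocation might be to reduce the number of cell types that go into the TME S matrix, e.g. by collapsing fine-grained clusters into broader lineages before calculate_avg_exp. Below is a hypothetical sketch of that idea; coarse_map and coarse_type are invented placeholders, and only the METAFlux calls already used above are assumed:

# Hypothetical workaround: collapse fine-grained clusters into broader
# cell types so that num_cell (and hence the TME S matrix) stays small.
# 'coarse_map' and 'coarse_type' are made-up placeholders, not METAFlux API;
# replace the mapping with your own grouping of clusters.
coarse_map <- c("CD4_naive" = "T_cell", "CD4_memory" = "T_cell",
                "CD8_effector" = "T_cell", "B_naive" = "B_cell")
seurat_obj$coarse_type <- unname(coarse_map[as.character(seurat_obj$cell_type)])

mean_exp <- calculate_avg_exp(myseurat = seurat_obj, myident = "coarse_type",
                              n_bootstrap = 3, seed = 1)
scores <- calculate_reaction_score(data = mean_exp)
cell_fractions <- round(table(seurat_obj$coarse_type) / nrow(seurat_obj@meta.data), 3)
flux <- compute_sc_flux(num_cell = length(cell_fractions), fraction = cell_fractions,
                        fluxscore = scores, medium = human_blood)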

@seigfried

flux <- compute_sc_flux(num_cell = nrow(test),
                        fraction = test$Fraction,
                        fluxscore = scores,
                        medium = cell_medium)

Preparing for TME S matrix.....
Error: cannot allocate vector of size 1316.0 Gb

In my case I had an object of only 25,337 cells, with 16 cell types and 2 conditions, hence 32 columns in scores, and a test bootstrap of 1. How can I adjust memory and run this? I will try running one cell type at a time and see if that works.
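
A minimal sketch of that one-cell-type-at-a-time idea, reusing the object and column names from the first comment and assuming compute_sc_flux accepts a single cell type with num_cell = 1 and fraction = 1 (untested; the loop and ct_results list are illustrative only):

# Illustrative only: run the METAFlux steps separately for each cell type.
# Assumes compute_sc_flux accepts a single cell type (num_cell = 1, fraction = 1).
ct_results <- list()
for (ct in unique(as.character(seurat_obj$cell_type))) {
  keep_cells <- colnames(seurat_obj)[seurat_obj$cell_type == ct]
  sub_obj <- subset(seurat_obj, cells = keep_cells)
  sub_exp <- calculate_avg_exp(myseurat = sub_obj, myident = "cell_type",
                               n_bootstrap = 1, seed = 1)
  sub_scores <- calculate_reaction_score(data = sub_exp)
  ct_results[[ct]] <- compute_sc_flux(num_cell = 1, fraction = 1,
                                      fluxscore = sub_scores, medium = cell_medium)
}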
