I have a python script that I am running on a cluster via SLURM. The submission script looks like this:
#!/bin/bash
#SBATCH -p standard
#SBATCH -A overall
#SBATCH --time=12:00:00
#SBATCH --output=normalize_%A.out
#SBATCH --error=normalize_%A.err
#SBATCH --nodes=1
#SBATCH --ntasks=1
#SBATCH --cpus-per-task=20
#SBATCH --mem=240000
HDF5_DIR=...
OUTPUT_DIR=...
NORM_SCRIPT=...
norm_func () {
    local file="$1"
    echo "$file"
    python "$NORM_SCRIPT" -data "$file" -path "$OUTPUT_DIR"
}

# Doing normalization in parallel, one background process per file
for file in "$HDF5_DIR"/*; do norm_func "$file" & done
wait
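Since the loop backgrounds every file at once, all of the python processes run concurrently inside the single 240GB allocation, so (as far as I understand) the peaks add up. A quick way to sanity-check the total input size would be something like this (a rough sketch; hdf5_dir is a placeholder for the elided HDF5_DIR above, and the on-disk sizes understate the in-memory matrices because HDF5 files are often compressed):

import os

hdf5_dir = "/path/to/hdf5_dir"  # placeholder for the elided HDF5_DIR above

sizes_gb = {
    name: os.path.getsize(os.path.join(hdf5_dir, name)) / 1e9
    for name in os.listdir(hdf5_dir)
}
for name, gb in sorted(sizes_gb.items(), key=lambda kv: kv[1], reverse=True):
    print(f"{name}: {gb:.1f} GB on disk")
print(f"total: {sum(sizes_gb.values()):.1f} GB across {len(sizes_gb)} files")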
The python script just loads a dataset (scRNAseq), normalizes it, and writes the results into .csv files. Some of the main lines of code in it are:
import csv

import h5py
import numpy as np

# gmn (the TMM normalization helper), split_code, cell_ids, gene_symbols and
# file_name are all defined elsewhere in the full script.
f = h5py.File(path_to_file, 'r')
rawcounts = np.array(rawcounts)  # read from f above (elided); forces a full in-memory copy
unique_code = np.unique(split_code)

for code in unique_code:
    mask = np.equal(split_code, code)
    curr_counts = rawcounts[:, mask]
    # Actual TMM normalization
    mtx_norm = gmn.tmm_normalization(curr_counts)
    # Writing the results into a .csv file
    csv_path = path_to_save + "/" + file_name + "_" + str(code) + ".csv"
    with open(csv_path, 'w', encoding='utf8', newline='') as csvfile:
        writer = csv.writer(csvfile, delimiter=',')
        writer.writerow([""] + list(cell_ids))
        for idx, row in enumerate(mtx_norm):
            writer.writerow([gene_symbols[idx]] + list(row))
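For completeness, path_to_file and path_to_save come from the -data and -path flags that the SLURM script passes in; the argument parsing is roughly this (paraphrased, the exact names in my script may differ):

import argparse

parser = argparse.ArgumentParser()
parser.add_argument('-data', required=True, help='input HDF5 file to normalize')
parser.add_argument('-path', required=True, help='output directory for the .csv files')
args = parser.parse_args()

path_to_file = args.data
path_to_save = args.path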
This works for smaller inputs, but for datasets above 10Gb the job fails with a "step memory exceeded" error. Is the problem in the .slurm script or in how the python code uses memory, and are there special memory-debugging approaches for a case like this? Any suggestions would be greatly appreciated.
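For reference, the kind of instrumentation I had in mind is something like the following (a minimal sketch using Python's standard resource module; log_peak is a name I made up, and ru_maxrss is the whole-process peak RSS, reported in kilobytes on Linux). Is this a reasonable way to narrow down where the peak happens, or are there better tools?

import resource

def log_peak(stage):
    # ru_maxrss = peak resident set size so far; Linux reports it in kilobytes
    peak_gb = resource.getrusage(resource.RUSAGE_SELF).ru_maxrss / 1e6
    print(f"[{stage}] peak RSS: {peak_gb:.2f} GB", flush=True)

# Sprinkled through the script, e.g.:
# log_peak("after h5py load")
# log_peak("after np.array copy")
# log_peak(f"after TMM for code {code}")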