Run GAPSTOP™#

Folder structure#

To use the provided tm_param.star without significant changes, you should have the following folder structure:

tm_tutorial/
├── microtubule/
│   ├── microtubule.em
│   ├── mask_microtubule.em
├── angles_5_c13.txt
├── wedge_list.star
├── tm_param.star
└── 126_b4.mrc

The only parameter that has to be changed is rootdir, which should contain a valid path to the tm_tutorial folder (including the tm_tutorial folder itself, e.g. /path/to/tm_tutorial).

Note that the folder can contain additional files, so you do not need to delete anything from it.

It is not necessary to create the output folder: if it does not exist prior to running the TM, it will be created on the fly.
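
If you want to verify the layout before running, a minimal shell check could look like the sketch below (adjust ROOTDIR to your own path; the file names are taken from the tree above):

ROOTDIR=/path/to/tm_tutorial  # the same path you put into the rootdir parameter
for f in microtubule/microtubule.em microtubule/mask_microtubule.em \
         angles_5_c13.txt wedge_list.star tm_param.star 126_b4.mrc; do
    [ -f "$ROOTDIR/$f" ] || echo "missing: $ROOTDIR/$f"
done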

Run TM#

Depending on your local installation, you might need to activate the environment where GAPSTOP™ is installed:

# assuming the environment is called gapstop
source /path/to/the/gapstop/bin/activate

Once you have all the inputs prepared and the paths set correctly, you can run the TM directly:

gapstop run_tm -n 8 tm_param.star

The -n parameter specifies the number of GPUs on which the TM should run.
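
Since -n should not exceed the number of available GPUs, it can help to first check what is visible on the machine (nvidia-smi ships with the NVIDIA driver; this is a generic check, not part of GAPSTOP™):

# list the GPUs visible on this machine; -n should not exceed this count
nvidia-smi -L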

Run TM on SLURM cluster#

Here is an example of a bash script to run GAPSTOP™ on a SLURM cluster. Please note that some parameters, such as --constraint and --gres, are specific to your SLURM cluster setup and you should adjust them accordingly.

Example of tm_submit.sh script#
#!/bin/bash -l
#SBATCH -o log_file
#SBATCH -e err_file
#SBATCH -D /path/to/working/directory/
#SBATCH -J tm_microtubule
#SBATCH --time=02:00:00
#SBATCH --nodes=2
#SBATCH --ntasks-per-node=4
#SBATCH --cpus-per-task=18
#SBATCH --constraint="gpu"
#SBATCH --gres=gpu:a100:4
#SBATCH --gpu-bind=verbose,per_task:1

# bind OpenMP threads to the CPU cores allocated to each task
export OMP_NUM_THREADS=$SLURM_CPUS_PER_TASK
export OMP_PLACES=cores

# activate the GAPSTOP™ environment, then run TM on 8 GPUs in total (2 nodes x 4 tasks)
source /path/to/the/gapstop/bin/activate
srun gapstop run_tm -n 8 tm_param.star

In this particular setup there are 4 GPUs per node and thus the total number of tasks is 8. This has to be adapted to your specific SLURM setup (e.g. if there are only 2 GPUs per node, --ntasks-per-node will be 2 and -n should be either 4, or the number of nodes needs to be increased to 4 by setting --nodes=4), as in the sketch below.
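
For example, a sketch of the adapted directives for a hypothetical cluster with 2 GPUs per node, keeping 8 tasks in total (the constraint/gres values are placeholders, as above):

#SBATCH --nodes=4
#SBATCH --ntasks-per-node=2
#SBATCH --gres=gpu:a100:2

srun gapstop run_tm -n 8 tm_param.star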

To submit such a script on a SLURM cluster, run:

chmod +x tm_submit.sh
sbatch tm_submit.sh

Results#

The results will be stored in the tm_outputs folder: scores_0_126.em, angles_0_126.em, and 0.log. See the results evaluation section (results_eval.html) to find out how to proceed with their analysis.
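
As a quick sanity check that the run finished and produced the expected files (a minimal sketch; the 126 in the file names is the tomogram number from this tutorial):

for f in scores_0_126.em angles_0_126.em 0.log; do
    [ -f "tm_outputs/$f" ] || echo "missing: tm_outputs/$f"
done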