Getting started with the I2BC cluster


Case study 1 - QC analysis with FastQC

Instructions: This exercise is divided into 7 very detailed steps. If you’re already quite comfortable with scripting and job schedulers, go directly to Exercise 3 (it covers the same steps but also introduces for loops and job arrays) and follow up with Exercise 4, which will introduce you to using conda to install programmes on the cluster.

 

Context: We just sequenced the RNA of a sample. The sequencing platform gave us the raw output of the sequencing run in fastq format. We would like to have a first overview of the quality of the sequencing performed. In the following example, we will run the FastQC programme on this sequencing output. The FastQC programme analyses fastq files and outputs a quality control report in html format (more information on FastQC). It’s a small programme that doesn’t require a lot of resources and it’s already installed on the I2BC cluster.

 

Note: Files that go with the examples mentioned in this training session are in https://forge.i2bc.paris-saclay.fr/redmine/projects/partage-bioinfo/repository/cluster_usage_examples. You can access them by cloning the repository: https://forge.i2bc.paris-saclay.fr/git/partage-bioinfo/cluster_usage_examples.git

If you haven’t already got a session open on the Frontale (the master node) of the cluster, please do so as the rest of the steps are performed on the cluster. If you don’t know how to connect, don’t hesitate to refer to the previous section.

We will work in your home directory (see this page for more information on the file spaces accessible from the cluster). Let’s move to it and fetch our working files:

john.doe@cluster-i2bc:~$ cd /home/john.doe
john.doe@cluster-i2bc:/home/john.doe$ wget "https://zenodo.org/record/8340293/files/cluster_usage_examples.tar.gz?download=1" -O cluster_usage_examples.tar.gz
john.doe@cluster-i2bc:/home/john.doe$ tar -zxf cluster_usage_examples.tar.gz
john.doe@cluster-i2bc:/home/john.doe$ ls cluster_usage_examples/
example_fastqc  example_mafft  example_tmalign
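
Of note: instead of downloading the archive from Zenodo, you could also clone the forge repository mentioned in the note above (assuming git is available on the Frontale and that you have access to the forge); the rest of the exercise is the same either way:

john.doe@cluster-i2bc:/home/john.doe$ git clone https://forge.i2bc.paris-saclay.fr/git/partage-bioinfo/cluster_usage_examples.git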

In the example_fastqc folder, you’ll see a sequencing output in fastq format (the .gz extension indicates that it’s also compressed) on which we’ll run the FastQC programme.

john.doe@cluster-i2bc:/home/john.doe$ ls cluster_usage_examples/example_fastqc/
head1000_SRR9732589_1.fastq.gz
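
If you’re curious, you can peek at the file directly from the Frontale without decompressing it (zcat and head are standard tools; in fastq format each read takes up 4 lines, so dividing the total line count by 4 gives the number of reads):

john.doe@cluster-i2bc:/home/john.doe$ zcat cluster_usage_examples/example_fastqc/head1000_SRR9732589_1.fastq.gz | head -n 4
john.doe@cluster-i2bc:/home/john.doe$ zcat cluster_usage_examples/example_fastqc/head1000_SRR9732589_1.fastq.gz | wc -l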

The FastQC executable is called fastqc; let’s see if we can find it among the available modules:

john.doe@cluster-i2bc:/home/john.doe$ module avail -C fastqc -i
------------------------------------- /usr/share/modules/modulefiles --------------------------------------
fastqc/fastqc_v0.10.1 fastqc/fastqc_v0.11.5 singularity/fastqc 

So all we have to do is use: module load fastqc/fastqc_v0.11.5 in order to load FastQC.
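
A few other module commands can also come in handy (these are standard Environment Modules commands; the exact behaviour may depend on the version installed on the cluster):

john.doe@cluster-i2bc:/home/john.doe$ module list                          # list the modules currently loaded
john.doe@cluster-i2bc:/home/john.doe$ module unload fastqc/fastqc_v0.11.5  # unload a specific module
john.doe@cluster-i2bc:/home/john.doe$ module purge                         # unload all loaded modules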

Let’s investigate how to use the fastqc executable: How do we specify the inputs? What options or parameters can we use? What would the final command line look like to run fastqc on your input?

 

You can have a look in the documentation or you can experiment with the executable in an interactive session on one of the nodes:

john.doe@cluster-i2bc:/home/john.doe$ qsub -I
qsub: waiting for job 287169.pbsserver to start
qsub: job 287169.pbsserver ready

john.doe@node01:/home/john.doe$ module load fastqc/fastqc_v0.11.5

Of note:

    • with the qsub command, you are actually running a job on the cluster with a job identifier.
    • all jobs are dispatched to one of the available nodes of the cluster – in this case, we’re using node01 (NB: cluster-i2bc is the name of the Frontale). 
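
You can also request specific resources for an interactive session, using the same -l syntax as for batch jobs (the values below are only an illustration; adapt them to your needs):

john.doe@cluster-i2bc:/home/john.doe$ qsub -I -q common -l ncpus=1 -l mem=2gb -l walltime=01:00:00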

Most programmes come with help or usage messages that you can print on the screen using “man your_programme” or “your_programme --help”. Let’s see if we can access the help menu for fastqc:

john.doe@node01:/home/john.doe$ fastqc --help

            FastQC - A high throughput sequence QC analysis tool

SYNOPSIS

                    fastqc seqfile1 seqfile2 .. seqfileN

            fastqc [-o output dir] [--(no)extract] [-f fastq|bam|sam] 
                  [-c contaminant file] seqfile1 .. seqfileN

                                   [...]

So the basic usage of fastqc in our case would look like this (the fastqc executable followed by the path to the fastq file):

john.doe@node01:/home/john.doe$ fastqc /home/john.doe/cluster_usage_examples/example_fastqc/head1000_SRR9732589_1.fastq.gz

If we would like to specify the output folder, we can use the -o option but the folder you specify has to exist, for example:

john.doe@node01:/home/john.doe$ mkdir -p /home/john.doe/cluster_usage_examples/example_fastqc/results
john.doe@node01:/home/john.doe$ fastqc -o /home/john.doe/cluster_usage_examples/example_fastqc/results /home/john.doe/cluster_usage_examples/example_fastqc/head1000_SRR9732589_1.fastq.gz
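
FastQC has a few other options that may be useful; the ones below are taken from the help message of version 0.11.5 (check fastqc --help on your version before relying on them). For example, --noextract keeps only the zip archive of the report instead of also unpacking it:

john.doe@node01:/home/john.doe$ fastqc --noextract -o /home/john.doe/cluster_usage_examples/example_fastqc/results /home/john.doe/cluster_usage_examples/example_fastqc/head1000_SRR9732589_1.fastq.gz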

At this point, we know everything we need to know about fastqc and how to run it. We no longer need to be connected to a node and can free the resources we’ve been holding by disconnecting from it:

john.doe@node01:/home/john.doe$ logout

qsub: job 287169.pbsserver completed
john.doe@cluster-i2bc:/home/john.doe$ 

Of note: as you can see, the terminal prompt prefix changed again from node01 back to cluster-i2bc: we’ve returned to the Frontale of the cluster and the job we were running has terminated.

It’s best to write the submission script in your example_fastqc subdirectory.

 

Let’s move to the example_fastqc subdirectory first:

john.doe@cluster-i2bc:/home/john.doe$ ls cluster_usage_examples/
example_fastqc  example_mafft  example_tmalign
john.doe@cluster-i2bc:/home/john.doe$ cd cluster_usage_examples/example_fastqc/

Now let’s write a script called pbs_script.sh in there.

 

In this example, we will write pbs_script.sh using the command-line text editor nano (but there are other possibilities such as vi, vim or emacs):

john.doe@cluster-i2bc:/home/john.doe/cluster_usage_examples/example_fastqc$ nano pbs_script.sh

This will create a file called pbs_script.sh in your current directory and open an editing “window” inside your terminal that looks like the screenshot below.



Screenshot of the nano editor


About the nano text editor:
It runs entirely in the terminal: you navigate with the arrow keys and a number of features (e.g. copy-paste, search, etc.) are accessible through keyboard shortcuts (also listed at the bottom of the screen, where ^ stands for the Ctrl key).
The main shortcuts are: Ctrl+O to save (“write out”, ^O) and Ctrl+X to exit (^X); recent versions of nano also accept Ctrl+S to save.
See the nano cheat sheet and tutorial for more information and shortcuts.
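
As a side note, you don’t have to use an editor at all: a bash “here document” writes the lines between the EOF markers verbatim into the file. This is just an alternative way of creating pbs_script.sh (the full script content is given below):

john.doe@cluster-i2bc:/home/john.doe/cluster_usage_examples/example_fastqc$ cat > pbs_script.sh << 'EOF'
#! /bin/bash
#PBS -N my_jobname
# ... rest of the script (see the full example below) ...
EOF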



The PBS submission script is written like a common bash script (the same language as the terminal). Write in this script all the commands (one per line) that you would usually type in your terminal. The only particularity is the PBS submission options, which you can add directly to this script, usually at the beginning (each option should be preceded by #PBS as in the example below).

#! /bin/bash

#PBS -N my_jobname     
#PBS -q common        
#PBS -l ncpus=1       

module load fastqc/fastqc_v0.11.5
# This is a comment line - it will be ignored when the script is executed

cd /home/john.doe/cluster_usage_examples/example_fastqc/  #|
                                                           #| These are your shell commands
fastqc head1000_SRR9732589_1.fastq.gz                      #|

Explanation of the content:

  • #! /bin/bash: this is the “shebang”, it specifies the “language” of your script (in this case, the cluster understands that the syntax of this text file is “bash” and will execute it with the /bin/bash executable).
  • #PBS : All lines starting with #PBS tell the PBS job scheduler on the cluster that what follows is information related to the job submission. This is where you specify the qsub options such as your job name with -N or the queue you want to submit your job to with -q. There are many more options you can specify (a few common ones are sketched after this list); see the intranet “Cheat sheet” tab.
  • module load will load the software you need (i.e. FastQC in this case)
  • cd /path/to/your/folder: by default, when you connect to the Frontale or the nodes, you land on your “home” directory (/home/john.doe). By moving to the directory which contains your input, you won’t need to specify the full path to the input, as you can see in the line of code that follows this statement.
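
For illustration, here are a few more #PBS directives you are likely to come across (these are standard qsub options, but refer to the intranet cheat sheet for what is supported on the I2BC cluster; the log file names and email address are just placeholders):

#PBS -l mem=2gb                 # amount of memory to reserve
#PBS -l walltime=02:00:00       # maximum run time (hh:mm:ss)
#PBS -o my_output.log           # write the output log to a chosen file
#PBS -e my_error.log            # write the error log to a chosen file
#PBS -j oe                      # or merge both logs into a single file
#PBS -M john.doe@example.org    # email address for notifications
#PBS -m ae                      # send an email when the job aborts (a) or ends (e)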

When you exit the nano text editor, you should see the file created in your current directory:

john.doe@cluster-i2bc:/home/john.doe/cluster_usage_examples/example_fastqc$ ls
head1000_SRR9732589_1.fastq.gz   pbs_script.sh

To submit a PBS submission script, all you have to do is:

john.doe@cluster-i2bc:/home/john.doe/cluster_usage_examples/example_fastqc$ qsub pbs_script.sh
287170.pbsserver

This will print your attributed job id on the screen (287170 in this case).

You can follow the progression of your job with qstat:

john.doe@cluster-i2bc:/home/john.doe/cluster_usage_examples/example_fastqc$ qstat 287170.pbsserver
Job id           Name             User             Time Use S Queue
---------------- ---------------- ---------------- -------- - -----
287170.pbsserver my_jobname       john.doe         00:00:05 R common

Or to see all your jobs running and details on resources:

john.doe@cluster-i2bc:/home/john.doe/cluster_usage_examples/example_fastqc$ qstat -u john.doe -w
Job ID                         Username        Queue           Jobname         SessID   NDS  TSK   Memory Time  S Time
------------------------------ --------------- --------------- --------------- -------- ---- ----- ------ ----- - -----
287170.pbsserver               john.doe        common          my_jobname      3090738   1   1     2gb    02:00 R 00:05
287172.pbsserver               john.doe        common          my_job2         3090739   1   1     2gb    02:00 R 00:01

You can learn more about the options for qstat on the SICS website or in the manual (type man qstat, navigate with the up/down arrow keys and exit by typing q).
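
If you realise you’ve made a mistake after submitting (wrong input, wrong options, etc.), you can cancel a job with qdel followed by its job identifier:

john.doe@cluster-i2bc:/home/john.doe/cluster_usage_examples/example_fastqc$ qdel 287170.pbsserver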

What files do we expect to see? There should be 4 new files in total:

  • FastQC should generate two files: an html file with the visual summary of the quality assessment of your fastq file and a zip archive that contains the individual png images and result files.
  • the PBS scheduler should also generate two files with your job name as prefix and the job identifier as suffix: one for the error log, the other for the output log (what would normally be printed on the screen is captured by PBS into these two separate files instead).

Your job shouldn’t take long to finish; once it’s done, you should be able to see the output files in your folder:

john.doe@cluster-i2bc:/home/john.doe/cluster_usage_examples/example_fastqc$ ls
head1000_SRR9732589_1.fastq.gz   head1000_SRR9732589_1_fastqc.zip   head1000_SRR9732589_1_fastqc.html
my_jobname.e287170               my_jobname.o287170                 pbs_script.sh

You will see the output files generated by fastqc but also the log files generated by the PBS job scheduler, to which the output and error messages that are normally printed on the screen are written (e=error, o=output).

Having issues? If you don’t have the output files, then there might be a problem in the execution somewhere. In that case, you can have a look at the two log files from PBS, especially the error file (*.e*):

john.doe@cluster-i2bc:/home/john.doe/cluster_usage_examples/example_fastqc$ cat my_jobname.e287170

Note that both log files are generated by default in the directory in which you ran the qsub command. There are options in qsub with which you can change this behaviour.

Typical error messages are for example:
  • -bash: fastqc: command not found:
    This is typical for commands that bash doesn’t know or cannot find. In this case, it’s probably because the fastqc module wasn’t loaded in the script (module load fastqc/fastqc_v0.11.5) and the full path to the fastqc executable (e.g. /opt/fastqc_v0.11.5/fastqc) wasn’t specified either.
  • Specified output directory 'nonexistantdir' does not exist:
    As stated, fastqc cannot find the output directory that was specified. It might be because you have to create it first or because it couldn’t find it in the working directory (make sure to specify the full path to that folder and check that you don’t have any typos)
  • Skipping 'nonexistant.fastq' which didn't exist, or couldn't be read:
    As stated, fastqc cannot find the input that you gave it. It could be linked to a typo in the name or could be because fastqc didn’t find it in your current working directory (keep in mind that by default, when you connect to the Frontale and the nodes, you land on your home directory and fastqc won’t find your inputs unless you move to the right directory or specify the full path to those files).
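
A quick way to catch some of these problems before submitting is to check the script and the paths it references directly from the Frontale (bash -n only checks the syntax of the script, it doesn’t run the commands):

john.doe@cluster-i2bc:/home/john.doe/cluster_usage_examples/example_fastqc$ bash -n pbs_script.sh
john.doe@cluster-i2bc:/home/john.doe/cluster_usage_examples/example_fastqc$ ls /home/john.doe/cluster_usage_examples/example_fastqc/head1000_SRR9732589_1.fastq.gz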

Analyse your actual resource consumption: How much memory did you effectively use while running the job? How long did your job take to finish? How much CPU percentage did you use?


This is useful to know in order to adapt the resources you ask for in future jobs with similar workloads (e.g. other FastQC submissions). To see how many resources your job used, you can use qshow -j MY_JOB_ID or qstat -fxw -G MY_JOB_ID (both commands are equivalent):

john.doe@cluster-i2bc:/home/john.doe/cluster_usage_examples/example_fastqc$ qshow -j 287170
Job Id: 287170.pbsserver
    Job_Name = my_jobname
    Job_Owner = john.doe@master.example.org
    resources_used.cpupercent = 16
    resources_used.cput = 00:00:04
    resources_used.mem = 79684kb
    resources_used.ncpus = 1
    resources_used.vmem = 2630856kb
    resources_used.walltime = 00:00:06
    job_state = F
    queue = common
    [...]
    Resource_List.mem = 2gb
    Resource_List.ncpus = 1
    Resource_List.nodect = 1
    Resource_List.place = pack
    Resource_List.preempt_targets = QUEUE=lowprio
    Resource_List.select = 1:mem=2gb:ncpus=1
    Resource_List.walltime = 02:00:00
    [...]

Answer?

  • Memory: It’s the amount of RAM allocated to the job.
    We reserved 2 GB by default (Resource_List.mem) but only used about 80 MB (resources_used.mem).
    For next time, we could consider asking for less memory, for example 1 GB instead of the 2 GB, with -l mem=1gb. This will leave more memory available for others on the cluster.
  • CPU percentage: It reflects how much of the CPU you used during your job (resources_used.cpupercent). For 1 CPU reserved, cpupercent can go from 0% (sub-optimal use) to 100% (optimal use). For N cpus, it can go up to N x 100%, if all CPUs are working full time. It’s an approximate measure of how efficiently the tasks are distributed over the CPUs.
    In our case, we only used 16% of the allocated CPU but we can’t ask for less than 1 CPU so there’s nothing to be done.
  • Wall time: it’s the maximum computation time given to a job. Beyond this time, your job will be killed, whatever its state.
    We reserved 2 hrs (Resource_List.walltime) but the job only took 6 seconds (resources_used.walltime).
    For next time, knowing that fastqc is very fast, we could put a wall time of 10 minutes for example with -l walltime=00:10:00.

Our adjusted job script pbs_script.sh could then look like this:

#! /bin/bash

#PBS -N my_jobname         
#PBS -q common              
#PBS -l ncpus=1            
#PBS -l mem=1gb            
#PBS -l walltime=00:10:00  

module load fastqc/fastqc_v0.11.5
# This is a comment line - it will be ignored when the script is executed

cd /home/john.doe/cluster_usage_examples/example_fastqc/  #|
                                                           #| These are your shell commands
fastqc head1000_SRR9732589_1.fastq.gz                      #|
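
Once adjusted, you can resubmit the script with qsub as before and check the resources_used fields again with qshow to make sure the new limits are still comfortable (replace NEW_JOB_ID with the identifier printed by qsub):

john.doe@cluster-i2bc:/home/john.doe/cluster_usage_examples/example_fastqc$ qsub pbs_script.sh
john.doe@cluster-i2bc:/home/john.doe/cluster_usage_examples/example_fastqc$ qshow -j NEW_JOB_ID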