What do the different colors of files mean in Linux

When you list files using the 'ls' command in a Linux terminal, you may see different colors for different files. Each color represents a different file type, as follows. You can change the colors by editing the ~/.bashrc file, which will be discussed in detail.
  • Blue: Directory
  • Green: Executable or recognized data file
  • Sky Blue: Symbolic link file
  • Yellow with black background: Device
  • Pink: Graphic image file
  • Red: Archive file
  • Red with black background: Broken link
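These colors come from the LS_COLORS environment variable, which ~/.bashrc can override. A minimal sketch (the two-digit codes are standard ANSI attributes; the specific values below are examples, not your distribution's defaults):

```shell
# Override ls colors (put this in ~/.bashrc).
# di = directory, ln = symbolic link, ex = executable;
# "01;34" means bold blue, "01;36" bold cyan, "01;32" bold green.
export LS_COLORS="di=01;34:ln=01;36:ex=01;32"

# Inspect the current settings, one entry per line:
echo "$LS_COLORS" | tr ':' '\n'
```

Run `dircolors -p` to see the full default database your distribution ships.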

For more details, see this post on the Stack Exchange site Unix & Linux.

Intel MKL Libraries (Some Notes)



Most cluster architectures are Intel based, and the Intel compiler (ifort) is present on such machines. The Intel MKL library, which stands for Math Kernel Library, is a numerical library that is highly useful for scientists and engineers.


How to check the Intel architecture (32-bit or 64-bit processor)?

Move to $MKL_ROOT/lib/
($MKL_ROOT is usually /opt/lib/intel/MKL/, but the exact path varies between installations.)

If you see an ia32_lin directory, it is a 32-bit installation.
If you see an intel64_lin directory, it is a 64-bit installation.

IA-32 or compatible:      <mkl directory>/lib/ia32_lin
Intel® 64 or compatible:  <mkl directory>/lib/intel64_lin
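The check above can be sketched as a small script. The demo path /tmp/mkl_demo and its layout are hypothetical; point the function at your real MKL root instead:

```shell
# Report which MKL architecture directories exist under a given MKL root.
check_mkl_arch() {
    root="$1"
    for arch in ia32_lin intel64_lin; do
        if [ -d "$root/lib/$arch" ]; then
            echo "$arch"
        fi
    done
}

# Hypothetical 64-bit layout, created just for illustration:
mkdir -p /tmp/mkl_demo/lib/intel64_lin
check_mkl_arch /tmp/mkl_demo    # prints: intel64_lin
```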

What is the path of the library files?

To see it, just type

which ifort    (or the name of your compiler)

and the path will be displayed, for example:

/clusterName/intel/Compiler/mkl/..../ifort

Here,

/clusterName/intel/Compiler/mkl/ is usually set as the $MKL_ROOT directory,

so echo $MKL_ROOT will display the path to that directory.
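The same which-plus-dirname idea in script form. sh is used below as a stand-in binary, since ifort may not be installed on the machine where you try this:

```shell
# command -v (like which) prints the full path of a binary;
# dirname then strips the file name, leaving the directory that contains it.
p=$(command -v sh)
echo "$p"         # e.g. /bin/sh or /usr/bin/sh
dirname "$p"      # e.g. /bin
```

Replace sh with ifort on a cluster to find the compiler's install directory.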

It is very important to link these libraries when necessary (while compiling codes).
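As a sketch, a typical link line for a Fortran code against 64-bit sequential MKL looks like the following. The program name myprog.f90 is hypothetical, and the exact library set depends on your MKL version and threading choice, so check Intel's MKL Link Line Advisor for your setup:

```shell
# Hypothetical source file; library names follow Intel's lp64/sequential scheme.
ifort myprog.f90 -o myprog \
    -L"$MKL_ROOT/lib/intel64" \
    -lmkl_intel_lp64 -lmkl_sequential -lmkl_core -lpthread -lm
```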

The Intel MKL library also has its own FFTW3 interface, which resides inside the */mkl/fftw/ directory.

This directory contains fftw3.f, which is necessary for compiling codes like VASP.

VASP also needs library files such as libfftw3xf_intel.a. By default, these are not compiled. The (uncompiled) interface sources are present inside the /mkl/interfaces/ directory. If you want libfftw3xf_intel.a, go to the /mkl/interfaces/fftw3xf/ directory and build it using the makefile present there.
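A sketch of that build step. The make target name varies between MKL versions, so run make with no arguments first to list the valid targets for your version:

```shell
cd "$MKL_ROOT/interfaces/fftw3xf"
make libintel64    # builds libfftw3xf_intel.a for the Intel 64 architecture
```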




To run jobs on a specific node in an HPC cluster

Sometimes you may want to run a job on a specific node, for example to check whether all the nodes are working properly after restarting the cluster.

1. Name the node in a #PBS resource request (see the example further below).

The simplest way to test all nodes is to run as many jobs as there are nodes at the same time (which can take some time to complete) and use

qstat -n 

to see whether all the nodes are used for the calculation.

To see all the nodes in a cluster, use
pbsnodes -a

cluster
     Mom = headnodename.companyname
     ntype = PBS
     state = free
     pcpus = 24
     resources_available.arch = linux
     resources_available.host = cluster
     resources_available.mem = 264417884kb
     resources_available.ncpus = 24
     resources_available.vnode = cluster
     resources_assigned.accelerator_memory = 0kb
     resources_assigned.mem = 0kb
     resources_assigned.naccelerators = 0
     resources_assigned.ncpus = 0
     resources_assigned.netwins = 0
     resources_assigned.vmem = 0kb
     resv_enable = True
     sharing = default_shared

cn1
     Mom = cn1.aracluster
     ntype = PBS
     state = free
     pcpus = 24
     resources_available.arch = linux
     resources_available.host = cn1
     resources_available.mem = 264424324kb
     resources_available.ncpus = 24
     resources_available.vnode = cn1
     resources_assigned.accelerator_memory = 0kb
     resources_assigned.mem = 0kb
     resources_assigned.naccelerators = 0
     resources_assigned.ncpus = 0
     resources_assigned.netwins = 0
     resources_assigned.vmem = 0kb
     resv_enable = True
     sharing = default_shared

cn2
     Mom = cn2.aracluster
     ntype = PBS
     state = free
     pcpus = 24
     resources_available.arch = linux
     resources_available.host = cn2
     resources_available.mem = 264424336kb
     resources_available.ncpus = 24
     resources_available.vnode = cn2
     resources_assigned.accelerator_memory = 0kb
     resources_assigned.mem = 0kb
     resources_assigned.naccelerators = 0
     resources_assigned.ncpus = 0
     resources_assigned.netwins = 0
     resources_assigned.vmem = 0kb
     resv_enable = True
     sharing = default_shared

Here, cn1 and cn2 are the node names. The first one, 'cluster', is the name of the head node.

Mention these names in the #PBS option. For example,

#PBS -l nodes=cn1:ppn=4

This requests 4 CPUs on node cn1 (note the colon separator between the node name and the CPU count).
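Putting it together, a minimal job script pinned to one node might look like this. The node request shown is Torque-style; on PBS Pro the equivalent line is #PBS -l select=1:ncpus=4:host=cn1, and the job name and walltime below are arbitrary examples:

```shell
#!/bin/bash
#PBS -N node_test
#PBS -l nodes=cn1:ppn=4
#PBS -l walltime=00:10:00

cd "$PBS_O_WORKDIR"
hostname    # confirms which node the job actually ran on
```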

=======================================================
You can use

cat /etc/hosts

to display the node names.



How to find the number of nodes and the names of those nodes in an HPC cluster

To display the nodes, use

cat $PBS_NODEFILE

This lists the nodes assigned to the current job, one line per allocated CPU. Note that $PBS_NODEFILE is only set inside a running job.

Or use

cat /etc/hosts

This will display all the nodes present in the cluster.
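As a sketch, this is how a script can count the distinct nodes a job received. The file /tmp/nodefile_demo below is a faked stand-in for $PBS_NODEFILE, which only exists inside a running job:

```shell
# $PBS_NODEFILE lists one line per allocated CPU, so node names repeat;
# sort -u collapses the repeats before counting.
printf 'cn1\ncn1\ncn2\ncn2\n' > /tmp/nodefile_demo
sort -u /tmp/nodefile_demo | wc -l    # prints: 2
```

Inside a real job, replace /tmp/nodefile_demo with "$PBS_NODEFILE".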

Important things to know before you work on a High Performance Computing (HPC) cluster: For Beginners


  1. Why HPC is important and where is it used?
  2. For login, use ssh; use the Bitvise SSH client from Windows
  3. See the default shell: echo $0 or echo $SHELL
  4. Which scheduling software is used by the cluster?
  5. Changing user privileges (chmod)
  6. Copying files or directories from a cluster to local and vice versa
  7. List all the compilers in the cluster
  8. List the number of nodes in the cluster
  9. To see the OS, type, bit, etc.
  10. List the active and dead nodes
  11. Submit jobs to the PBS queue (qsub)
  12. How to submit multiple jobs using qsub
  13. See the status of a submitted job (qstat)
  14. Terminating a job (qdel)
  15. Run a job on a specific node
  16. What is meant by compiling serially
  17. What is meant by compiling in parallel
  18. Compiling C, C++ and Fortran code using GNU/Intel compilers
  19. Using make: configure, make and make install
  20. List of commands needed to use the scheduler
  21. How to check the size of files/directories
  22. How to check disk space used and free
  23. Working with the BLAS library
  24. Working with the LAPACK library
  25. Working with OpenMP
  26. Working with Intel Fortran + MKL libraries
  27. Working with Intel Fortran Compiler Cluster edition
  28. Working with the FFTW library
  29. Fortran usage
  30. C usage
  31. C++ usage
  32. Python usage
  33. Using the watch command to see changes on the screen
  34. List of scripts to work with HPC, cluster, supercomputer
Note: Add your suggestions for what you would like to see here. We will add them in future.

Check the size of directories and files (and see the largest in terms of disk usage)

The du command (disk usage) can be used to check the space taken by files and directories.

To see the disk usage:
du -sh *

To see the disk usage, sorted (note sort -h, which understands human-readable sizes like K/M/G):
du -sh * | sort -hr

To see the 10 largest entries:
du -sh * | sort -hr | head -n10

To see the 10 smallest entries:
du -sh * | sort -hr | tail -n10

Explanation:

du -s *: summarizes the disk usage of each file and directory
sort -hr: sorts human-readable sizes, largest first
head -n10: displays the first 10 results
tail -n10: displays the last 10 results
-h: human-readable output

To list the top 10 largest entries under the current directory (recursively), use:
du . | sort -nr | head -n10
du -h . | sort -hr | head -n10    (human-readable output)
Plain sort -nr works in the first form because du without -h prints sizes as plain numbers; with -h you need sort -hr.


To check the disk usage while controlling the number of recursive directory levels:

du -h --max-depth=0 | sort -hr   (only current directory)

du -h --max-depth=1 | sort -hr   (Current directory and one directory depth)
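Why sort -h rather than sort -n for human-readable sizes: plain numeric sort ignores the K/M/G suffixes and mis-orders the entries. A self-contained demo with made-up sizes:

```shell
# sort -h understands human-readable suffixes, so 2G > 1M > 512K > 9K:
printf '1M\n512K\n9K\n2G\n' | sort -hr
# prints: 2G, 1M, 512K, 9K (one per line)

# Plain numeric sort compares only the leading digits,
# so 9K incorrectly sorts above 2G:
printf '1M\n512K\n9K\n2G\n' | sort -nr
```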


Check free memory on a cluster/supercomputer/Linux system


Try:

free -g    (to display the amount of free memory in GB)

You will get output like this:

              total        used        free      shared  buff/cache   available
Mem:              7           6           0           0           1           0
Swap:             7           2           5
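In scripts, awk can pull a single column out of this output. A sketch using a canned sample line so it is self-contained (the numbers are hypothetical; in practice you would pipe free -m into awk):

```shell
# Field 7 of the "Mem:" row is the "available" column.
echo "Mem:  7976  6120  320  80  1536  1410" | awk '/^Mem:/ {print $7}'
# prints: 1410
```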

Options:
 -b, --bytes         show output in bytes
 -k, --kilo          show output in kilobytes
 -m, --mega          show output in megabytes
 -g, --giga          show output in gigabytes
     --tera          show output in terabytes
 -h, --human         show human-readable output
     --si            use powers of 1000, not 1024
 -l, --lohi          show detailed low and high memory statistics
 -t, --total         show total for RAM + swap
 -s N, --seconds N   repeat printing every N seconds
 -c N, --count N     repeat printing N times, then exit
 -w, --wide          wide output

     --help          display this help and exit
 -V, --version       output version information and exit

This is part of a series of posts on "High Performance Computing (HPC): Everything you need to know before working in a Linux Cluster Environment". To see more posts in this series, click here: Important Things to Know to Work in a Linux Cluster

Linux tips: Run new command with previous arguments


Tip 2:

Type history.

All of your recent commands will be listed.

12 ls
13 sudo make && make install
14 cd ~/myapps/

Here, if you want to re-run the 13th command (i.e., sudo make && make install),

use:
!13

This will run the 13th command.


Tip 3:

Use Ctrl+R to reverse-search the history, then type any part of the command.
For example, in the above case, if you press Ctrl+R and then type sudo, the command "sudo make && make install" will be displayed.
If you want to run that command, press Enter.
Otherwise, press Ctrl+R again until you get the relevant command.
If you don't want to run anything, press Ctrl+C.



To reuse the previous command's argument (when only one argument is present), use

$_ or !$

For example, after vi ~/home/application/readme,
if you want to cat the same file,
you can use
cat !$
When there are many arguments, you can handle them as follows. After

ls file1.txt file2.txt

!:1 expands to the first argument, giving

file1.txt

so, for example, cat !:1 runs cat file1.txt.
To use both arguments:
!:1-2
This expands to both files, file1.txt and file2.txt.
You can pick any range of arguments by giving their positions, as follows:
!:20-34
!:1-3
To run a new command with all of the previous arguments:
!:^-$    (or, equivalently, !*)

Extract only the number from a file name

Sometimes you may want to extract only the number from a file name, especially when you are working with a huge number of files to run calculations or jobs.

This can be done by (for foo_bar_xyz_file_01_input.in):
echo $f | cut -d_ -f5
Here, $f is the file name,
-d_ sets '_' as the delimiter, and
-f5 selects the fifth field, which is where the number is present.
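The same extraction in a batch loop over several file names (the names below are hypothetical examples):

```shell
# Extract field 5 (the number) from each file name in turn.
for f in foo_bar_xyz_file_01_input.in foo_bar_xyz_file_02_input.in; do
    n=$(echo "$f" | cut -d_ -f5)
    echo "$n"
done
# prints: 01, then 02
```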

Check compiler versions

For most compilers and applications, there is a flag to check the version: Intel compilers use "-V", while GNU tools generally use "--version".

ifort -V



Linux Tips - Reverse search (Ctrl+R)

To reverse-search the history of commands you have used, press Ctrl+R and then type any part of the command. For example,

configure && make && make install

If you want to execute this command again (after several other commands),

just press Ctrl+R and type make.

If it is not displayed, press Ctrl+R again.

Once your command is displayed, press Enter.

To cancel the suggested command, press Ctrl+C (as usual in a Linux terminal).

Suppose you run the command history and want a slight modification to the 250th command before running it. Use this format:

!250:p

This prints the 250th command without executing it. Then, using the up arrow key, you can edit the command and re-run it.
