Appendix I Building FHI-aims with a make.sys

This section contains a quick and practical explanation of the main steps of building FHI-aims using a make.sys file. This is how FHI-aims used to be compiled in the past and is kept here for legacy reasons. Building via CMake is now firmly recommended as the first choice, and it is not guaranteed that building via make.sys will continue to work.

1. In the src directory, create a file called make.sys and open it with a text editor. If you choose to use and edit the make.sys file (which is recommended), make sure you do not edit the file called Makefile as provided with the original distribution of FHI-aims.

2. In order to build FHI-aims, you will need to tell the computer which compilers, libraries, optimization flags and optional parts of the build process you intend to use. This is the purpose of make.sys. Here we cover only the most important keywords (variables) to be included in make.sys. Many more are available, often documented in the Makefile itself or, if nowhere else, in the more detailed Makefile.backend, which controls the individual pieces of the build process. Note that the syntax of make.sys, particularly the spaces around the “=” signs, is important, since this file will be included in the Makefile and must be readable by the make command invoked further below.

3. The following is what a typical make.sys file could look like (see the https://aims-git.rz-berlin.mpg.de wiki for examples for specific platforms). An explanation of all keywords follows below. Note that this is a copy of the make.sys on the author’s (VB’s) laptop. You will need to edit every single variable – the directories to be used on other computers will be different. Blindly copying this file and hoping for the best will not work.

 FC = ifort
 FFLAGS = -O3 -ip -fp-model precise -module $(MODDIR)
 FMINFLAGS = -O0  -fp-model precise -module $(MODDIR)
 F90MINFLAGS = -O0  -fp-model precise -module $(MODDIR)
 F90FLAGS = $(FFLAGS)
 ARCHITECTURE = Generic
 LAPACKBLAS = -L/opt/intel/mkl/lib -I/opt/intel/mkl/include \
              -lmkl_intel_lp64 -lmkl_sequential -lmkl_core
 USE_MPI = yes
 MPIFC = mpif90
 SCALAPACK = /usr/local/scalapack-2.0.2/libscalapack.a
 CC = gcc
 CCFLAGS =
 USE_LIBXC = yes

4. Here is a list of each of these keywords’ meanings:

  • FC : The name of the Fortran compiler you intend to use. This choice matters: on x86 platforms, Intel Fortran usually produces fast code, whereas other compilers (unfortunately, particularly free compilers such as gfortran) can lead to significantly slower (factor 2–3) FHI-aims runs later.
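As a quick sanity check before editing make.sys, you can verify which Fortran compilers are actually installed. This is only a sketch: ifort, gfortran and flang are common compiler names, not a complete list, and the names on your system may differ.

```shell
# Check which common Fortran compilers are on the PATH (sketch;
# the compiler names here are typical defaults, not exhaustive).
for fc in ifort gfortran flang; do
  if command -v "$fc" >/dev/null 2>&1; then
    echo "$fc: available"
  else
    echo "$fc: not found"
  fi
done
```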

  • FFLAGS : These are compile-time and link-time flags that control the optimization level the compiler will use. Finding out which optimization level is fastest is worth your time, but note that real-world compilers can have bugs. In the worst case, this can mean numerically wrong results, something you should definitely care about. One way to test the broader correctness of a given FHI-aims build (later) is to run FHI-aims’ regression tests on the computer you intend to use and make sure that all results are marked as correct. For example, for Intel Fortran, -fp-model precise is highly recommended. Unfortunately, we have no way to foresee all possible compiler bugs across all future platforms and compilers – testing is best. Please ask if needed (see Sec. 1.7 for where to find help).
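For compilers other than Intel Fortran, the flag names differ. As a hedged illustration only (flags taken from common gfortran usage, not from the FHI-aims distribution), a gfortran-based make.sys might begin like this; note in particular that gfortran writes module files via -J rather than -module:

```makefile
# Sketch for gfortran (assumed flags; verify against your compiler's
# documentation). -ffree-line-length-none lifts gfortran's default
# source line length limit; -J sets the module output directory.
FC = gfortran
FFLAGS = -O2 -ffree-line-length-none -J$(MODDIR)
FMINFLAGS = -O0 -ffree-line-length-none -J$(MODDIR)
```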

  • FMINFLAGS specifies a lower optimization level for some source files that do not need optimization. read_control.f90, which reads one of FHI-aims’ main input files, is one such file: it does not need high levels of optimization but could take very long to compile if a high optimization level were requested for it.

  • F90MINFLAGS and F90FLAGS are usually just copies of FMINFLAGS and FFLAGS, except for the few compilers (e.g., IBM’s xlf) that might treat Fortran .f90 and (legacy) .f files differently.

  • ARCHITECTURE can have multiple meanings, including specific handling of a few compilers’ quirks (the pgi compiler, for example, needs a different call to erf()) and potentially optimization flags for CPU-specific extensions (e.g., AVX – this can be worthwhile). For many purposes, “Generic” is good enough, but do take the time to look into CPU-specific optimizations if you intend to run very large, demanding calculations.
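As one illustration of CPU-specific tuning (the flag is taken from Intel Fortran’s documentation, not from the FHI-aims Makefile, so treat this as a sketch), FFLAGS could be extended so that the compiler targets the instruction set of the machine it runs on:

```makefile
# -xHost (Intel Fortran) generates code for the build machine's CPU,
# including extensions such as AVX. Only use this if the machine you
# compile on has the same CPU as the machines you will run on.
FFLAGS = -O3 -ip -fp-model precise -xHost -module $(MODDIR)
```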

  • LAPACKBLAS specifies the locations of numerical linear algebra subroutines, particularly the Basic Linear Algebra Subroutines (BLAS) and the higher-level Lapack subroutines. The location and names of these libraries will vary from computer to computer, but it is VERY important to select well-performing BLAS subroutines for a given computer – the effect on performance will be drastic. An additional item to ensure is that these BLAS libraries should NEVER try to use any internal multithreading (for example, the mkl_sequential library quoted above is inherently single-threaded, which is normally what we want). FHI-aims is already very efficiently parallelized over multiple processors. Requesting (say) 16 threads for each of (say) 16 parallel tasks on a parallel computer with 16 physical CPU cores would amount to trying to balance 256 threads within the computer, typically slowing execution down to a crawl. With FHI-aims, only ever use a single thread per parallel task unless you have a special reason and know exactly what you are doing.
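To guard against accidental multithreading even when a threaded BLAS ends up linked in, a job script can explicitly pin each task to one thread. OMP_NUM_THREADS and MKL_NUM_THREADS are the standard environment variables for OpenMP and Intel MKL, respectively:

```shell
# Force single-threaded execution per MPI task (sketch for a job script).
export OMP_NUM_THREADS=1
export MKL_NUM_THREADS=1
echo "threads per MPI task: $OMP_NUM_THREADS"
```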

  • USE_MPI ensures that the code knows about and will use the process-based Message Passing Interface (MPI) parallelization, which allows FHI-aims to run in parallel both within a single compute node and across a large number of nodes. In later production runs, and unless you have a good reason not to, always use as many MPI tasks as there are physical processor cores available (no more, no less).
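A simple way to pick the task count on a Linux machine is sketched below. Note that nproc reports logical CPUs, so on hyperthreaded hardware you would halve it to get physical cores; the launch line itself is left commented because the binary name aims.x is only an assumption:

```shell
# One MPI task per core (sketch). nproc counts logical CPUs on Linux;
# halve NTASKS if hyperthreading is enabled.
NTASKS=$(nproc)
echo "would launch $NTASKS MPI tasks"
# mpirun -np "$NTASKS" aims.x > aims.out   # hypothetical binary name
```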

  • MPIFC is the name of the wrapper command that ensures a correct compilation with a given Fortran compiler and a given MPI library. This command (often called mpif90) is also specific to a given computer system and to the installed MPI library.

  • SCALAPACK specifies the location of the library that contains scalapack’s parallel linear algebra subroutines and the so-called basic linear algebra communication (BLACS) subroutines. The author (VB) built his own version of this library, but usually these subroutines are also supplied with standard linear algebra libraries such as Intel’s Math Kernel Library (mkl).
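If MKL is already being linked for BLAS and Lapack, its bundled ScaLAPACK and BLACS can be used instead of a standalone build. The library names below are MKL’s LP64 variants for Intel MPI; the exact BLACS library depends on which MPI implementation you use, so treat this as a sketch and check MKL’s linking documentation for your combination:

```makefile
# Sketch: ScaLAPACK/BLACS from MKL (Intel MPI, LP64 interface assumed;
# a different MPI implementation needs a different BLACS library).
SCALAPACK = -L/opt/intel/mkl/lib -lmkl_scalapack_lp64 -lmkl_blacs_intelmpi_lp64
```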

  • CC is the C compiler to be used.

  • CCFLAGS can hold any compiler flags needed for the C compiler. Tuning these for performance is not worthwhile (very little impact), but some compilers may need special instructions to work together with Fortran.

  • USE_LIBXC decides whether additional subroutines for exchange correlation functionals, provided in the libxc library, should be used. We are very much indebted to the authors of this library. Please respect their open-source license and cite them if you use their tools.

5. Phew. That was a lot of keywords. But this is computational science, and having a reasonable command of these pieces is worth our while. Once you have figured them all out, close the make.sys file and continue to …

6. … build the code by typing make -j scalapack.mpi.

7. If the process above worked well, proceed to try a test run and then, if you are up for it, the regression tests. If you received an error message during the build (that may well be the case), do not despair – try again and, if needed, seek help. This process is ultimately not rocket science, and only a finite number of pieces are needed. Seek help through one of the channels mentioned in Sec. 1.7 if needed.

8. There are other pieces that can help improve a build on a specific platform. For example, it can be quite desirable to build and link against separate, standalone builds of the ELPA library (a high-performance eigenvalue solver) and of the ELSI electronic structure infrastructure. For reasons of time and space, this is not covered here, but it is worth investigating these libraries.

In general, the https://aims-git.rz-berlin.mpg.de wiki is the appropriate place to look for detailed compiler settings for specific platforms. If you have a successful make.sys file for your own setup, please add it there. The information given in this section is essential in that it explains the process, but the platform-specific notes in the wiki may help you save some time.