.. _supported-codes-label: =================================== Currently supported Community Codes =================================== Introduction ~~~~~~~~~~~~ .. note:: This document is not up-to-date! Here we provide an overview of some currently supported community codes in AMUSE, along with a concise explanation of how each code works. This document serves as an initial guide in finding the code with the highest applicability to a specific astrophysical problem. The supported codes have been sorted according to their astrophysical domain: * :ref:`Dynamics` * :ref:`Evolution` * :ref:`Hydrodynamics` * :ref:`Radiative-Transfer` .. _Dynamics: Stellar Dynamics ~~~~~~~~~~~~~~~~ * BHtree_ * hermite_ * HiGPUs_ * huayno_ * mercury_ * mmc_ * octgrav_ * phiGRAPE_ * smallN_ * twobody_ General ------- The general parameters and methods for the gravitational dynamics are described in :doc:`../reference/stellar_dynamics_interface_specification`. Here we describe the exceptions for the specific codes under "Specifics". Code comparison table --------------------- Note: holds for AMUSE implementation. +---------+---------------------+-----------------+--------+---------+------+---------+--------------+-------------+ |name |approximation scheme |timestep scheme |CPU |GPU |GRAPE |language |stopcond (1) |parallel (2) | | | | | | | | | | | +=========+=====================+=================+========+=========+======+=========+==============+=============+ |bhtree |tree |shared/fixed |Y |N |N |C/C++ |CST |N | | | | | | | | | | | +---------+---------------------+-----------------+--------+---------+------+---------+--------------+-------------+ |bonsai |tree | | | | | | | | | | | | | | | | | | +---------+---------------------+-----------------+--------+---------+------+---------+--------------+-------------+ |fi |tree |block/variable |Y |N |N |FORTRAN |S |N | | | | | | | | | | | +---------+---------------------+-----------------+--------+---------+------+---------+--------------+-------------+ |hermite |direct |shared/variable |Y |N |N |C/C++ |CSOPT |Y | | | | | | | | | | | +---------+---------------------+-----------------+--------+---------+------+---------+--------------+-------------+ |HiGPUs |direct |block time steps |N |Y |N |C/C++ | |Y (on gpus | | | | | | | | | |cluster) | | | | | | | | | | | +---------+---------------------+-----------------+--------+---------+------+---------+--------------+-------------+ |huayno |Approx symplectic | |Y |Y(opencl)|N |C | |N | | | | | | | | | | | +---------+---------------------+-----------------+--------+---------+------+---------+--------------+-------------+ |gadget |tree |individual |Y |N |N |C/C++ |S |Y | | | | | | | | | | | +---------+---------------------+-----------------+--------+---------+------+---------+--------------+-------------+ |mercury |MVS symplectic | |Y |N |N |FORTRAN | |N | | | | | | | | | | | +---------+---------------------+-----------------+--------+---------+------+---------+--------------+-------------+ |octgrav |tree |shared |N |Y |N |C/C++ |S |N | | | | | | | | | | | +---------+---------------------+-----------------+--------+---------+------+---------+--------------+-------------+ |rebound | | | | | | | |N | | | | | | | | | | | +---------+---------------------+-----------------+--------+---------+------+---------+--------------+-------------+ |phigrape |direct |block/variable |Y g6 |Y |Y |FORTAN |CSPT |N | | | | | |sapporo | | | | | 
+---------+---------------------+-----------------+--------+---------+------+---------+--------------+-------------+ |smallN |direct Hermite 4th |individual |Y |N |N |C/C++ | |N | | |order | | | | | | | | +---------+---------------------+-----------------+--------+---------+------+---------+--------------+-------------+ |twobody |universal variables, |none, exact |Y |N |N |Python | |N | | |Kepler eq. | | | | | | | | | | | | | | | | | | +---------+---------------------+-----------------+--------+---------+------+---------+--------------+-------------+ (1) stopping conditions ==== ========================== code name of stopping condition ==== ========================== C Collision detection E Escaper detection S Number of steps detection O Out of box detection P Pair detection T Timeout detection ==== ========================== (2) Parallel in the following sense: AMUSE uses MPI to communicate with the codes, but for some codes it can be used to parallelize the calculations. Some codes (GPU) are already parallel, however in this table we *do not* refer to that. Codes designated *Y* for parallel can set the number of (parallel) workers, e.g. to set 10 workers for hermite do: .. code-block:: python >>> instance = Hermite(number_of_workers=10) .. _BHtree: BHtree ------ N-body integration module. An implementation of the Barnes & Hut tree code [#bh]_ by Jun Makino `BHTree-code `_. Specifics ######### Parameters ^^^^^^^^^^ +------------------------------------+---------------+---------------+-------------------+ |name |default value |unit |description | | | | | | +====================================+===============+===============+===================+ |use_self_gravity |1 |none |flag for usage of | | | | |self gravity, 1 or | | | | |0 (true or false) | +------------------------------------+---------------+---------------+-------------------+ |timestep |0.015625 |time |time step | | | | | | | | | | | +------------------------------------+---------------+---------------+-------------------+ |epsilon squared |0.125 |length*length |smoothing parameter| | | | |for gravity | | | | |calculations | | | | |>0! | +------------------------------------+---------------+---------------+-------------------+ |ncrit_for_tree |1024 |none |maximum number of p| | | | |articles sharing an| | | | |interaction list | +------------------------------------+---------------+---------------+-------------------+ |opening_angle |0.75 |none |opening angle, | | | | |theta, for building| | | | |the tree: between 0| | | | |and 1 | +------------------------------------+---------------+---------------+-------------------+ |stopping_conditions_number_of_steps |1 |none | | | | | | | | | | | | +------------------------------------+---------------+---------------+-------------------+ |stopping_conditions_timeout |4.0 |seconds | | | | | | | | | | | | +------------------------------------+---------------+---------------+-------------------+ |stopping_conditions_out_of_box_size |0.0 |length | | | | | | | | | | | | +------------------------------------+---------------+---------------+-------------------+ |time |0.0 |time |current | | | | |simulation | | | | |time | +------------------------------------+---------------+---------------+-------------------+ |dt_dia | 1.0 |time |time interval | | | | |between | | | | |diagnostics | | | | |output | +------------------------------------+---------------+---------------+-------------------+ .. automodule:: amuse.community.bhtree.interface .. autoclass:: BHTree .. 
autoparametersattribute:: parameters example .. code-block:: python >>> from amuse.community.bhtree.interface import BHTreeInterface, BHTree >>> from amuse.units import nbody_system >>> instance = BHTree(BHTree.NBODY) >>> instance.parameters.epsilon_squared = 0.00001 | nbody_system.length**2 .. [#bh] Barnes, J. & Hut, P. 1986. *Nature* **324**, 446. .. _hermite: hermite -------- Time-symmetric N-body integration module with shared but variable time step (the same for all particles but its size changing in time), using the Hermite integration scheme [#hutmakino]_. See also : `ACS `_ Specifics ######### Parameters ^^^^^^^^^^ +------------------------------------+---------------+---------------+-------------------+ |name |default value | unit | description | | | | | | | | | | | +====================================+===============+===============+===================+ |pair factor |1.0 | none |radius factor | | | | |for pair detec | | | | |tion | +------------------------------------+---------------+---------------+-------------------+ |dt_param |0.03 |none |timestep | | | | |scaling factor | | | | | | +------------------------------------+---------------+---------------+-------------------+ |epsilon squared |0.0 |length*length |smoothing parameter| | | | |for gravity | | | | |calculations | | | | | | +------------------------------------+---------------+---------------+-------------------+ |stopping_conditions_number_of_steps |1 |none | | | | | | | | | | | | +------------------------------------+---------------+---------------+-------------------+ |stopping_conditions_timeout |4.0 |seconds | | | | | | | | | | | | +------------------------------------+---------------+---------------+-------------------+ |stopping_conditions_out_of_box_size |0.0 |length | | | | | | | | | | | | +------------------------------------+---------------+---------------+-------------------+ |time |0.0 |time |current | | | | |simulation | | | | |time | +------------------------------------+---------------+---------------+-------------------+ |dt_dia | 1.0 |time |time interval | | | | |between | | | | |diagnostics | | | | |output | +------------------------------------+---------------+---------------+-------------------+ .. automodule:: amuse.community.hermite.interface .. autoclass:: Hermite .. autoparametersattribute:: parameters .. [#hutmakino] Hut, P., Makino, J. & McMillan, S., 1995, *ApJL* **443**, L93. .. _phiGRAPE: phiGRAPE -------- phiGRAPE is a direct N-body code optimized for running on a parallel GRAPE cluster. See Harfst et al. [#harfst]_ for more details. The Amusean version is capable of working on other platforms as well by using interfaces that mimic GRAPE hardware. * Sapporo Sapporo is a library that mimics the behaviour of GRAPE hardware and uses the GPU to execute the force calculations [#gaburov]_. * Sapporo-light This version of Sapporo is without multi-threading support and does not need C++. This makes it easier to integrate into fortran codes, but beware, it can only use one GPU device per application! * g6 Library which mimics the behavior of GRAPE and uses the CPU. Lowest on hw requirements. Specifics ######### Hardware modes ^^^^^^^^^^^^^^ Parameters ^^^^^^^^^^ Amuse tries to build all implementations at compile time. In the phiGRAPE interface module the preferred mode can be selected whith the mode parameter: * MODE_G6LIB = 'g6lib' Just make it work, no optimizations, no special hw requirements * MODE_GPU = 'gpu' Using sapporo, CUDA needed. * MODE_GRAPE = 'grape' Using GRAPE hw. 
* MODE_PG = 'pg' Phantom grape, optimized for x86_64 processors .. code-block:: python >>> from amuse.community.phigrape.interface import PhiGRAPEInterface, PhiGRAPE >>> instance = PhiGRAPE(PhiGRAPE.NBODY, PhiGRAPEInterface.MODE_GPU) The default is **MODE_G6LIB**. +------------------------------------+---------------+---------------+-------------------+ |name |default value | unit | description | | | | | | | | | | | +====================================+===============+===============+===================+ |initialize_gpu_once |0 |none |set to 1 if the gpu| | | | |must only be | | | | |initialized once, 0| | | | |if it can be | | | | |initialized for | | | | |every call\nIf you | | | | |want to run | | | | |multiple instances | | | | |of the code on the | | | | |same gpu this | | | | |parameter needs to | | | | |be 0 (default) | +------------------------------------+---------------+---------------+-------------------+ |initial_timestep_parameter |0.0 |none |parameter to | | | | |determine the | | | | |initial timestep | +------------------------------------+---------------+---------------+-------------------+ |timestep_parameter |0.0 |none | | | | | | | | | | | | +------------------------------------+---------------+---------------+-------------------+ |epsilon squared |0.0 |length*length |smoothing parameter| | | | |for gravity | | | | |calculations | | | | |>0! | +------------------------------------+---------------+---------------+-------------------+ |stopping_conditions_number_of_steps |1 |none | | | | | | | | | | | | +------------------------------------+---------------+---------------+-------------------+ |stopping_conditions_timeout |4.0 |seconds | | | | | | | | | | | | +------------------------------------+---------------+---------------+-------------------+ |stopping_conditions_out_of_box_size |0.0 |length | | | | | | | | | | | | +------------------------------------+---------------+---------------+-------------------+ .. automodule:: amuse.community.phigrape.interface .. autoclass:: PhiGRAPE .. autoparametersattribute:: parameters .. code-block:: python >>> instance.timestep_parameter = 0.1 |nbody_system.time .. [#harfst] Harfst, S., Gualandris, A., Merritt, D., Spurzem, R., Portegies Zwart, S., & Berczik, P. 2006, *NewAstron.* **12**, 357-377. .. [#gaburov] Gaburov, E., Harfst, S., Portegies Zwart, S. 2009, *NewAstron.* **14** 630-637. .. _twobody: twobody ------- Semi analytical code based on Kepler [#bate]_. The particle set provided has length one or two. If one particle is given, the mass is assigned to a particle in the origin and the phase-coordinates are assigned to the other particle. This is usefull when *m1* >> *m2*. Specifics ######### Parameters ^^^^^^^^^^ .. [#bate] Bate, R.R, Mueller, D.D., White, J.E. "FUNDAMENTALS OF ASTRODYNAMICS" *Dover* 0-486-60061-0 .. _smallN: smallN ------ Interface to the Kira Small-N Integrator and Kepler modules from Starlab. https://www.sns.ias.edu/~starlab/ You will need to download Starlab from the above site, make it, install it, and then set the STARLAB_INSTALL_PATH variable to be equal to the installation directory (typically something like ~/starlab/usr). 
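Once Starlab is installed, the module is driven like any other gravitational
dynamics code in AMUSE. The sketch below is illustrative only: the module path
`amuse.community.smalln.interface` and the class name `SmallN` are assumptions
and may differ in your AMUSE version.

.. code-block:: python

    >>> from amuse.units import nbody_system
    >>> from amuse.ic.plummer import new_plummer_model
    >>> from amuse.community.smalln.interface import SmallN   # module path assumed
    >>> particles = new_plummer_model(3)                      # a small N-body system
    >>> gravity = SmallN()
    >>> gravity.parameters.epsilon_squared = 0.0 | nbody_system.length**2
    >>> gravity.particles.add_particles(particles)
    >>> gravity.evolve_model(1.0 | nbody_system.time)
    >>> gravity.stop()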
Starlab is available under the GNU General Public Licence (version 2), and is developed by: * Piet Hut * Steve McMillan * Jun Makino * Simon Portegies Zwart Other Starlab Contributors: * Douglas Heggie * Kimberly Engle * Peter Teuben Specifics ######### Parameters ^^^^^^^^^^ +------------------------------------+---------------+---------------+-------------------+ |name |default value | unit | description | | | | | | | | | | | +------------------------------------+---------------+---------------+-------------------+ |epsilon squared |0.0 |length*length |smoothing parameter| | | | |for gravity | | | | |calculations | | | | |>0! | +------------------------------------+---------------+---------------+-------------------+ |number_of_particles |0.0 |none | | | | | | | | | | | | +------------------------------------+---------------+---------------+-------------------+ Methods ^^^^^^^ .. _octgrav: Octgrav ------- Tree-code which runs on GPUs with NVIDIA CUDA architecture. [#oct]_ Specifics ######### Parameters ^^^^^^^^^^ +------------------------------------+---------------+---------------+-------------------+ |name |default value | unit | description | | | | | | | | | | | +====================================+===============+===============+===================+ |opening_angle |0.8 |none |opening angle for | | | | |building the tree | | | | |between 0 and 1 | +------------------------------------+---------------+---------------+-------------------+ |timestep |0.01 |time |constant timestep | | | | |for iteration | | | | | | | | | | | +------------------------------------+---------------+---------------+-------------------+ |epsilon squared |0.01 |length*length |smoothing parameter| | | | |for gravity | | | | |calculations | | | | | | +------------------------------------+---------------+---------------+-------------------+ |stopping_conditions_number_of_steps |1 |none | | | | | | | | | | | | +------------------------------------+---------------+---------------+-------------------+ |stopping_conditions_timeout |4.0 |seconds | | | | | | | | | | | | +------------------------------------+---------------+---------------+-------------------+ |stopping_conditions_out_of_box_size |0.0 |length | | | | | | | | | | | | +------------------------------------+---------------+---------------+-------------------+ .. automodule:: amuse.community.octgrav.interface .. autoclass:: Octgrav .. autoparametersattribute:: parameters .. 
[#oct] Gaburov, E., Bedorf, J., Portegies Zwart S., 2010, "Gravitational tree-code on graphics processing units: implementations in CUDA", *ICCS*

.. _mercury:

mercury
-------

Mercury is a general-purpose N-body integration package for problems in
celestial mechanics. This package contains some subroutines taken from the
Swift integration package by H.F. Levison and M.J. Duncan (1994) Icarus,
vol 108, pp18. Routines taken from Swift have names beginning with `drift`
or `orbel`. The standard symplectic (MVS) algorithm is described in
J. Wisdom and M. Holman (1991) Astronomical Journal, vol 102, pp1528. The
hybrid symplectic algorithm is described in J.E. Chambers (1999) Monthly
Notices of the RAS, vol 304, pp793.

Currently Mercury has an interface that differs from the other gravitational
dynamics interfaces. The class is called MercuryWayWard and will lose the
'WayWard' qualifier once the interface is standardized (work in progress).
It handles two kinds of particles: centre_particle and orbiters. The
centre_particle is restricted to contain only one particle, which should be
the heaviest and much heavier than the orbiters. It sits at the origin in
phase space. Apart from the usual phase-space coordinates, particles in
Mercury have a spin, and the centre particle has oblateness parameters
expressed in moments 2, 4 and 6. Orbiters have a density. Furthermore,
Mercury does not use N-body units but the physical units listed in the
tables below.

Central particle
################

+--------------+-----------+------------------+
|mass          |           |MSun              |
+--------------+-----------+------------------+
|radius        |           |AU                |
+--------------+-----------+------------------+
|oblateness    |j2, j4, j6 |AU^2 etc.         |
+--------------+-----------+------------------+
|angular       |           |MSun AU^2 day^-1  |
|momentum      |           |                  |
+--------------+-----------+------------------+

Orbiters
########

+--------------+-----------+-----------+------------------+
|mass          |           |           |MSun              |
+--------------+-----------+-----------+------------------+
|density       |           |           |g/cm^3            |
+--------------+-----------+-----------+------------------+
|position      |           |x, y, z    |AU                |
+--------------+-----------+-----------+------------------+
|velocity      |           |vx, vy, vz |AU/day            |
+--------------+-----------+-----------+------------------+
|celimit       |close      |           |Hill radii, but   |
|              |encounters |           |units.none in     |
|              |           |           |AMUSE             |
+--------------+-----------+-----------+------------------+
|angular       |           |Lx, Ly, Lz |MSun AU^2 day^-1  |
|momentum      |           |           |                  |
+--------------+-----------+-----------+------------------+

.. automodule:: amuse.community.mercury.interface

    .. autoclass:: MercuryWayWard

        .. autoparametersattribute:: parameters

.. _huayno:

Huayno
------

Hierarchically split-Up Approximately sYmplectic N-body sOlver (HUAYNO)

Inti Pelupessy - January 2011

short description
#################

HUAYNO is a code to solve the astrophysical N-body problem. It uses recursive
Hamiltonian splitting to generate multiple-timestep integrators which conserve
momentum to machine precision. A number of different integrators are
available. The code has been developed within the AMUSE environment. It can
make use of GPUs - for this an OpenCL version can be compiled.

Use
###

Use of the code is the same as for any gravity code within AMUSE. There are
three parameters for the code: the smoothing length (squared) eps2, the
timestep parameter eta, and a parameter to select the integrator:

* timestep_parameter (eta): recommended values eta = 0.0001 - 0.05
* epsilon_squared (eps2): eps2 can be zero or non-zero
* inttype_parameter (inttype): possible values for inttype are described below.

Miscellaneous:

- The code assumes G=1.
- Collisions are not implemented (needs rewrite).
- The workermp option can be used for OpenMP parallelization.
- The floating point precision of the calculations can be changed by setting
  the FLOAT and DOUBLE definitions in evolve.h. FLOAT sets the precision of
  the calculations, DOUBLE the precision of the position and velocity
  reductions. They can be set to e.g. float, double, long double or
  __float128. It is advantageous to set DOUBLE to a higher precision than
  FLOAT; the recommended combination is double/long double.
- The AMUSE interface uses double precision.
- It is unlikely that the integer types in evolve.h need to be changed (INT
  should be able to hold the particle number, LONG the interaction counts).

OpenCL operation:

By compiling worker_cl it is possible to offload the force and timestep loops
to the GPU. The implementation is based on the STDCL library
(www.browndeertechnology.com), so this library should be compiled first. In
the Makefile the corresponding directories should point to the installation
directory of STDCL. The defines in evolve_cl.h should be set appropriately for
the OpenCL configuration:

* CLCONTEXT: stdgpu or stdcpu
* NTHREAD: 64 for GPU, 2 for CPU (64 will also work fine for CPU)
* BLOCKSIZE: number of particles stored in local memory (64 is a good starting guess)

evolve_kern.cl contains the OpenCL kernels. The precision of the calculation
is controlled by the FLOAT(4) defines in evolve_kern.cl and CLFLOAT(4) in
evolve.h. They should agree with each other (i.e. float and cl_float, or
double and cl_double).

..
_mmc: Monte Carlo ----------- Giersz, M. 1998, MNRAS, 298, 1239 Giersz, M. 2001, MNRAS, 324, 218 Giersz, M. 2006, MNRAS, 371, 484 Specifics ######### parameters ^^^^^^^^^^ +--------+------------------------------------------+----+ |name | description | | +========+==========================================+====+ |irun |initial sequence of random numbers | | | | | | | | | | | | | | +--------+------------------------------------------+----+ |nt |total number o f objects (sta rs and | | | |binarie s) at T=0 ns - number of single | | | |tars, nb - number o f binaries, nt ns+nb, | | | |nss - number of star s(nss = nt+nb) | | +--------+------------------------------------------+----+ |istart |1 - initial model, .ne.1 - restart | | +--------+------------------------------------------+----+ |ncor |number of stars to calculate the central | | | |parameters | | +--------+------------------------------------------+----+ |nmin |minimum number of stars to calculate the | | | |central parameters | | +--------+------------------------------------------+----+ |nz0 |number of stars in each zone at T=0 | | +--------+------------------------------------------+----+ |nzonc |minimum number of zones in the core | | +--------+------------------------------------------+----+ |nminzo |minimum number of stars in a zone | | +--------+------------------------------------------+----+ |ntwo |maximum index of 2 | | +--------+------------------------------------------+----+ |imodel |initial model: 1- uniform & isotropic, 2- | | | |Plummer, 3- King, 4 - M67 | | +--------+------------------------------------------+----+ |iprint |0- full diagnostic information, 1- | | | |diagnostic info. suppressed | | +--------+------------------------------------------+----+ |ib3f |1 - Spitzer's, 2 - Heggie's formula for | | | |three-body binary interaction with field | | | |stars, 3 - use Pmax for interaction * | | | |probability 4 - three- and four-body | | | |numerical integration | | | | | | +--------+------------------------------------------+----+ |iexch |0 - no exchange in any interactions, 1 - | | | |exchange only in binary field star | | | |interacions, 2 - exchange in all | | | |interactions (binary - field and binary - | | | |binary) | | +--------+------------------------------------------+----+ |tcrit |termination time in units of the crossing | | | |time | | +--------+------------------------------------------+----+ |tcomp |maximum computing time in hours | | +--------+------------------------------------------+----+ |qe |energy tolerance | | +--------+------------------------------------------+----+ |alphal |power-law index for initial mass function | | | |for masses smaller than breake mass: -1 - | | | |equal mass case | | +--------+------------------------------------------+----+ |alphah |power-law index for initial mass function | | | |for masses greater than breake mass. If | | | |alphal=alphah the IMF does not have a | | | |break | | +--------+------------------------------------------+----+ |brakem |the mass in which the IMF is broken. 
If | | | |brakem is smaller * than the minimum mass | | | |(bodyn) than the break mass is as for * | | | |the Kroupa mass function (brakem = 0.5 Mo)| | | | | | +--------+------------------------------------------+----+ |body1 |maximum particle mass before scaling | | | |(solar mass) | | +--------+------------------------------------------+----+ |bodyn |minimum particle mass before scaling | | | |(solar mass) | | +--------+------------------------------------------+----+ |fracb |primordial binary fraction by number. nb =| | | |fracb*nt, * ns = (1 - fracb)*nt, nss = (1 | | | |+ fracb)*nt * fracb > 0 - primordial | | | |binaries * fracb = 0 - only dynamical | | | |binaries | | +--------+------------------------------------------+----+ |amin |minimum semi-major axis of binaries (in | | | |sollar units) * = 0 then amin = 2*(R1+R2),| | | |> 0 then amin = amin | | +--------+------------------------------------------+----+ |amax |maximum semi-major axis of binaries (in | | | |sollar units) | | +--------+------------------------------------------+----+ |qvir |virial ratio (qvir = 0.5 for equilibrium) | | +--------+------------------------------------------+----+ |rbar |tidal radius in pc, halfmass radius in pc | | | |for isolated * cluster. No scaling - rbar | | | |= 1 | | +--------+------------------------------------------+----+ |zmbar |total mass of the cluster in sollar mass, | | | |* no scaling zmbar = 1 | | +--------+------------------------------------------+----+ |w0 |king model parameter | | +--------+------------------------------------------+----+ |bmin |minimum value of sin(beta^2/2) | | +--------+------------------------------------------+----+ |bmax |maximum value of sin(beta^2/2) | | +--------+------------------------------------------+----+ |tau0 |time step for a complite cluster model | | +--------+------------------------------------------+----+ |gamma |parameter in the Coulomb logarithm | | | |(standard value = 0.11) | | +--------+------------------------------------------+----+ |xtid |coeficient in the front of cluster tidal | | | |energy: * -xtid*smt/rtid | | +--------+------------------------------------------+----+ |rplum |for M67 rtid = rplum*rsplum (rsplum - | | | |scale radius for * plummer model) | | +--------+------------------------------------------+----+ |dttp |time step (Myr) for profile output | | +--------+------------------------------------------+----+ |dtte |time step (Myr) for mloss call for all | | | |objects | | +--------+------------------------------------------+----+ |dtte0 |time step (Myr) for mloss call for all | | | |objects for tphys * less then tcrevo. 
For | | | |tphys greater then tcrevo time step * is | | | |eqiual to dtte | | +--------+------------------------------------------+----+ |tcrevo |critical time for which time step for | | | |mloss call changes from * dtte0 to dtte | | +--------+------------------------------------------+----+ |xtau |call mloss for a particlular object when *| | | |(uptime(im1) - olduptime(im1))/tau/tscale | | | |< xtau | | +--------+------------------------------------------+----+ |ytau |multiplication of tau0 (tau = ytau*tau0) | | | |after time * greater than tcrevo | | +--------+------------------------------------------+----+ |ybmin |multiplication of bmin0 (bmin = | | | |ybmin*bmin0) after time * greater than | | | |tcrevo | | +--------+------------------------------------------+----+ |zini |initial metalicity (solar z = 0.02, | | | |globular clusters * M4 - z = 0.002, | | | |NGC6397 - z = 0.0002) | | +--------+------------------------------------------+----+ |ikroupa |0 - the initial binary parameters are | | | |picked up * according Kroupa's | | | |eigenevolution and feeding algorithm * | | | |(Kroupa 1995, MNRAS 277, 1507) * 1 - the | | | |initial binary parameters are picked as | | | |for M67 * model (Hurley et al. 2005) | | +--------+------------------------------------------+----+ |iflagns |0 - no SN natal kiks for NS formation, 1 -| | | |SN natal kicks only for single NS | | | |formation, 2 - SN natal kick for single NS| | | |formation and NS formation in binaries | | +--------+------------------------------------------+----+ |iflagbh |0 - no SN natal kiks for BH formation, 1 -| | | |SN natal kicks * only for single BH | | | |formation, 2 - SN natal kick for single * | | | |BH formation and BH formation in binaries | | +--------+------------------------------------------+----+ |nitesc |0 - no iteration of the tidal radius and | | | |induced mass loss * due to stellar | | | |evolution, 1 - iteration of the tidal | | | |radius * and induced mass loss due to | | | |stellar evolution | | +--------+------------------------------------------+----+ .. automodule:: amuse.community.mmc.interface .. autoclass:: mmc HiGPUs -------- HiGPUs is a parallel direct N-body code based on a 6th order Hermite integrator. The code has been developed by Capuzzo-Dolcetta, Punzo and Spera (Dep. of Physics, Sapienza, Univ. di Roma; see astrowww.phys.uniroma1.it/dolcetta/HPCcodes/HiGPUs.html) and uses, at the same time, MPI, OpenMP and CUDA libraries to fully exploit all the capabilities offered by hybrid supercomputing platforms. Moreover, it is implemented using block time steps (individual time stepping) such to be able to deal with stiff problems like highly collisional gravitational N-body problems. 
Specifics ######### Parameters ^^^^^^^^^^ +------------------------------------+---------------+---------------+-------------------+ |name |default value | unit | description | | | | | | | | | | | +====================================+===============+===============+===================+ |eta_6 |0.4 |none |eta parameter for | | | | |determining stars | | | | |time steps | +------------------------------------+---------------+---------------+-------------------+ |eta_4 |0.01 |none |eta parameter for | | | | |initializing blocks| | | | | | +------------------------------------+---------------+---------------+-------------------+ |eps |0.001 |length |smoothing parameter| | | | |for gravity | | | | |calculations | +------------------------------------+---------------+---------------+-------------------+ |r_scale_galaxy |0.0 |length |scale radius for | | | | |analytical galaxy | | | | |potential | +------------------------------------+---------------+---------------+-------------------+ |mass_galaxy |0.0 |mass |total mass for | | | | |analytical galaxy | | | | |potential | +------------------------------------+---------------+---------------+-------------------+ |r_core_plummer |0.0 |length |core radius for | | | | |analytical plummer | | | | |potential | +------------------------------------+---------------+---------------+-------------------+ |mass_plummer |0.0 |mass |total mass for | | | | |analytical plummer | | | | |potential | +------------------------------------+---------------+---------------+-------------------+ |start_time |0.0 |time |initial simulation | | | | |time | | | | | | +------------------------------------+---------------+---------------+-------------------+ |min_step |-30.0 |none |exponent which defi| | | | |nes the minimum | | | | |time step allowed | | | | |for stars | | | | |(2^exponent) | +------------------------------------+---------------+---------------+-------------------+ |max_step |-3.0 |none |exponent which defi| | | | |nes the maximum | | | | |time step allowed | | | | |for stars | | | | |(2^exponent) | +------------------------------------+---------------+---------------+-------------------+ |n_Print |1000000 |none |maximum number of | | | | |snapshots | | | | | | +------------------------------------+---------------+---------------+-------------------+ |dt_Print |1.0 |time |time interval | | | | |between | | | | |diagnostics | | | | |output | +------------------------------------+---------------+---------------+-------------------+ |n_gpu |2 |none |number of GPUs per | | | | |node | | | | | | +------------------------------------+---------------+---------------+-------------------+ |gpu_name |GeForce GTX 480|none |GPUs to use | | | | | | | | | | | +------------------------------------+---------------+---------------+-------------------+ |Threads |128 |none |number of gpus | | | | |threads per block | | | | | | +------------------------------------+---------------+---------------+-------------------+ |output_path_name |../../test_re |none |path where HiGPUs | | |sults/ | |output will be | | | | |stored | +------------------------------------+---------------+---------------+-------------------+ For more information about parameters check the readme file in the docs folder. 
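As a usage illustration of the parameters above, here is a minimal sketch of
starting HiGPUs and setting a few of them from Python. The parameter names are
taken from the table; whether a unit converter or a `number_of_workers`
argument is needed depends on your initial conditions and GPU configuration.

.. code-block:: python

    >>> from amuse.units import nbody_system
    >>> from amuse.community.higpus.interface import HiGPUs
    >>> gravity = HiGPUs()     # add number_of_workers=... on a GPU cluster
    >>> gravity.parameters.eta_6 = 0.4                        # time step accuracy parameter
    >>> gravity.parameters.eta_4 = 0.01                       # block initialization parameter
    >>> gravity.parameters.eps = 0.001 | nbody_system.length  # softening length
    >>> gravity.parameters.n_gpu = 1                          # GPUs per node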
These are the maximum performance (in Gflops) reached using different single GPUs installed, one at a time, on a workstation equipped with 2 CPUs Intel Xeon X5650, 12 GB of ECC RAM memory 1333 MHz, Ubuntu Lucid 10.04 x86_64, motherboard Supermicro X8DTG-QF: * TESLA C1060 : 107 * TESLA C2050 : 395 * TESLA M2070 : 391 * GeForce GTX 480 : 265 * GeForce GTX 580 : 311 To use the code with AMP GPUs you can download the OpenCL version from the website .. automodule:: amuse.community.higpus.interface .. autoclass:: HiGPUs .. autoparametersattribute:: parameters .. _Evolution: Stellar Evolution ~~~~~~~~~~~~~~~~~ * bse_ * evtwin_ * mesa_ * seba_ * sse_ .. _sse: sse --- Stellar evolution is performed by the **rapid** single-star evolution (SSE) algorithm. This is a package of **analytical formulae** fitted to the detailed models of Pols et al. (1998) that covers all phases of evolution from the zero-age main-sequence up to and including remnant phases. It is valid for masses in the range 0.1-100 Msun and metallicity can be varied. The SSE package contains a prescription for mass loss by stellar winds. It also follows the evolution of rotational angular momentum for the star. Full details can be found in the SSE paper: * "Comprehensive analytic formulae for stellar evolution as a function of mass and metallicity" Hurley J.R., Pols O.R., Tout C.A., 2000, MNRAS, 315, 543 =========== ====== ==== ========================= .. min max unit =========== ====== ==== ========================= Mass 0.1 100 Msun Metallicity 0.0001 0.03 fraction (0.02 is solar) =========== ====== ==== ========================= .. automodule:: amuse.community.sse.interface .. autoclass:: SSE .. autoparametersattribute:: parameters .. _bse: bse --- Binary evolution is performed by the **rapid** binary-star evolution (BSE) algorithm. Circularization of eccentric orbits and synchronization of stellar rotation with the orbital motion owing to tidal interaction is modelled in detail. Angular momentum loss mechanisms, such as gravitational radiation and magnetic braking, are also modelled. Wind accretion, where the secondary may accrete some of the material lost from the primary in a wind, is allowed with the necessary adjustments made to the orbital parameters in the event of any mass variations. Mass transfer also occurs if either star fills its Roche lobe and may proceed on a nuclear, thermal or dynamical time-scale. In the latter regime, the radius of the primary increases in response to mass-loss at a faster rate than the Roche-lobe of the star. Stars with deep surface convection zones and degenerate stars are unstable to such dynamical time-scale mass loss unless the mass ratio of the system is less than some critical value. The outcome is a common-envelope event if the primary is a giant star. This results in merging or formation of a close binary, or a direct merging if the primary is a white dwarf or low-mass main-sequence star. On the other hand, mass transfer on a nuclear or thermal time-scale is assumed to be a steady process. 
Prescriptions to determine the type and rate of mass transfer, the response of the secondary to accretion and the outcome of any merger events are in place in BSE and the details can be found in the BSE paper: * "Evolution of binary stars and the effect of tides on binary populations" Hurley J.R., Tout C.A., & Pols O.R., 2002, MNRAS, 329, 897 * "Comprehensive analytic formulae for stellar evolution as a function of mass and metallicity" Hurley J.R., Pols O.R., Tout C.A., 2000, MNRAS, 315, 543 ============ ====== ==== ========================= .. min max unit ============ ====== ==== ========================= Mass 0.1 100 Msun Metallicity 0.0001 0.03 fraction (0.02 is solar) Period all all Eccentricity 0.0 1.0 ============ ====== ==== ========================= .. automodule:: amuse.community.bse.interface .. autoclass:: BSE .. autoparametersattribute:: parameters .. _seba: seba ---- Single-star and binary evolution is performed by the **rapid** stellar evolution algorithm SeBa. Stars are evolved from the zero-age main sequence until remnant formation and beyond. Single stellar evolution is modeled with **analytical formulae** based on fits to detailed single star tracks at different metallicities (Hurley, Pols & Tout, 2000, 315, 543). Stars are parametrised by mass, radius, luminosity, core mass, etc. as functions of time and initial mass. Mass loss from winds, which is substantial e.g. for massive stars and post main-sequence stars, is included. Furthermore, the SeBa package contains an algorithm for **rapid** binary evolution calculations. Binary interactions such as wind accretion, tidal interaction and angular momentum loss through (wind) mass loss, magnetic braking, or gravitational radiation are taken into account at every timestep with appropriate recipes. The stability and rate of mass transfer are dependent on the reaction to mass change of the stellar radii and the corresponding Roche lobes. If the mass transfer takes place on the dynamical timescale of the donor star, the mass transfer becomes quickly unstable, and a common-envelope phase follows. If mass transfer occurs in a stable way, SeBa models the response of the companion star and possible mass loss and angular momentum loss at every timestep. After mass transfer ceases in a binary system, the donor star turns into a remnant or a helium-burning star without a hydrogen envelope. When instead, the mass transfer leads to a merger between the binary stars, the resulting stellar product is estimated and the subsequent evolution is followed. More information on SeBa can be found in the papers: Relevant papers: * "Population synthesis of high-mass binaries" Portegies Zwart, S.F., Verbunt, F. 1996, 309, 179P * "Population synthesis for double white dwarfs . I. Close detached systems" Nelemans, G., Yungelson, L.R., Portegies Zwart, S.F., Verbunt, F. 2001, 365, 491N * "Supernova Type Ia progenitors from merging double white dwarfs. Using a new population synthesis model" Toonen, S., Nelemans, G., Portegies Zwart, S. 2012, 546A, 70T * "The effect of common-envelope evolution on the visible population of post-common-envelope binaries" Toonen, S., Nelemans, G. 2013, 557A, 87T ============ ====== ==== ========================= .. min max unit ============ ====== ==== ========================= Mass 0.1 100 Msun Metallicity 0.0001 0.03 fraction (0.02 is solar) Period all all Eccentricity 0.0 1.0 ============ ====== ==== ========================= .. automodule:: amuse.community.seba.interface .. autoclass:: SeBa .. 
autoparametersattribute:: parameters

.. _evtwin:

evtwin
------

Evtwin is based on Peter Eggleton's stellar evolution code, and actually
solves the differential equations that apply to the interior of a star. It is
therefore more accurate, but also much slower, than the analytic fits-based
sse_ and seba_ algorithms described above. Binaries are not yet supported in
the AMUSE interface to evtwin, and neither is the work-around for the helium
flash. Currently only solar metallicity is supported.

Relevant papers:

* "The evolution of low mass stars" Eggleton, P.P. 1971, MNRAS, 151, 351
* "Composition changes during stellar evolution" Eggleton, P.P. 1972, MNRAS, 156, 361
* "A numerical treatment of double shell source stars" Eggleton, P.P. 1973, MNRAS, 163, 279
* "An Approximate Equation of State for Stellar Material" Eggleton, P.P., Faulkner, J., & Flannery, B.P. 1973, A&A, 23, 325
* "A Possible Criterion for Envelope Ejection in Asymptotic Giant Branch or First Giant Branch Stars" Han, Z., Podsiadlowski, P., & Eggleton, P.P. 1994, MNRAS, 270, 121
* "Approximate input physics for stellar modelling" Pols, O.R., Tout, C.A., Eggleton, P.P., & Han, Z. 1995, MNRAS, 274, 964
* "The Braking of Wind" Eggleton, P.P. 2001, Evolution of Binary and Multiple Star Systems, 229, 157
* "A Complete Survey of Case A Binary Evolution with Comparison to Observed Algol-type Systems" Nelson, C.A., & Eggleton, P.P. 2001, ApJ, 552, 664
* "The Evolution of Cool Algols" Eggleton, P.P., & Kiseleva-Eggleton, L. 2002, ApJ, 575, 461
* For thermohaline mixing: Stancliffe, Glebbeek, Izzard & Pols, 2007 A&A
* For the OPAL 1996 opacity tables: Eldridge & Tout, 2004 MNRAS 348
* For enhancements to the solver: Glebbeek, Pols & Hurley, 2008 A&A

.. automodule:: amuse.community.evtwin.interface

    .. autoclass:: EVtwin

        .. autoparametersattribute:: parameters

.. _mesa:

mesa
----

The software project MESA (Modules for Experiments in Stellar Astrophysics,
``_) aims to provide state-of-the-art, robust, and efficient open source
modules, usable singly or in combination for a wide range of applications in
stellar astrophysics. Since the package is rather big (about 800 MB download,
>2 GB built), this community code is optional and does not install
automatically. Set the environment variable DO_INSTALL_MESA and run `make` to
download and install it.

The AMUSE interface to MESA can create and evolve stars using the MESA/STAR
module. If you request a metallicity you haven't used before, starting models
will be computed automatically and saved in the
`mesa/src/data/star_data/starting_models` directory (please be patient...).
All metallicities are supported, even the interesting case of Z=0. The
supported stellar mass range is from about 0.1 to 100 Msun.

References:

* Paxton, Bildsten, Dotter, Herwig, Lesaffre & Timmes 2010, ApJS submitted, arXiv:1009.1622
* ``_

.. automodule:: amuse.community.mesa.interface

    .. autoclass:: MESA

        .. autoparametersattribute:: parameters

.. _Hydrodynamics:

Hydrodynamics
~~~~~~~~~~~~~

* athena_ (grid code)
* capreole_ (grid code)
* fi_ (N-body/SPH code)
* gadget2_ (N-body/SPH code)

.. _athena:

athena
--------

Athena is a grid-based code for astrophysical hydrodynamics. Athena can solve
magnetohydrodynamics (MHD) as well, but this is currently not supported from
AMUSE. It was developed primarily for studies of the interstellar medium,
star formation, and accretion flows.
The current version (Athena v4.0) implements algorithms for the following physics: * compressible hydrodynamics and MHD in 1D, 2D, and 3D, * ideal gas equation of state with arbitrary γ (including γ = 1, an isothermal EOS), * an arbitrary number of passive scalars advected with the flow, * self-gravity, and/or a static gravitational potential, * Ohmic resistivity, ambipolar diffusion, and the Hall effect, * both Navier-Stokes and anisotropic (Braginskii) viscosity, * both isotropic and anisotropic thermal conduction, * optically-thin radiative cooling. In addition, Athena allows for the following grid and parallelization options: * Cartesian or cylindrical coordinates, * static (fixed) mesh refinement, * shearing-box source terms, and an orbital advection algorithm for MHD, * parallelization using domain decomposition and MPI. A variety of choices are also available for the numerical algorithms, such as different Riemann solvers and spatial reconstruction methods. The relevant references are: * Gardiner & Stone 2005, JCP, 205, 509 (2D JCP Method) * Gardiner & Stone 2007, JCP, 227, 4123 (3D JCP Method) * Stone et al. 2008, ApJS, 178, 137 (Method) * Stone & Gardiner 2009, NewA, 14, 139 (van Leer Integrator) * Skinner & Ostriker 2010, ApJ, 188, 290 (Cylindrical Integrator) * Stone & Gardiner 2010, ApJS, 189, 142 (Shearing Box Method) .. automodule:: amuse.community.athena.interface .. autoclass:: Athena .. autoparametersattribute:: parameters .. _capreole: capreole -------- Capreole is a grid-based astrophysical hydrodynamics code developed by Garrelt Mellema. It works in one, two dimensions, and three spatial dimensions and is programmed in Fortran 90. It is parallelized with MPI. For the hydrodynamics it relies on the Roe-Eulderink-Mellema (REM) solver, which is an approximate Riemann solver for arbitrary metrics. It can solve different hydrodynamics problems. Capreole has run on single processors, but also on massively parallel systems (e.g. 512 processors on a BlueGene/L). The reference for Capreole (original version): * Mellema, Eulderink & Icke 1991, A&A 252, 718 .. automodule:: amuse.community.capreole.interface .. autoclass:: Capreole .. autoparametersattribute:: parameters .. _fi: fi -- FI is a parallel TreeSPH code for galaxy simulations. Extensively rewritten, extended and parallelized, it is a development from code from Jeroen Gerritsen and Roelof Bottema, which itself goes back to Treesph. The relevant references are: * Hernquist \& Katz 1989, ApJS 70, 419 * Gerritsen \& Icke 1997, A&A 325, 972 * Pelupessy, van der Werf & Icke 2004, A&A 422, 55 * Pelupessy, PhD thesis 2005, Leiden Observatory .. automodule:: amuse.community.fi.interface .. autoclass:: Fi .. autoparametersattribute:: parameters .. _gadget2: gadget2 ------- GADGET-2 computes gravitational forces with a hierarchical tree algorithm (optionally in combination with a particle-mesh scheme for long-range gravitational forces, currently not supported from the AMUSE interface) and represents fluids by means of smoothed particle hydrodynamics (SPH). The code can be used for studies of isolated systems, or for simulations that include the cosmological expansion of space, both with or without periodic boundary conditions. In all these types of simulations, GADGET follows the evolution of a self- gravitating collisionless N-body system, and allows gas dynamics to be optionally included. Both the force computation and the time stepping of GADGET are fully adaptive, with a dynamic range which is, in principle, unlimited. 
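As an illustration, here is a minimal sketch of running Gadget2 as an SPH
solver from AMUSE. It assumes the standard AMUSE `gas_particles` interface and
the Plummer gas-sphere generator from `amuse.ic.gasplummer`; the masses, sizes
and end time are placeholders to adapt to your problem.

.. code-block:: python

    >>> from amuse.units import units, nbody_system
    >>> from amuse.ic.gasplummer import new_plummer_gas_model
    >>> from amuse.community.gadget2.interface import Gadget2
    >>> converter = nbody_system.nbody_to_si(1.0e4 | units.MSun, 1.0 | units.parsec)
    >>> gas = new_plummer_gas_model(1000, convert_nbody=converter)
    >>> sph = Gadget2(converter)                 # unit converter sets the physical scale
    >>> sph.gas_particles.add_particles(gas)
    >>> sph.evolve_model(1.0 | units.Myr)
    >>> sph.stop()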
The relevant references are:

* Springel V., 2005, MNRAS, 364, 1105 (GADGET-2)
* Springel V., Yoshida N., White S. D. M., 2001, New Astronomy, 6, 51 (GADGET-1)

.. automodule:: amuse.community.gadget2.interface

    .. autoclass:: Gadget2

        .. autoparametersattribute:: parameters

.. _Radiative-Transfer:

Radiative Transfer
~~~~~~~~~~~~~~~~~~

* SimpleX_ (Delaunay triangulation based)

SimpleX
-------

SimpleX computes the transport of radiation on an irregular grid composed of
the Delaunay triangulation of a particle set. Radiation is transported along
the vertices of the triangulation. The code can be considered a particle-based
radiative transfer code: the particles sample the gas density and can be both
absorbers and sources of radiation. Calculation time with SimpleX scales
linearly with the number of particles. At the moment the code calculates the
transport of ionizing radiation in the grey (one-frequency) approximation. It
is especially well suited to coupling with SPH codes.

Specifics
#########

* particle sets sent to SimpleX must have the attributes x [pc], y [pc],
  z [pc], rho [amu/cm**3], flux [s**-1] and xion [none] (see the usage sketch
  at the end of this document)
* care must be taken that the particle set fits in the box_size
* the default hilbert_order should work for most particle distributions

.. automodule:: amuse.community.simplex.interface

    .. autoclass:: SimpleX

        .. autoparametersattribute:: parameters

References:

* Paardekooper J.-P., 2010, PhD thesis, University of Leiden
* Paardekooper J.-P., Kruip, C. J. H., Icke V., 2010, A&A, 515, 79 (SimpleX2)
* Ritzerveld, J., & Icke, V. 2006, Phys. Rev. E, 74, 26704 (SimpleX)

Work in progress
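For reference, here is the usage sketch mentioned in the SimpleX specifics
above. The attribute names and units follow the list given there; the particle
values, the `box_size` setting and the `particles` set name are assumptions to
adapt to your own setup.

.. code-block:: python

    >>> from amuse.units import units
    >>> from amuse.datamodel import Particles
    >>> from amuse.community.simplex.interface import SimpleX
    >>> gas = Particles(2)
    >>> gas.x = [0.0, 1.0] | units.parsec
    >>> gas.y = [0.0, 0.0] | units.parsec
    >>> gas.z = [0.0, 0.0] | units.parsec
    >>> gas.rho = 0.001 | units.amu / units.cm**3   # gas density at each vertex
    >>> gas.flux = [1.0e48, 0.0] | units.s**-1      # only the first particle is a source
    >>> gas.xion = 0.0 | units.none                 # initially neutral
    >>> radiative = SimpleX()
    >>> radiative.parameters.box_size = 10.0 | units.parsec   # must enclose all particles
    >>> radiative.particles.add_particles(gas)
    >>> radiative.evolve_model(1.0 | units.Myr)
    >>> radiative.stop()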