HPL.dat Tuning

After building the executable hpl/bin/&lt;arch&gt;/xhpl, one may want to modify the input data file HPL.dat. This file should reside in the same directory as the executable hpl/bin/&lt;arch&gt;/xhpl. An example HPL.dat file is provided by default. This file contains information about the problem sizes, machine configuration, and algorithm features to be used by the executable. It is 31 lines long. All of the selected parameters are echoed in the output generated by the executable.

We first describe the meaning of each line of this input file below. A few useful experimental guidelines for setting up the file are given at the end of this page.
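For orientation, the 31 lines fit together as in the following representative HPL.dat; the numeric values here are illustrative defaults, not tuned settings:

```text
HPLinpack benchmark input file
Innovative Computing Laboratory, University of Tennessee
HPL.out      output file name (if any)
6            device out (6=stdout,7=stderr,file)
1            # of problems sizes (N)
10000        Ns
1            # of NBs
128          NBs
0            PMAP process mapping (0=Row-,1=Column-major)
1            # of process grids (P x Q)
2            Ps
2            Qs
16.0         threshold
1            # of panel fact
1            PFACTs (0=left, 1=Crout, 2=Right)
1            # of recursive stopping criterium
4            NBMINs (>= 1)
1            # of panels in recursion
2            NDIVs
1            # of recursive panel fact.
1            RFACTs (0=left, 1=Crout, 2=Right)
1            # of broadcast
1            BCASTs (0=1rg,1=1rM,2=2rg,3=2rM,4=Lng,5=LnM)
1            # of lookahead depth
1            DEPTHs (>=0)
2            SWAP (0=bin-exch,1=long,2=mix)
64           swapping threshold
0            L1 in (0=transposed,1=no-transposed) form
0            U  in (0=transposed,1=no-transposed) form
1            Equilibration (0=no,1=yes)
8            memory alignment in double (> 0)
```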

--------------------------------------------------------------------------------

Description of the HPL.dat File
Line 1: (unused) Typically one would use this line for one's own purposes; for example, it could summarize the content of the input file. By default this line reads:
HPLinpack benchmark input file

--------------------------------------------------------------------------------
Line 2: (unused) Same as line 1. By default this line reads:
Innovative Computing Laboratory, University of Tennessee

--------------------------------------------------------------------------------
Line 3: The user can choose where the output should be redirected. In the case of a file, a name is necessary, and this is the line where one specifies it. Only the first name on this line is significant. By default, the line reads:
HPL.out      output file name (if any)
This means that if one chooses to redirect the output to a file, the file will be called "HPL.out". The rest of the line is unused; that space can hold an informative comment on the meaning of the line.

--------------------------------------------------------------------------------
Line 4: This line specifies where the output should go. The line is formatted: it must begin with a positive integer, and the rest is ignored. Three choices are possible for the positive integer: 6 means that the output will go to standard output, 7 means that the output will go to standard error, and any other integer means that the output will be redirected to a file, whose name is specified on the line above. By default this line reads:
6            device out (6=stdout,7=stderr,file)
which means that the output generated by the executable goes to standard output.

--------------------------------------------------------------------------------
Line 5: This line specifies the number of problem sizes to be executed. This number should be less than or equal to 20. The first integer is significant; the rest is ignored. If the line reads:
3            # of problems sizes (N)
the user wants to run 3 problem sizes, which will be specified on the next line.

--------------------------------------------------------------------------------
Line 6: This line specifies the problem sizes one wants to run. Assuming the line above started with 3, the first 3 positive integers are significant; the rest is ignored. For example:
3000 6000 10000   Ns
means that one wants xhpl to run 3 (as specified on line 5) problem sizes, namely 3000, 6000 and 10000.
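A common rule of thumb, not stated on this page, is to pick the largest N such that the N-by-N matrix of 8-byte doubles fills roughly 80% of the aggregate memory, rounded down to a multiple of the block size NB. A sketch (the 0.8 fraction and the helper name `largest_n` are assumptions, not part of HPL):

```python
import math

def largest_n(total_mem_bytes, fraction=0.8, nb=128):
    """Rule-of-thumb HPL problem size: the N x N matrix of doubles
    (8 bytes each) should use about `fraction` of aggregate memory,
    with N rounded down to a multiple of the block size nb."""
    n = int(math.sqrt(fraction * total_mem_bytes / 8))
    return n - n % nb

# Example: 4 nodes with 8 GiB of memory each.
print(largest_n(4 * 8 * 2**30))   # 58496
```

Leaving some headroom matters because the operating system and MPI buffers also need memory; a run that swaps will perform far worse than a slightly smaller N.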

--------------------------------------------------------------------------------
Line 7: This line specifies the number of block sizes to be run. This number should be less than or equal to 20. The first integer is significant; the rest is ignored. If the line reads:
5            # of NBs
the user wants to use 5 block sizes, which will be specified on the next line.

--------------------------------------------------------------------------------
Line 8: This line specifies the block sizes one wants to run. Assuming the line above started with 5, the first 5 positive integers are significant; the rest is ignored. For example:
80 100 120 140 160    NBs
means that one wants xhpl to use 5 (as specified on line 7) block sizes, namely 80, 100, 120, 140 and 160.

--------------------------------------------------------------------------------
Line 9: This line specifies how the MPI processes should be mapped onto the nodes of your platform. There are currently two possible mappings: row-major and column-major. This feature is mainly useful when the nodes are themselves multiprocessor computers. A row-major mapping is recommended.

--------------------------------------------------------------------------------
Line 10: This line specifies the number of process grids to be run. This number should be less than or equal to 20. The first integer is significant; the rest is ignored. If the line reads:
2            # of process grids (P x Q)
you want to try 2 process grid sizes, which will be specified on the next two lines.

--------------------------------------------------------------------------------
Lines 11-12: These two lines specify the number of process rows and columns of each grid you want to run on. Assuming line 10 started with 2, the first 2 positive integers of each of these two lines are significant; the rest is ignored. For example:
1 2          Ps
6 8          Qs
means that one wants to run xhpl on 2 process grids (line 10), namely 1-by-6 and 2-by-8. Note: in this example, it is then required to start xhpl on at least 16 nodes (the maximum of the Pi-by-Qi products). The runs on the two grids will be consecutive. If one starts xhpl on more than 16 nodes, say 52, only 6 will be used for the first grid (1x6) and then 16 (2x8) for the second grid. The fact that you started the MPI job on 52 nodes will not make HPL use all of them; in this example, only 16 will be used. If one wants to run xhpl with 52 processes, one needs to specify a grid of 52 processes; for example, the following lines would do the job:
4 2          Ps
13 8         Qs
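Finding such factorizations by hand gets tedious for larger counts; a small helper (illustrative, not part of HPL) can enumerate the P x Q factorizations of a process count, most nearly square first, which matches HPL's preference for nearly square grids:

```python
def grids(nprocs):
    """All P x Q factorizations of nprocs with P <= Q,
    most nearly square first."""
    pairs = [(p, nprocs // p)
             for p in range(1, int(nprocs ** 0.5) + 1)
             if nprocs % p == 0]
    return sorted(pairs, key=lambda pq: pq[1] - pq[0])

print(grids(52))   # [(4, 13), (2, 26), (1, 52)]
```

For 52 processes this suggests the 4-by-13 grid used in the example above.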

--------------------------------------------------------------------------------
Line 13: This line specifies the threshold to which the residuals should be compared. The residuals should be of order 1, but in practice are slightly smaller, typically 0.001. This line consists of a real number; the rest is ignored. For example:
16.0         threshold
In practice, a value of 16.0 will cover most cases. For various reasons, it is possible that some of the residuals become slightly larger, say for example 35.6. xhpl will flag those runs as failed; however, they can be considered correct. A run should be considered failed only if the residual is a few orders of magnitude bigger than 1, for example 10^6 or more. Note: if one specifies a threshold of 0.0, all tests will be flagged as failed, even though the answers are likely correct. It is allowed to specify a negative value for this threshold, in which case the checks are bypassed, no matter what the (negative) threshold value is. This feature saves time when performing a lot of experiments, for instance during the tuning phase. Example:
-16.0        threshold

--------------------------------------------------------------------------------
The remaining lines specify algorithmic features. xhpl will run all possible combinations of these for each problem size, block size, and process grid combination. This is handy when one is looking for an "optimal" set of parameters. To understand this a little better, let us first say a few words about the algorithm implemented in HPL. Basically, it is a right-looking variant with row-partial pivoting. The panel factorization is matrix-matrix operation based and recursive, dividing the panel into NDIV subpanels at each step. This part of the panel factorization is denoted below by "recursive panel factorization (RFACT)". The recursion stops when the current panel consists of NBMIN columns or fewer. At that point, xhpl uses a matrix-vector operation based factorization, denoted below by "PFACT". Classic recursion would use NDIV = 2 and NBMIN = 1. There are essentially 3 numerically equivalent LU factorization algorithm variants (left-looking, Crout, and right-looking). In HPL, one can choose any of these for the RFACT as well as the PFACT. The following lines of HPL.dat allow you to set those parameters.

Lines 14-21: (Example 1)
3            # of panel fact
0 1 2        PFACTs (0=left, 1=Crout, 2=Right)
4            # of recursive stopping criterium
1 2 4 8      NBMINs (>= 1)
3            # of panels in recursion
2 3 4        NDIVs
3            # of recursive panel fact.
0 1 2        RFACTs (0=left, 1=Crout, 2=Right)
This example would try all variants of PFACT; 4 values of NBMIN, namely 1, 2, 4 and 8; 3 values of NDIV, namely 2, 3 and 4; and all variants of RFACT.
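Since xhpl runs every combination of these algorithmic parameters with every problem size, block size and grid, the run count multiplies quickly. A quick sketch for the parameter counts of Example 1, combined with the problem sizes, block sizes and grids used earlier on this page:

```python
from itertools import product

# Parameter values from Example 1 (lines 14-21), combined with the
# problem sizes, block sizes and grids used earlier on this page.
ns     = [3000, 6000, 10000]
nbs    = [80, 100, 120, 140, 160]
grids  = [(1, 6), (2, 8)]
pfacts = [0, 1, 2]            # left, Crout, right
nbmins = [1, 2, 4, 8]
ndivs  = [2, 3, 4]
rfacts = [0, 1, 2]

runs = list(product(ns, nbs, grids, pfacts, nbmins, ndivs, rfacts))
print(len(runs))              # 3 * 5 * 2 * 3 * 4 * 3 * 3 = 3240
```

At the largest problem sizes each factorization takes minutes, so it pays to prune the parameter lists before launching a sweep like this.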

Lines 14-21: (Example 2)
2            # of panel fact
2 0          PFACTs (0=left, 1=Crout, 2=Right)
2            # of recursive stopping criterium
4 8          NBMINs (>= 1)
1            # of panels in recursion
2            NDIVs
1            # of recursive panel fact.
2            RFACTs (0=left, 1=Crout, 2=Right)
This example would try 2 variants of PFACT, namely right-looking and left-looking; 2 values of NBMIN, namely 4 and 8; 1 value of NDIV, namely 2; and one variant of RFACT.

--------------------------------------------------------------------------------
In the main loop of the algorithm, the current panel of columns is broadcast in process rows using a virtual ring topology. HPL offers various choices; one most likely wants to use the increasing-ring modified topology, encoded as 1. 3 and 4 are also good choices.

Lines 22-23: (Example 1)
1            # of broadcast
1            BCASTs (0=1rg,1=1rM,2=2rg,3=2rM,4=Lng,5=LnM)
This will cause HPL to broadcast the current panel using the increasing-ring modified topology.

Lines 22-23: (Example 2)
2            # of broadcast
0 4          BCASTs (0=1rg,1=1rM,2=2rg,3=2rM,4=Lng,5=LnM)
This will cause HPL to broadcast the current panel using the increasing-ring virtual topology and the long-message algorithm.

--------------------------------------------------------------------------------
Lines 24-25 allow one to specify the look-ahead depth used by HPL. A depth of 0 means that the next panel is factorized after the update by the current panel is completely finished. A depth of 1 means that the next panel is factorized immediately after being updated; the update by the current panel is then finished. A depth of k means that the k next panels are factorized immediately after being updated; the update by the current panel is then finished. It turns out that a depth of 1 seems to give the best results, but may need a large problem size before the performance gain becomes visible. So use 1 if you do not know better; otherwise you may want to try 0. Look-ahead depths of 3 and larger will probably not give better results.

Lines 24-25: (Example 1)
1            # of lookahead depth
1            DEPTHs (>=0)
This will cause HPL to use a look-ahead of depth 1.

Lines 24-25: (Example 2)
2            # of lookahead depth
0 1          DEPTHs (>=0)
This will cause HPL to use look-aheads of depths 0 and 1.

--------------------------------------------------------------------------------
Lines 26-27 allow one to specify the swapping algorithm used by HPL for all tests. There are currently two swapping algorithms available: one based on "binary exchange" and one based on a "spread-roll" procedure (also called "long" below). For large problem sizes, the latter is likely to be more efficient. The user can also choose to mix both variants, that is, "binary exchange" for a number of columns less than a threshold value, and the "spread-roll" algorithm afterwards. This threshold value is specified on line 27.

Lines 26-27: (Example 1)
1            SWAP (0=bin-exch,1=long,2=mix)
60           swapping threshold
This will cause HPL to use the "long" or "spread-roll" swapping algorithm. Note that a threshold is specified in this example but not used by HPL.

Lines 26-27: (Example 2)
2            SWAP (0=bin-exch,1=long,2=mix)
60           swapping threshold
This will cause HPL to use the "long" or "spread-roll" swapping algorithm as soon as there are more than 60 columns in the row panel. Otherwise, the "binary exchange" algorithm will be used.

--------------------------------------------------------------------------------
Line 28 allows one to specify whether the upper triangle of the panel of columns should be stored in no-transposed or transposed form. Example:
0            L1 in (0=transposed,1=no-transposed) form

--------------------------------------------------------------------------------
Line 29 allows one to specify whether the panel of rows U should be stored in no-transposed or transposed form. Example:
0            U  in (0=transposed,1=no-transposed) form

--------------------------------------------------------------------------------
Line 30 enables or disables the equilibration phase. This option is not used unless you selected 1 or 2 on line 26. Example:
1            Equilibration (0=no,1=yes)

--------------------------------------------------------------------------------
Line 31 allows one to specify the alignment in memory for the memory space allocated by HPL. On modern machines, one probably wants to use 4, 8 or 16. This may result in a tiny amount of wasted memory. Example:
8            memory alignment in double (> 0)

--------------------------------------------------------------------------------

Guidelines
Figure out a good block size for the matrix-matrix multiply routine. The best method is to try a few out. If you happen to know the block size used by the matrix-matrix multiply routine, a small multiple of that block size will do fine. This topic is also discussed in the FAQs section.

The process mapping should not matter if the nodes of your platform are single-processor computers. If the nodes are multiprocessors, a row-major mapping is recommended.

HPL likes "square" or slightly flat process grids. Unless you are using a very small process grid, stay away from the 1-by-Q and P-by-1 process grids. This topic is also discussed in the FAQs section.

Panel factorization parameters: a good starting point for lines 14-21 is:
1            # of panel fact
1            PFACTs (0=left, 1=Crout, 2=Right)
2            # of recursive stopping criterium
4 8          NBMINs (>= 1)
1            # of panels in recursion
2            NDIVs
1            # of recursive panel fact.
2            RFACTs (0=left, 1=Crout, 2=Right)
Broadcast parameters: at this time it is far from obvious to me what the best setting is, so I would probably try them all. If I had to guess, I would start with the following for lines 22-23:
2            # of broadcast
1 3          BCASTs (0=1rg,1=1rM,2=2rg,3=2rM,4=Lng,5=LnM)
The best broadcast depends on your problem size and hardware performance. My take is that 4 or 5 may be competitive for machines with very fast nodes relative to the network.

Look-ahead depth: as mentioned above, 0 or 1 are likely to be the best choices. This also depends on the problem size and machine configuration, so I would try "no look-ahead (0)" and "look-ahead of depth 1 (1)". That is, for lines 24-25:
2            # of lookahead depth
0 1          DEPTHs (>=0)
Swapping: one can select only one of the three algorithms in the input file. Theoretically, mix (2) should win; however, long (1) might be good enough. The difference between the two should be small, assuming a swapping threshold on the order of the selected block size (NB). If this threshold is very large, HPL will use bin-exch (0) most of the time, and if it is very small (< NB), long (1) will always be used. In short, assuming the block size used is, say, 60, I would choose the following for lines 26-27:
2            SWAP (0=bin-exch,1=long,2=mix)
60           swapping threshold
I would also try the long variant. For a very small number of processes in every column of the process grid (say < 4), very little performance difference should be observable.

Local storage: I do not think line 28 matters; pick 0 if in doubt. Line 29 is more important. It controls how the panel of rows should be stored. No doubt 0 is better. The caveat is that in that case the matrix-multiply function is called with (notrans, trans, ...), that is, C := C - A B^T. Unless the computational kernel you are using has a very poor (performance-wise) implementation of that case and is much more efficient with (notrans, notrans, ...), just pick 0 as well. So, my choice:
0            L1 in (0=transposed,1=no-transposed) form
0            U  in (0=transposed,1=no-transposed) form
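The update described above, C := C - A B^T, can be spelled out with a small numpy sketch (the shapes are illustrative; HPL itself makes the equivalent BLAS dgemm call with those transpose flags):

```python
import numpy as np

rng = np.random.default_rng(1)
A = rng.standard_normal((4, 2))   # panel of columns (illustrative shape)
B = rng.standard_normal((3, 2))   # U stored row-wise, i.e. "transposed"
C = rng.standard_normal((4, 3))   # trailing submatrix block

# The (notrans, trans) update: C := C - A * B^T
C_new = C - A @ B.T

# Check against an explicit element-wise computation.
ref = C.copy()
for i in range(4):
    for j in range(3):
        ref[i, j] -= A[i] @ B[j]
assert np.allclose(C_new, ref)
```

Storing U row-wise means each dot product above walks both operands contiguously, which is why this case is usually fast in a good BLAS.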
Equilibration: it is hard to tell whether equilibration should always be performed or not. Not knowing much about the random matrix generated, and because the overhead is so small compared to the possible gain, I turn it on all the time:
1            Equilibration (0=no,1=yes)
For alignment, 4 should be plenty, but just to be safe, one may want to pick 8 instead:
8            memory alignment in double (> 0)
