This file is indexed.

/usr/share/doc/simgrid/html/options.html is in simgrid-doc 3.14.159-2.

This file is owned by root:root, with mode 0o644.

The actual contents of the file can be viewed below.

<!DOCTYPE html PUBLIC "-//W3C//DTD XHTML 1.0 Transitional//EN" "http://www.w3.org/TR/xhtml1/DTD/xhtml1-transitional.dtd">
<html xmlns="http://www.w3.org/1999/xhtml">
<head>
<meta http-equiv="Content-Type" content="text/xhtml;charset=UTF-8"/>
<meta http-equiv="X-UA-Compatible" content="IE=9"/>
<title>SimGrid: Configure SimGrid</title>
<link href="tabs.css" rel="stylesheet" type="text/css"/>
<script type="text/javascript" src="jquery.js"></script>
<script type="text/javascript" src="dynsections.js"></script>
<link href="navtree.css" rel="stylesheet" type="text/css"/>
<script type="text/javascript" src="resize.js"></script>
<script type="text/javascript" src="navtreedata.js"></script>
<script type="text/javascript" src="navtree.js"></script>
<script type="text/javascript">
  $(document).ready(initResizable);
</script>
<link href="search/search.css" rel="stylesheet" type="text/css"/>
<script type="text/javascript" src="search/searchdata.js"></script>
<script type="text/javascript" src="search/search.js"></script>
<script type="text/javascript">
  $(document).ready(function() { init_search(); });
</script>
<script type="text/x-mathjax-config">
  MathJax.Hub.Config({
    extensions: ["tex2jax.js"],
    jax: ["input/TeX","output/HTML-CSS"],
});
</script><script type="text/javascript" src="/usr/share/javascript/mathjax/MathJax.js/MathJax.js"></script>
<link href="stylesheet.css" rel="stylesheet" type="text/css" />
</head>
<body>
<div id="top"><!-- do not remove this div, it is closed by doxygen! -->
<div id="titlearea">
<table cellspacing="0" cellpadding="0">
 <tbody>
 <tr style="height: 56px;">
  <td style="padding-left: 0.5em;">
   <div id="projectname">SimGrid
   &#160;<span id="projectnumber">3.14.159</span>
   </div>
   <div id="projectbrief">Versatile Simulation of Distributed Systems</div>
  </td>
 </tr>
 </tbody>
</table>
</div>
<div id="navrow1" class="tabs">
    <ul class="tablist">
      <li><a href="http://simgrid.gforge.inria.fr/"><span>Home page</span></a></li>
      <li><a href="http://simgrid.gforge.inria.fr/documentation.html"><span>Online documentation</span></a></li>
      <li><a href="javadoc"><span>Java documentation</span></a></li>
      <li><a href="https://gforge.inria.fr/projects/simgrid"><span>Dev's Corner</span></a></li>
      <li>        <div id="MSearchBox" class="MSearchBoxInactive">
        <span class="left">
          <img id="MSearchSelect" src="search/mag_sel.png"
               onmouseover="return searchBox.OnSearchSelectShow()"
               onmouseout="return searchBox.OnSearchSelectHide()"
               alt=""/>
          <input type="text" id="MSearchField" value="Search" accesskey="S"
               onfocus="searchBox.OnSearchFieldFocus(true)" 
               onblur="searchBox.OnSearchFieldFocus(false)" 
               onkeyup="searchBox.OnSearchFieldChange(event)"/>
          </span><span class="right">
            <a id="MSearchClose" href="javascript:searchBox.CloseResultsWindow()"><img id="MSearchCloseImg" border="0" src="search/close.png" alt=""/></a>
          </span>
        </div>
</li>
    </ul>
  </div> 
<!-- end header part -->
<!-- Generated by Doxygen 1.8.13 -->
<script type="text/javascript">
var searchBox = new SearchBox("searchBox", "search",false,'Search');
</script>
</div><!-- top -->
<div id="side-nav" class="ui-resizable side-nav-resizable">
  <div id="nav-tree">
    <div id="nav-tree-contents">
      <div id="nav-sync" class="sync"></div>
    </div>
  </div>
  <div id="splitbar" style="-moz-user-select:none;" 
       class="ui-resizable-handle">
  </div>
</div>
<script type="text/javascript">
$(document).ready(function(){initNavTree('options.html','');});
</script>
<div id="doc-content">
<!-- window showing the filter options -->
<div id="MSearchSelectWindow"
     onmouseover="return searchBox.OnSearchSelectShow()"
     onmouseout="return searchBox.OnSearchSelectHide()"
     onkeydown="return searchBox.OnSearchSelectKey(event)">
</div>

<!-- iframe showing the search results (closed by default) -->
<div id="MSearchResultsWindow">
<iframe src="javascript:void(0)" frameborder="0" 
        name="MSearchResults" id="MSearchResults">
</iframe>
</div>

<div class="header">
  <div class="headertitle">
<div class="title">Configure SimGrid </div>  </div>
</div><!--header-->
<div class="contents">
<div class="toc"><h3>Table of Contents</h3>
<ul><li class="level1"><a href="#options_using">Passing configuration options to the simulators</a></li>
<li class="level1"><a href="#options_model">Configuring the platform models</a><ul><li class="level2"><a href="#options_model_select">Selecting the platform models</a></li>
<li class="level2"><a href="#options_generic_plugin">Plugins</a></li>
<li class="level2"><a href="#options_model_optim">Optimization level of the platform models</a></li>
<li class="level2"><a href="#options_model_precision">Numerical precision of the platform models</a></li>
<li class="level2"><a href="#options_concurrency_limit">Concurrency limit</a></li>
<li class="level2"><a href="#options_model_network">Configuring the Network model</a><ul><li class="level3"><a href="#options_model_network_gamma">Maximal TCP window size</a></li>
<li class="level3"><a href="#options_model_network_coefs">Correcting important network parameters</a></li>
<li class="level3"><a href="#options_model_network_crosstraffic">Simulating cross-traffic</a></li>
<li class="level3"><a href="#options_model_network_sendergap">Simulating sender gap</a></li>
<li class="level3"><a href="#options_model_network_asyncsend">Simulating asyncronous send</a></li>
<li class="level3"><a href="#options_pls">Configuring packet-level pseudo-models</a></li>
</ul>
</li>
<li class="level2"><a href="#options_model_storage">Configuring the Storage model</a><ul><li class="level3"><a href="#option_model_storage_maxfd">Maximum amount of file descriptors per host</a></li>
</ul>
</li>
</ul>
</li>
<li class="level1"><a href="#options_modelchecking">Configuring the Model-Checking</a><ul><li class="level2"><a href="#options_modelchecking_liveness">Specifying a liveness property</a></li>
<li class="level2"><a href="#options_modelchecking_steps">Going for stateful verification</a></li>
<li class="level2"><a href="#options_modelchecking_reduction">Specifying the kind of reduction</a></li>
<li class="level2"><a href="#options_modelchecking_visited">model-check/visited, Cycle detection</a></li>
<li class="level2"><a href="#options_modelchecking_termination">model-check/termination, Non termination detection</a></li>
<li class="level2"><a href="#options_modelchecking_dot_output">model-check/dot-output, Dot output</a></li>
<li class="level2"><a href="#options_modelchecking_max_depth">model-check/max_depth, Depth limit</a></li>
<li class="level2"><a href="#options_modelchecking_timeout">Handling of timeout</a></li>
<li class="level2"><a href="#options_modelchecking_comm_determinism">Communication determinism</a></li>
<li class="level2"><a href="#options_modelchecking_sparse_checkpoint">Per page checkpoints</a></li>
<li class="level2"><a href="#options_mc_perf">Performance considerations for the model checker</a></li>
<li class="level2"><a href="#options_modelchecking_hash">Hashing of the state (experimental)</a></li>
<li class="level2"><a href="#options_modelchecking_recordreplay">Record/replay (experimental)</a></li>
</ul>
</li>
<li class="level1"><a href="#options_virt">Configuring the User Process Virtualization</a><ul><li class="level2"><a href="#options_virt_factory">Selecting the virtualization factory</a></li>
<li class="level2"><a href="#options_virt_stacksize">Adapting the used stack size</a></li>
<li class="level2"><a href="#options_virt_parallel">Running user code in parallel</a></li>
</ul>
</li>
<li class="level1"><a href="#options_tracing">Configuring the tracing subsystem</a></li>
<li class="level1"><a href="#options_msg">Configuring MSG</a><ul><li class="level2"><a href="#options_msg_debug_multiple_use">Debugging MSG</a></li>
</ul>
</li>
<li class="level1"><a href="#options_smpi">Configuring SMPI</a><ul><li class="level2"><a href="#options_smpi_bench">smpi/bench: Automatic benchmarking of SMPI code</a></li>
<li class="level2"><a href="#options_model_smpi_adj_file">smpi/comp-adjustment-file: Slow-down or speed-up parts of your code.</a></li>
<li class="level2"><a href="#options_model_smpi_bw_factor">smpi/bw-factor: Bandwidth factors</a></li>
<li class="level2"><a href="#options_smpi_timing">smpi/display-timing: Reporting simulation time</a></li>
<li class="level2"><a href="#options_model_smpi_lat_factor">smpi/lat-factor: Latency factors</a></li>
<li class="level2"><a href="#options_smpi_papi_events">smpi/papi-events: Trace hardware counters with PAPI</a></li>
<li class="level2"><a href="#options_smpi_global">smpi/privatize-global-variables: Automatic privatization of global variables</a></li>
<li class="level2"><a href="#options_model_smpi_detached">Simulating MPI detached send</a></li>
<li class="level2"><a href="#options_model_smpi_collectives">Simulating MPI collective algorithms</a></li>
<li class="level2"><a href="#options_model_smpi_iprobe">smpi/iprobe: Inject constant times for calls to MPI_Iprobe</a></li>
<li class="level2"><a href="#options_model_smpi_init">smpi/init: Inject constant times for calls to MPI_Init</a></li>
<li class="level2"><a href="#options_model_smpi_ois">smpi/ois: Inject constant times for asynchronous send operations</a></li>
<li class="level2"><a href="#options_model_smpi_os">smpi/os: Inject constant times for send operations</a></li>
<li class="level2"><a href="#options_model_smpi_or">smpi/or: Inject constant times for receive operations</a></li>
<li class="level2"><a href="#options_model_smpi_test">smpi/test: Inject constant times for calls to MPI_Test</a></li>
<li class="level2"><a href="#options_model_smpi_use_shared_malloc">smpi/use-shared-malloc: Factorize malloc()s</a></li>
<li class="level2"><a href="#options_model_smpi_wtime">smpi/wtime: Inject constant times for calls to MPI_Wtime</a></li>
</ul>
</li>
<li class="level1"><a href="#options_generic">Configuring other aspects of SimGrid</a><ul><li class="level2"><a href="#options_generic_clean_atexit">Cleanup before termination</a></li>
<li class="level2"><a href="#options_generic_path">XML file inclusion path</a></li>
<li class="level2"><a href="#options_generic_exit">Behavior on Ctrl-C</a></li>
<li class="level2"><a href="#options_exception_cutpath">Truncate local path from exception backtrace</a></li>
</ul>
</li>
<li class="level1"><a href="#options_log">Logging Configuration</a></li>
<li class="level1"><a href="#options_perf">Performance optimizations</a><ul><li class="level2"><a href="#options_perf_context">Context factory</a></li>
<li class="level2"><a href="#options_perf_guard_size">Disabling stack guard pages</a></li>
</ul>
</li>
<li class="level1"><a href="#options_index">Index of all existing configuration options</a><ul><li class="level2"><a href="#options_index_smpi_coll">Index of SMPI collective algorithms options</a></li>
</ul>
</li>
</ul>
</div>
<div class="textblock"><p>A number of options can be given at runtime to change the default SimGrid behavior. For a complete list of all configuration options accepted by the SimGrid version used in your simulator, simply pass the &ndash;help configuration flag to your program. If some of the options are not documented on this page, this is a bug that you should please report so that we can fix it. Note that some of the options presented here may not be available in your simulators, depending on the <a class="el" href="install.html#install_src_config">compile-time options</a> that you used.</p>
<h1><a class="anchor" id="options_using"></a>
Passing configuration options to the simulators</h1>
<p>There are several ways to pass configuration options to the simulators. The most common one is to use the <code>--cfg</code> command line argument. For example, to set the item <code>Item</code> to the value <code>Value</code>, simply type the following:</p><pre class="fragment">my_simulator --cfg=Item:Value (other arguments)
</pre><p>Several <code>--cfg</code> command line arguments can naturally be used. If you need to include spaces in the argument, don't forget to quote it. You can even escape the included quotes (write \' for ' if your argument is itself between ').</p>
<p>Another solution is to use the <code>&lt;config&gt;</code> tag in the platform file. The only restriction is that this tag must occur before the first platform element (be it <code>&lt;AS&gt;</code>, <code>&lt;cluster&gt;</code>, <code>&lt;peer&gt;</code> or whatever). The <code>&lt;config&gt;</code> tag takes an <code>id</code> attribute, but it is currently ignored so you don't really need to pass it. The important part is that within that tag, you can pass one or several <code>&lt;prop&gt;</code> tags to specify the configuration to use. For example, setting <code>Item</code> to <code>Value</code> can be done by adding the following to the beginning of your platform file: </p><pre class="fragment">&lt;config&gt;
  &lt;prop id="Item" value="Value"/&gt;
&lt;/config&gt;
</pre><p>A last solution is to pass your configuration directly using the C interface. If you happen to use the MSG interface, this is very easy with the <a class="el" href="group__msg__simulation.html#ga35037b57281f860b92ed7704d37de78f" title="set a configuration variable ">MSG_config()</a> function. If you do not use MSG, that's a bit more complex, as you have to mess with the internal configuration set directly as follows. Check the <a class="el" href="group__XBT__config.html">relevant page</a> for details on all the functions you can use in this context, <code>_sg_cfg_set</code> being the only configuration set currently used in SimGrid.</p>
<div class="fragment"><div class="line"><span class="preprocessor">#include &lt;<a class="code" href="config_8h.html">xbt/config.h</a>&gt;</span></div><div class="line"></div><div class="line"><span class="keyword">extern</span> <a class="code" href="group__XBT__cfg__use.html#gac5894d3947cb042db07d729ebfe064ca">xbt_cfg_t</a> _sg_cfg_set;</div><div class="line"></div><div class="line"><span class="keywordtype">int</span> <a class="code" href="simgrid__units__main_8c.html#a0ddf1224851353fc92bfbff6f499fa97">main</a>(<span class="keywordtype">int</span> <a class="code" href="smpi__mpi_8cpp.html#ae49201de8b92b4ec0f93ff1f92acaa06">argc</a>, <span class="keywordtype">char</span> *<a class="code" href="smpi_8h.html#aa56fbd45be3b015aa96e8716d7fd9f5a">argv</a>[]) {</div><div class="line">     <a class="code" href="group__SD__simulation.html#gaa4c19ede9d99b8925e62f30c52f1e65f">SD_init</a>(&amp;argc, argv);</div><div class="line"></div><div class="line">     <span class="comment">/* Prefer MSG_config() if you use MSG!! */</span></div><div class="line">     <a class="code" href="group__XBT__cfg__use.html#ga2e47237697a0b38da8d00b9e141e0774">xbt_cfg_set_parse</a>(_sg_cfg_set,<span class="stringliteral">&quot;Item:Value&quot;</span>);</div><div class="line"></div><div class="line">     <span class="comment">// Rest of your code</span></div><div class="line">}</div></div><!-- fragment --><h1><a class="anchor" id="options_model"></a>
Configuring the platform models</h1>
<p><a class="anchor" id="options_storage_model"></a><a class="anchor" id="options_vm_model"></a></p>
<h2><a class="anchor" id="options_model_select"></a>
Selecting the platform models</h2>
<p>SimGrid comes with several network, CPU and storage models built in, and you can change the used model at runtime by changing the passed configuration. The main configuration items are given below; an example command line follows the list. For each of these items, passing the special <code>help</code> value gives you a short description of all possible values. Also, <code>--help-models</code> should provide information about all models for all existing resources.</p><ul>
<li><b>network/model</b>: specify the used network model</li>
<li><b>cpu/model</b>: specify the used CPU model</li>
<li><b>host/model</b>: specify the used host model</li>
<li><b>storage/model</b>: specify the used storage model (there is currently only one such model - this option is hence only useful for future releases)</li>
<li><b>vm/model</b>: specify the model for virtual machines (there is currently only one such model - this option is hence only useful for future releases)</li>
</ul>
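<p>For instance, the following (illustrative) command lines select the SMPI network model together with the Cas01 CPU model, or ask for the list of accepted values; the platform and deployment file names are placeholders:</p>
<pre class="fragment">my_simulator --cfg=network/model:SMPI --cfg=cpu/model:Cas01 platform.xml deployment.xml
my_simulator --cfg=network/model:help     # lists the values accepted by your installation
</pre>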
<p>As of this writing, the following network models are accepted. Over time, new models may be added and experimental models removed; check the values on your simulator for up-to-date information. Note that the CM02 model is described in the research report <a href="ftp://ftp.ens-lyon.fr/pub/LIP/Rapports/RR/RR2002/RR2002-40.ps.gz">A Network Model for Simulation of Grid Application</a> while LV08 is described in <a href="http://mescal.imag.fr/membres/arnaud.legrand/articles/simutools09.pdf">Accuracy Study and Improvement of Network Simulation in the SimGrid Framework</a>.</p>
<ul>
<li><b>LV08</b> (default one): Realistic network analytic model (slow-start modeled by multiplying latency by 10.4, bandwidth by .92; bottleneck sharing uses a payload of S=8775 for evaluating RTT)</li>
<li><a class="anchor" id="options_model_select_network_constant"></a><b>Constant:</b> Simplistic network model where all communication take a constant time (one second). This model provides the lowest realism, but is (marginally) faster.</li>
<li><b>SMPI:</b> Realistic network model specifically tailored for HPC settings (accurate modeling of slow start with correction factors on three intervals: &lt; 1KiB, &lt; 64 KiB, &gt;= 64 KiB). See also <a class="el" href="options.html#options_model_network_coefs">this section</a> for more info.</li>
<li><b>IB:</b> Realistic network model specifically tailored for HPC settings with InfiniBand networks (accurate modeling contention behavior, based on the model explained in <a href="http://mescal.imag.fr/membres/jean-marc.vincent/index.html/PhD/Vienne.pdf">http://mescal.imag.fr/membres/jean-marc.vincent/index.html/PhD/Vienne.pdf</a>). See also <a class="el" href="options.html#options_model_network_coefs">this section</a> for more info.</li>
<li><b>CM02:</b> Legacy network analytic model (Very similar to LV08, but without corrective factors. The timings of small messages are thus poorly modeled)</li>
<li><b>Reno:</b> Model from Steven H. Low using lagrange_solve instead of lmm_solve (experts only; check the code for more info).</li>
<li><b>Reno2:</b> Model from Steven H. Low using lagrange_solve instead of lmm_solve (experts only; check the code for more info).</li>
<li><b>Vegas:</b> Model from Steven H. Low using lagrange_solve instead of lmm_solve (experts only; check the code for more info).</li>
</ul>
<p>If you compiled SimGrid accordingly, you can use packet-level network simulators as network models (see <a class="el" href="pls_ns3.html">ns-3 as a SimGrid model</a>). In that case, you get the extra model described below, and some <a class="el" href="options.html#options_pls">specific</a> additional configuration flags.</p><ul>
<li><b>NS3:</b> Network pseudo-model using the NS3 tcp model</li>
</ul>
<p>Concerning the CPU, we have only one model for now:</p><ul>
<li><b>Cas01:</b> Simplistic CPU model (time=size/power)</li>
</ul>
<p>The host concept is the aggregation of a CPU with a network card. Three models exist, but actually only two of them are interesting. The "compound" one is simply due to the way our internal code is organized, and can easily be ignored. So in the end, you have two host models: the default one allows you to aggregate an existing CPU model with an existing network model, but does not allow parallel tasks because these beasts need some collaboration between the network and CPU models. That is why ptask_L07 is used by default when using SimDag.</p><ul>
<li><b>default:</b> Default host model. Currently, CPU:Cas01 and network:LV08 (with cross traffic enabled)</li>
<li><b>compound:</b> Host model that is automatically chosen if you change the network and CPU models</li>
<li><b>ptask_L07:</b> Host model somewhat similar to Cas01+CM02 but allowing "parallel tasks", which are intended to model the moldable tasks of the grid scheduling literature.</li>
</ul>
<h2><a class="anchor" id="options_generic_plugin"></a>
Plugins</h2>
<p>SimGrid supports the use of plugins; currently, no known plugins can be activated but there are use-cases where you may want to write your own plugin (for instance, for logging).</p>
<p>Plugins can for instance define their own classes that inherit from existing ones (for instance, a class "CpuEnergy" inherits from "Cpu" to assess energy consumption).</p>
<p>The plugin connects to the code by registering callbacks using <code>signal.connect(callback)</code> (see file <code><a class="el" href="energy_8cpp.html">src/surf/plugins/energy.cpp</a></code> for details).</p>
<pre class="fragment">    --cfg=plugin:Energy
</pre><dl class="section note"><dt>Note</dt><dd>This option is case-sensitive: Energy and energy are not the same!</dd></dl>
<h2><a class="anchor" id="options_model_optim"></a>
Optimization level of the platform models</h2>
<p>The network and CPU models that are based on lmm_solve (that is, all our analytical models) accept specific optimization configurations.</p><ul>
<li>items <b>network/optim</b> and <b>cpu/optim</b> (both default to 'Lazy'):<ul>
<li><b>Lazy:</b> Lazy action management (partial invalidation in lmm + heap in action remaining).</li>
<li><b>TI:</b> Trace integration. Highly optimized mode when using availability traces (only available for the Cas01 CPU model for now).</li>
<li><b>Full:</b> Full update of remaining and variables. Slow but may be useful when debugging.</li>
</ul>
</li>
<li>items <b>network/maxmin-selective-update</b> and <b>cpu/maxmin-selective-update</b>: configure whether the underlying model should be lazily updated or not. It should have no impact on the computed timings, but should speed up the computation.</li>
</ul>
<p>It is still possible to disable the <code>maxmin-selective-update</code> feature because it can prove counter-productive in very specific scenarios where the interaction level is high. In particular, if all your communications share a given backbone link, you should disable it: without <code>maxmin-selective-update</code>, every communication is updated at each step through a simple loop over them. With that feature enabled, every communication will still get updated in this case (because of the dependency induced by the backbone), but through a complicated pattern aiming at following the actual dependencies.</p>
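<p>As a sketch (assuming the 0/1 boolean syntax used elsewhere on this page), the following switches the network model to the Full optimization mode and disables selective updates:</p>
<pre class="fragment">--cfg=network/optim:Full --cfg=network/maxmin-selective-update:0
</pre>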
<h2><a class="anchor" id="options_model_precision"></a>
Numerical precision of the platform models</h2>
<p>The analytical models handle a lot of floating point values. It is possible to change the epsilon used to update and compare them through the <b>maxmin/precision</b> item (default value: 0.00001). Changing it may speed up the simulation by discarding very small actions, at the price of a reduced numerical precision.</p>
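<p>For example, to relax the precision by one order of magnitude (illustrative value):</p>
<pre class="fragment">--cfg=maxmin/precision:0.0001
</pre>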
<h2><a class="anchor" id="options_concurrency_limit"></a>
Concurrency limit</h2>
<p>The maximum number of variables in a system can be tuned through the <b>maxmin/concurrency_limit</b> item (default value: 100). Setting a higher value can lift some limitations, such as the number of concurrent processes running on a single host.</p>
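<p>For example, to raise the limit (illustrative value):</p>
<pre class="fragment">--cfg=maxmin/concurrency_limit:1000
</pre>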
<h2><a class="anchor" id="options_model_network"></a>
Configuring the Network model</h2>
<h3><a class="anchor" id="options_model_network_gamma"></a>
Maximal TCP window size</h3>
<p>The analytical models need to know the maximal TCP window size to take the TCP congestion mechanism into account. This is set to 20000 by default, but can be changed using the <b>network/TCP-gamma</b> item.</p>
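<p>For example, to declare a larger maximal window (illustrative value, in the same unit as the 20000 default):</p>
<pre class="fragment">--cfg=network/TCP-gamma:65536
</pre>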
<p>On Linux, this value can be retrieved using the following commands. Both give a set of values, and you should use the last one, which is the maximal size.</p><pre class="fragment">cat /proc/sys/net/ipv4/tcp_rmem # gives the receiver window
cat /proc/sys/net/ipv4/tcp_wmem # gives the sender window
</pre><h3><a class="anchor" id="options_model_network_coefs"></a>
Correcting important network parameters</h3>
<p>SimGrid can take network irregularities such as a slow startup or changing behavior depending on the message size into account. You should not change these values unless you really know what you're doing.</p>
<p>The corresponding values were computed through data fitting on the timings of packet-level simulators.</p>
<p>See <a href="http://mescal.imag.fr/membres/arnaud.legrand/articles/simutools09.pdf">Accuracy Study and Improvement of Network Simulation in the SimGrid Framework</a> for more information about these parameters.</p>
<p>If you are using the SMPI model, these correction coefficients are themselves corrected by constant values depending on the size of the exchange. Again, only hardcore experts should bother about this fact.</p>
<p>InfiniBand network behavior can be modeled through 3 parameters, as explained in <a href="http://mescal.imag.fr/membres/jean-marc.vincent/index.html/PhD/Vienne.pdf">this PhD thesis</a>. These factors can be changed through the following option:</p>
<pre class="fragment">smpi/IB-penalty-factors:"βe;βs;γs"
</pre><p>By default SMPI uses factors computed on the Stampede Supercomputer at TACC, with optimal deployment of processes on nodes.</p>
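<p>A minimal sketch of how to pass these factors on the command line (the numerical values below are purely illustrative, not the Stampede defaults):</p>
<pre class="fragment">--cfg=smpi/IB-penalty-factors:"0.97;0.92;1.34"
</pre>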
<h3><a class="anchor" id="options_model_network_crosstraffic"></a>
Simulating cross-traffic</h3>
<p>As of SimGrid v3.7, cross-traffic effects can be taken into account in analytical simulations. It means that outgoing and incoming communication flows are treated independently. In addition, the LV08 model adds 0.05 of usage in the opposite direction for each newly created flow. This can be useful to simulate some important TCP phenomena such as ACK compression.</p>
<p>For that to work, your platform must have two links for each pair of interconnected hosts. An example of usable platform is available in <code>examples/platforms/crosstraffic.xml</code>.</p>
<p>This is activated through the <b>network/crosstraffic</b> item, that can be set to 0 (disable this feature) or 1 (enable it).</p>
<p>Note that with the default host model this option is activated by default.</p>
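<p>For example, to explicitly disable cross-traffic modeling:</p>
<pre class="fragment">--cfg=network/crosstraffic:0
</pre>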
<h3><a class="anchor" id="options_model_network_sendergap"></a>
Simulating sender gap</h3>
<p>(this configuration item is experimental and may change or disappear)</p>
<p>It is possible to specify a timing gap between consecutive emissions on the same network card through the <b>network/sender-gap</b> item. This is still under investigation as of writing, and the default value is to wait 10 microseconds (1e-5 seconds) between emissions.</p>
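<p>For example, to double the default gap (illustrative value, in seconds):</p>
<pre class="fragment">--cfg=network/sender-gap:2e-5
</pre>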
<h3><a class="anchor" id="options_model_network_asyncsend"></a>
Simulating asynchronous send</h3>
<p>(this configuration item is experimental and may change or disappear)</p>
<p>It is possible to specify that messages below a certain size will be sent as soon as the call to MPI_Send is issued, without waiting for the corresponding receive. This threshold can be configured through the <b>smpi/async-small-thresh</b> item. The default value is 0. This behavior can also be manually set for MSG mailboxes, by setting the receiving mode of the mailbox with a call to <a class="el" href="group__msg__mailbox__management.html#ga6f960676ac24fb9c64da2bfcc6f24da8">MSG_mailbox_set_async</a>. For MSG, all messages sent to this mailbox will have this behavior, so consider using two mailboxes if needed.</p>
<p>This value needs to be smaller than or equal to the threshold set at <a class="el" href="options.html#options_model_smpi_detached">Simulating MPI detached send</a>, because asynchronous messages are meant to be detached as well.</p>
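<p>For example, to send all messages below 64 KiB asynchronously (illustrative threshold):</p>
<pre class="fragment">--cfg=smpi/async-small-thresh:65536
</pre>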
<h3><a class="anchor" id="options_pls"></a>
Configuring packet-level pseudo-models</h3>
<p>When using the packet-level pseudo-models, several specific configuration flags are provided to configure the associated tools. There are by far not enough such SimGrid flags to cover every aspect of the associated tools, since we only added the items that we needed ourselves. Feel free to request more items (or even better: provide patches adding more items).</p>
<p>When using NS3, the only existing item is <b>ns3/TcpModel</b>, corresponding to the ns3::TcpL4Protocol::SocketType configuration item in NS3. The only valid values (enforced on the SimGrid side) are 'NewReno', 'Reno' and 'Tahoe'.</p>
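<p>For example:</p>
<pre class="fragment">--cfg=ns3/TcpModel:NewReno
</pre>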
<h2><a class="anchor" id="options_model_storage"></a>
Configuring the Storage model</h2>
<h3><a class="anchor" id="option_model_storage_maxfd"></a>
Maximum amount of file descriptors per host</h3>
<p>Each host maintains a fixed-size array of its file descriptors. You can change its size (1024 by default) through the <b>storage/max_file_descriptors</b> item to either enlarge it if your application requires it or to reduce it to save memory space.</p>
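<p>For example, to enlarge the table (illustrative value):</p>
<pre class="fragment">--cfg=storage/max_file_descriptors:4096
</pre>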
<h1><a class="anchor" id="options_modelchecking"></a>
Configuring the Model-Checking</h1>
<p>To enable the SimGrid model-checking support the program should be executed using the simgrid-mc wrapper: </p><pre class="fragment">simgrid-mc ./my_program
</pre><p>Safety properties are expressed as assertions using the function </p><pre class="fragment">void MC_assert(int prop);
</pre><h2><a class="anchor" id="options_modelchecking_liveness"></a>
Specifying a liveness property</h2>
<p>If you want to specify liveness properties (beware, that's experimental), you have to pass them on the command line, specifying the name of the file containing the property, as formatted by the ltl2ba program.</p>
<pre class="fragment">--cfg=model-check/property:&lt;filename&gt;
</pre><h2><a class="anchor" id="options_modelchecking_steps"></a>
Going for stateful verification</h2>
<p>By default, the system is backtracked to its initial state to explore another path, instead of backtracking to the exact step before the fork that we want to explore (this is called stateless verification). This is done this way because saving intermediate states can rapidly exhaust the available memory. If you want, you can change the value of the <code>model-check/checkpoint</code> variable. For example, the following configuration will ask to take a checkpoint every step. Beware, this will certainly explode your memory. Larger values are probably better; make sure to experiment a bit to find the right setting for your specific system.</p>
<pre class="fragment">--cfg=model-check/checkpoint:1
</pre><h2><a class="anchor" id="options_modelchecking_reduction"></a>
Specifying the kind of reduction</h2>
<p>The main issue when using the model-checking is the state space explosion. To counter that problem, several exploration reduction techniques can be used. There is unfortunately no silver bullet here, and the most efficient reduction techniques cannot be applied to every kind of property. In particular, the DPOR method cannot be applied to liveness properties since it may break some cycles in the exploration that are important to the property validity.</p>
<pre class="fragment">--cfg=model-check/reduction:&lt;technique&gt;
</pre><p>For now, this configuration variable can take two values:</p><ul>
<li><b>none:</b> Do not apply any kind of reduction (mandatory for now for liveness properties)</li>
<li><b>dpor:</b> Apply Dynamic Partial Ordering Reduction. Only valid if you verify local safety properties.</li>
</ul>
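<p>For example, to explicitly request DPOR when verifying a safety property:</p>
<pre class="fragment">simgrid-mc ./my_program --cfg=model-check/reduction:dpor
</pre>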
<h2><a class="anchor" id="options_modelchecking_visited"></a>
model-check/visited, Cycle detection</h2>
<p>In order to detect cycles, the model-checker needs to check if a newly explored state is in fact the same state as a previous one. In order to do this, the model-checker can take a snapshot of each visited state: this snapshot is then used to compare it with subsequent states in the exploration graph.</p>
<p>The <b>model-check/visited</b> item is the maximum number of states which are stored in memory. If the maximum number of snapshotted states is reached, some states will be removed from memory and some cycles might be missed.</p>
<p>By default, no state is snapshotted and cycles cannot be detected.</p>
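<p>For example, to keep up to 1000 snapshotted states in memory (illustrative value):</p>
<pre class="fragment">--cfg=model-check/visited:1000
</pre>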
<h2><a class="anchor" id="options_modelchecking_termination"></a>
model-check/termination, Non termination detection</h2>
<p>The <b>model-check/termination</b> configuration item can be used to report whether a non-terminating execution path has been found. This is a path with a cycle, which means that the program might never terminate.</p>
<p>This only works in safety mode.</p>
<p>This option is disabled by default.</p>
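<p>To enable it (assuming the yes/no syntax used by the other model-checking booleans on this page):</p>
<pre class="fragment">--cfg=model-check/termination:yes
</pre>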
<h2><a class="anchor" id="options_modelchecking_dot_output"></a>
model-check/dot-output, Dot output</h2>
<p>If set, the <b>model-check/dot-output</b> configuration item is the name of a file in which to write a dot file of the path leading to the found property violation (safety or liveness), as well as the cycle for liveness properties. This dot file can then be fed to the graphviz dot tool to generate a corresponding graphical representation.</p>
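<p>For example (the file name is a placeholder), followed by a standard graphviz command to render the result:</p>
<pre class="fragment">--cfg=model-check/dot-output:counter_example.dot
dot -Tpdf counter_example.dot -o counter_example.pdf
</pre>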
<h2><a class="anchor" id="options_modelchecking_max_depth"></a>
model-check/max_depth, Depth limit</h2>
<p>The <b>model-check/max_depth</b> item sets the maximum depth of the exploration graph of the model-checker. If this limit is reached, a logging message is emitted and the results might not be exact.</p>
<p>By default, there is no depth limit.</p>
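<p>For example, using the item name from the heading above (illustrative depth value):</p>
<pre class="fragment">--cfg=model-check/max_depth:1000
</pre>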
<h2><a class="anchor" id="options_modelchecking_timeout"></a>
Handling of timeout</h2>
<p>By default, the model-checker does not handle timeout conditions: the <code>wait</code> operations never time out. With the <b>model-check/timeout</b> configuration item set to <b>yes</b>, the model-checker will explore timeouts of <code>wait</code> operations.</p>
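<p>For example:</p>
<pre class="fragment">--cfg=model-check/timeout:yes
</pre>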
<h2><a class="anchor" id="options_modelchecking_comm_determinism"></a>
Communication determinism</h2>
<p>The <b>model-check/communications-determinism</b> and <b>model-check/send-determinism</b> items can be used to select the communication determinism mode of the model-checker which checks determinism properties of the communications of an application.</p>
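<p>A sketch (assuming the yes/no boolean syntax used by the other model-checking items):</p>
<pre class="fragment">--cfg=model-check/send-determinism:yes
</pre>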
<h2><a class="anchor" id="options_modelchecking_sparse_checkpoint"></a>
Per page checkpoints</h2>
<p>When the model-checker is configured to take a snapshot of each explored state (with the <b>model-check/visited</b> item), the memory consumption can rapidly reach GiB or even TiB of memory. However, for many workloads, the memory does not change much between different snapshots, and taking a complete copy of each snapshot is a waste of memory.</p>
<p>The <b>model-check/sparse-checkpoint</b> option can be set to <b>yes</b> in order to avoid making a complete copy of each snapshot: instead, each snapshot will be decomposed into blocks which will be stored separately. If multiple snapshots share the same block (or if the same block is used several times within a single snapshot), the same copy of the block will be shared, leading to a reduction of the memory footprint.</p>
<p>For many applications, this option considerably reduces the memory consumption. In some cases, the model-checker might be slightly slower because of the time taken to manage the metadata about the blocks. In other cases however, this snapshotting strategy will be much faster by reducing the cache consumption. When memory consumption is high, this option might be much faster than the basic snapshotting strategy because it avoids hitting the swap or reduces the swap usage.</p>
<p>This option is currently disabled by default.</p>
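<p>To enable it:</p>
<pre class="fragment">--cfg=model-check/sparse-checkpoint:yes
</pre>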
<h2><a class="anchor" id="options_mc_perf"></a>
Performance considerations for the model checker</h2>
<p>The size of the stacks can have a huge impact on the memory consumption when using model-checking. By default, each snapshot will save a copy of the whole stacks and not only of the part which is really meaningful: you should expect the contribution of the memory consumption of the snapshots to be \( \mbox{number of processes} \times \mbox{stack size} \times \mbox{number of states} \).</p>
<p>The <b>model-check/sparse-checkpoint</b> can be used to reduce the memory consumption by trying to share memory between the different snapshots.</p>
<p>When compiled against the model checker, the stacks are not protected with guards: if the stack size is too small for your application, the stack will silently overflow on other parts of the memory.</p>
<h2><a class="anchor" id="options_modelchecking_hash"></a>
Hashing of the state (experimental)</h2>
<p>Usually most of the time of the model-checker is spent comparing states. This process is complicated and consumes a lot of bandwidth and cache. In order to speed up the state comparison, the experimental <b>model-checker/hash</b> configuration item enables the computation of a hash summarizing as much information of the state as possible into a single value. This hash can be used to avoid most of the comparisons: the costly comparison is then only used when the hashes are identical.</p>
<p>Currently most of the state is not included in the hash because the implementation was found to be buggy and this option is not as useful as it could be. For this reason, it is currently disabled by default.</p>
<h2><a class="anchor" id="options_modelchecking_recordreplay"></a>
Record/replay (experimental)</h2>
<p>As the model-checker keeps jumping to different places in the execution graph, it is difficult to understand what happens when trying to debug an application under the model-checker. Even the output of the program is difficult to interpret. Moreover, the model-checker does not behave nicely with advanced debugging tools such as valgrind. For these reasons, it is useful to identify a trajectory in the execution graph with the model-checker and then replay this trajectory without the model-checker black-magic, using more standard tools (such as a debugger, valgrind, etc.). For this purpose, SimGrid implements an experimental record/replay functionality that records a trajectory with the model-checker and replays it without the model-checker.</p>
<p>When the model-checker finds an interesting path in the application execution graph (where a safety or liveness property is violated), it can generate an identifier for this path. In order to enable this behaviour, the <b>model-check/record</b> item must be set to <b>yes</b>. By default, this behaviour is not enabled.</p>
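<p>For example:</p>
<pre class="fragment">simgrid-mc ./my_program --cfg=model-check/record:yes
</pre>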
<p>This is an example of output:</p>
<pre>
[  0.000000] (0:@) Check a safety property
[  0.000000] (0:@) **************************
[  0.000000] (0:@) *** PROPERTY NOT VALID ***
[  0.000000] (0:@) **************************
[  0.000000] (0:@) Counter-example execution trace:
[  0.000000] (0:@) Path = 1/3;1/4
[  0.000000] (0:@) [(1)Tremblay (app)] MC_RANDOM(3)
[  0.000000] (0:@) [(1)Tremblay (app)] MC_RANDOM(4)
[  0.000000] (0:@) Expanded states = 27
[  0.000000] (0:@) Visited states = 68
[  0.000000] (0:@) Executed transitions = 46
</pre><p>This path can then be replayed outside of the model-checker (and even in a non-MC build of SimGrid) by setting the <b>model-check/replay</b> item to the given path. The other options should be the same (but the model-checker should be disabled).</p>
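<p>For instance, reusing the path reported in the output above (quoted so that the shell does not interpret the semicolon):</p>
<pre class="fragment">./my_program --cfg=model-check/replay:'1/3;1/4'
</pre>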
<p>The format and meaning of the path may change between different releases, so the same release of SimGrid should be used for the record phase and the replay phase.</p>
<h1><a class="anchor" id="options_virt"></a>
Configuring the User Process Virtualization</h1>
<h2><a class="anchor" id="options_virt_factory"></a>
Selecting the virtualization factory</h2>
<p>In SimGrid, the user code is virtualized in a specific mechanism that allows the simulation kernel to control its execution: when a user process requires a blocking action (such as sending a message), it is interrupted, and only gets released when the simulated clock reaches the point where the blocking operation is done.</p>
<p>In SimGrid, the containers in which user processes are virtualized are called contexts. Several context factories are provided, and you can select the one you want to use with the <b>contexts/factory</b> configuration item. Some of the following may not exist on your machine because of portability issues. In any case, the default one should be the most efficient one (please report bugs if the auto-detection fails for you). They are sorted here from the slowest to the most efficient:</p><ul>
<li><b>thread:</b> very slow factory using full featured threads (either pthreads or windows native threads)</li>
<li><b>ucontext:</b> fast factory using System V contexts (or a portability layer of our own on top of Windows fibers)</li>
<li><b>raw:</b> amazingly fast factory using a context switching mechanism of our own, directly implemented in assembly (only available for x86 and amd64 platforms for now)</li>
<li><b>boost:</b> This uses the <a href="http://www.boost.org/doc/libs/1_59_0/libs/context/doc/html/index.html">context implementation</a> of the boost library; you must have this library installed before you compile SimGrid. (On Debian GNU/Linux based systems, this is provided by the libboost-context-dev package.)</li>
</ul>
<p>The only reason to change this setting is when the debugging tools get fooled by the optimized context factories. Threads are the most debugging-friendly contexts, as they allow you to set breakpoints anywhere with gdb and visualize backtraces for all processes, in order to debug concurrency issues. Valgrind is also more comfortable with threads, but it should be usable with all factories.</p>
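<p>For example, to fall back to the debugging-friendly thread factory:</p>
<pre class="fragment">--cfg=contexts/factory:thread
</pre>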
<h2><a class="anchor" id="options_virt_stacksize"></a>
Adapting the used stack size</h2>
<p>Each virtualized user process is executed using a specific system stack. The size of this stack has a huge impact on the simulation scalability, but its default value is rather large. This is because the errors that you get when the stack size is too small are rather disturbing: a too small stack silently overflows (overwriting other stacks), leading to segfaults with corrupted stack traces.</p>
<p>If you want to push the scalability limits of your code, you might want to reduce the <b>contexts/stack-size</b> item. Its default value is 8192 (in KiB), while our Chord simulation works with stacks as small as 16 KiB, for example. For the thread factory, the default value is the system one; if it is too large or too small, it has to be set with this parameter.</p>
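<p>For example, to use 16 KiB stacks as in the Chord experiment mentioned above:</p>
<pre class="fragment">--cfg=contexts/stack-size:16
</pre>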
<p>The operating system should only allocate memory for the pages of the stack which are actually used and you might not need to use this in most cases. However, this setting is very important when using the model checker (see <a class="el" href="options.html#options_mc_perf">Performance considerations for the model checker</a>).</p>
<p>In some cases, no stack guard page is used and the stack will silently overflow on other parts of the memory if the stack size is too small for your application. This happens:</p>
<ul>
<li>on Windows systems;</li>
<li>when the model checker is enabled;</li>
<li>when stack guard pages are explicitly disabled (see <a class="el" href="options.html#options_perf_guard_size">Disabling stack guard pages</a>).</li>
</ul>
<h2><a class="anchor" id="options_virt_parallel"></a>
Running user code in parallel</h2>
<p>Parallel execution of the user code is only considered stable in SimGrid v3.7 and higher. It is described in <a href="http://hal.inria.fr/inria-00602216/">INRIA RR-7653</a>.</p>
<p>If you are using the <code>ucontext</code> or <code>raw</code> context factories, you can request to execute the user code in parallel. Several threads are launched, each of them handling a share of the user contexts at each run. To activate this, set the <b>contexts/nthreads</b> item to the amount of cores that you have in your computer (or lower than 1 to have the amount of cores auto-detected).</p>
<p>Even if you asked several worker threads using the previous option, you can request to start the parallel execution (and pay the associated synchronization costs) only if the potential parallelism is large enough. For that, set the <b>contexts/parallel-threshold</b> item to the minimal amount of user contexts needed to start the parallel execution. In any given simulation round, if that amount is not reached, the contexts will be run sequentially directly by the main thread (thus saving the synchronization costs). Note that this option is mainly useful when the grain of the user code is very fine, because our synchronization is now very efficient.</p>
<p>When parallel execution is activated, you can choose the synchronization schema used with the <b>contexts/synchro</b> item, whose value is one of the following (a combined example is given after this list):</p><ul>
<li><b>futex:</b> ultra optimized synchronisation schema, based on futexes (fast user-mode mutexes), and thus only available on Linux systems. This is the default mode when available.</li>
<li><b>posix:</b> slow but portable synchronisation using only POSIX primitives.</li>
<li><b>busy_wait:</b> not really a synchronisation: the worker threads constantly request new contexts to execute. It should be the most efficient synchronisation schema, but it loads all the cores of your machine for no good reason. You probably prefer the other less eager schemas.</li>
</ul>
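<p>A combined sketch (illustrative values): four worker threads, parallel execution only when at least 500 user contexts are ready to run, and the portable POSIX synchronization schema:</p>
<pre class="fragment">--cfg=contexts/nthreads:4 --cfg=contexts/parallel-threshold:500 --cfg=contexts/synchro:posix
</pre>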
<h1><a class="anchor" id="options_tracing"></a>
Configuring the tracing subsystem</h1>
<p>The <a class="el" href="outcomes_vizu.html">tracing subsystem</a> can be configured in several different ways depending on the nature of the simulator (MSG, SimDag, SMPI) and the kind of traces that need to be obtained. See the <a class="el" href="outcomes_vizu.html#tracing_tracing_options">Tracing Configuration Options subsection</a> to get a detailed description of each configuration option.</p>
<p>We detail here a simple way to get the traces working for you, even if you never used the tracing API.</p>
<ul>
<li>Any SimGrid-based simulator (MSG, SimDag, SMPI, ...) and raw traces: <pre class="fragment">--cfg=tracing:yes --cfg=tracing/uncategorized:yes --cfg=triva/uncategorized:uncat.plist
</pre> The first parameter activates the tracing subsystem, the second tells it to trace host and link utilization (without any categorization) and the third creates a graph configuration file to configure Triva when analysing the resulting trace file.</li>
<li>MSG or SimDag-based simulator and categorized traces (you need to declare categories and classify your tasks according to them) <pre class="fragment">--cfg=tracing:yes --cfg=tracing/categorized:yes --cfg=triva/categorized:cat.plist
</pre> The first parameter activates the tracing subsystem, the second tells it to trace host and link categorized utilization and the third creates a graph configuration file to configure Triva when analysing the resulting trace file.</li>
<li>SMPI simulator and traces for a space/time view: <pre class="fragment">smpirun -trace ...
</pre> The <em>-trace</em> parameter for the smpirun script runs the simulation with <code>--cfg=tracing:yes</code> and <code>--cfg=tracing/smpi:yes</code>. Check the smpirun's <em>-help</em> parameter for additional tracing options.</li>
</ul>
<p>Sometimes you might want to put additional information on the trace to correctly identify them later, or to provide data that can be used to reproduce an experiment. You have two ways to do that:</p>
<ul>
<li>Add a string on top of the trace file as comment: <pre class="fragment">--cfg=tracing/comment:my_simulation_identifier
</pre></li>
<li>Add the contents of a textual file on top of the trace file as comment: <pre class="fragment">--cfg=tracing/comment-file:my_file_with_additional_information.txt
</pre></li>
</ul>
<p>Please use these two parameters (for comments) to make reproducible simulations. For additional details about this and all tracing options, see the <a class="el" href="outcomes_vizu.html#tracing_tracing_options">Tracing configuration Options</a>.</p>
<h1><a class="anchor" id="options_msg"></a>
Configuring MSG</h1>
<h2><a class="anchor" id="options_msg_debug_multiple_use"></a>
Debugging MSG</h2>
<p>Sometimes your application may try to send a task that is still being executed somewhere else, making it impossible to send this task. However, for debugging purposes, one may want to know what the other host is/was doing. This option shows a backtrace of the other process.</p>
<p>Enable this option by adding</p>
<pre class="fragment">--cfg=msg/debug-multiple-use:on
</pre><h1><a class="anchor" id="options_smpi"></a>
Configuring SMPI</h1>
<p>The SMPI interface provides several specific configuration items. These are easy to overlook, since the code is usually launched through the <code>smpirun</code> script directly.</p>
<h2><a class="anchor" id="options_smpi_bench"></a>
smpi/bench: Automatic benchmarking of SMPI code</h2>
<p>In SMPI, the sequential code is automatically benchmarked, and these computations are automatically reported to the simulator. That is to say that if you have a large computation between a <code><a class="el" href="smpi__mpi_8cpp.html#a673a404d56efaffff28672a05027ef56">MPI_Recv()</a></code> and a <code><a class="el" href="smpi__mpi_8cpp.html#aeee7e111d9f54a12fc129b0f0e6df4da">MPI_Send()</a></code>, SMPI will automatically benchmark the duration of this code, and create an execution task within the simulator to take this into account. For that, the actual duration is measured on the host machine and then scaled to the power of the corresponding simulated machine. The variable <b>smpi/host-speed</b> allows you to specify the computational speed of the host machine (in flop/s) to use when scaling the execution times. It defaults to 20000, but you really want to update it to get accurate simulation results.</p>
<p>When the code consists of numerous consecutive MPI calls, the previous mechanism feeds the simulation kernel with numerous tiny computations. The <b>smpi/cpu-threshold</b> item becomes handy when this badly impacts the simulation performance. It specifies a threshold (in seconds) below which the execution chunks are not reported to the simulation kernel (default value: 1e-6).</p>
<dl class="section note"><dt>Note</dt><dd>The option smpi/cpu-threshold ignores any computation time spent below this threshold. SMPI does not consider the <em>amount</em> of these computations; there is no offset for this. Hence, by using a value that is too low, you may end up with unreliable simulation results.</dd></dl>
<p>In some cases, however, one may wish to disable simulation of application computation. This is the case when SMPI is used not to simulate an MPI application, but instead an MPI code that performs "live replay" of another MPI app (e.g., ScalaTrace's replay tool, various on-line simulators that run an app at scale). In this case the computation of the replay/simulation logic should not be simulated by SMPI. Instead, the replay tool or on-line simulator will issue "computation events", which correspond to the actual MPI application being replayed/simulated. At the moment, these computation events can be simulated using SMPI by calling internal smpi_execute*() functions.</p>
<p>To disable the benchmarking/simulation of computation in the simulated application, the variable <b>smpi/simulate-computation</b> should be set to no.</p>
<dl class="section note"><dt>Note</dt><dd>This option just ignores the timings in your simulation; it still executes the computations itself. If you want to stop SMPI from doing that, you should check the SMPI_SAMPLE macros, documented in the section <a class="el" href="group__SMPI__API.html#SMPI_adapting_speed">Toward faster simulations</a>.</dd></dl>
<table class="doxtable">
<tr>
<th>Solution </th><th>Computations actually executed? </th><th>Computations simulated? </th></tr>
<tr>
<td>--cfg=smpi/simulate-computation:no </td><td>Yes </td><td>No, never </td></tr>
<tr>
<td>--cfg=smpi/cpu-threshold:42 </td><td>Yes, in all cases </td><td>Only if it lasts more than 42 seconds </td></tr>
<tr>
<td>SMPI_SAMPLE() macro </td><td>Only once per loop nest (see <a class="el" href="group__SMPI__API.html#SMPI_adapting_speed">documentation</a>) </td><td>Always </td></tr>
</table>
<h2><a class="anchor" id="options_model_smpi_adj_file"></a>
smpi/comp-adjustment-file: Slow-down or speed-up parts of your code.</h2>
<p>This option allows you to pass a file that contains two columns: The first column defines the section that will be subject to a speedup; the second column is the speedup.</p>
<p>For instance:</p>
<pre class="fragment">"start:stop","ratio"
"exchange_1.f:30:exchange_1.f:130",1.18244559422142
</pre><p>The first line is the header - you must include it. The following line means that the code between two consecutive MPI calls on line 30 in exchange_1.f and line 130 in exchange_1.f should receive a speedup of 1.18244559422142. The value in the second column is therefore a speedup if it is larger than 1, and a slow-down if it is smaller than 1. Nothing will be changed if it is equal to 1.</p>
<p>Of course, you can set any arbitrary filenames you want (so the start and end don't have to be in the same file), but be aware that this mechanism only supports <em>consecutive</em> calls!</p>
<dl class="section note"><dt>Note</dt><dd>Please note that you must pass the <b>-trace-call-location</b> flag to smpicc or smpiff, respectively! This flag activates some macro definitions in our <a class="el" href="mpi_8h.html">mpi.h</a> / mpi.f files that help with obtaining the call location.</dd></dl>
<h2><a class="anchor" id="options_model_smpi_bw_factor"></a>
smpi/bw-factor: Bandwidth factors</h2>
<p>The possible throughput of network links is often dependent on the message sizes, as protocols may adapt to different message sizes. With this option, a series of message sizes and factors are given, helping the simulation to be more realistic. For instance, the current default value is</p>
<pre class="fragment">65472:0.940694;15424:0.697866;9376:0.58729;5776:1.08739;3484:0.77493;1426:0.608902;732:0.341987;257:0.338112;0:0.812084
</pre><p>So, messages with size 65472 and more will get a total of MAX_BANDWIDTH*0.940694, messages of size 15424 to 65471 will get MAX_BANDWIDTH*0.697866 and so on. Here, MAX_BANDWIDTH denotes the bandwidth of the link.</p>
<dl class="section note"><dt>Note</dt><dd>The SimGrid-Team has developed a script to help you determine these values. You can find more information and the download here:<ol type="1">
<li><a href="http://simgrid.gforge.inria.fr/contrib/smpi-calibration-doc.html">http://simgrid.gforge.inria.fr/contrib/smpi-calibration-doc.html</a></li>
<li><a href="http://simgrid.gforge.inria.fr/contrib/smpi-saturation-doc.html">http://simgrid.gforge.inria.fr/contrib/smpi-saturation-doc.html</a></li>
</ol>
</dd></dl>
<h2><a class="anchor" id="options_smpi_timing"></a>
smpi/display-timing: Reporting simulation time</h2>
<p><b>Default:</b> 0 (false)</p>
<p>Most of the time, you run MPI code with SMPI to compute the time it would take to run it on a given platform. But since the code is run through the <code>smpirun</code> script, you don't have any control over the launcher code, making it difficult to report the simulated time when the simulation ends. If you set the <b>smpi/display-timing</b> item to 1, <code>smpirun</code> will display this information when the simulation ends.</p><pre class="fragment">Simulation time: 1e3 seconds.
</pre><h2><a class="anchor" id="options_model_smpi_lat_factor"></a>
smpi/lat-factor: Latency factors</h2>
<p>The motivation and syntax for this option is identical to the motivation/syntax of smpi/bw-factor, see <a class="el" href="options.html#options_model_smpi_bw_factor">smpi/bw-factor: Bandwidth factors</a> for details.</p>
<p>There is an important difference, though: While smpi/bw-factor <em>reduces</em> the actual bandwidth (i.e., values between 0 and 1 are valid), latency factors increase the latency, i.e., values larger than or equal to 1 are valid here.</p>
<p>This is the default value:</p>
<pre class="fragment">65472:11.6436;15424:3.48845;9376:2.59299;5776:2.18796;3484:1.88101;1426:1.61075;732:1.9503;257:1.95341;0:2.01467
</pre><dl class="section note"><dt>Note</dt><dd>The SimGrid-Team has developed a script to help you determine these values. You can find more information and the download here:<ol type="1">
<li><a href="http://simgrid.gforge.inria.fr/contrib/smpi-calibration-doc.html">http://simgrid.gforge.inria.fr/contrib/smpi-calibration-doc.html</a></li>
<li><a href="http://simgrid.gforge.inria.fr/contrib/smpi-saturation-doc.html">http://simgrid.gforge.inria.fr/contrib/smpi-saturation-doc.html</a></li>
</ol>
</dd></dl>
<h2><a class="anchor" id="options_smpi_papi_events"></a>
smpi/papi-events: Trace hardware counters with PAPI</h2>
<dl class="section warning"><dt>Warning</dt><dd>This option is experimental and will be subject to change. This feature currently requires superuser privileges, as registers are queried. Only use this feature with code you trust! Call smpirun for instance via smpirun -wrapper "sudo " &lt;your-parameters&gt; or run sudo sh -c "echo 0 &gt; /proc/sys/kernel/perf_event_paranoid" In the later case, sudo will not be required.</dd></dl>
<dl class="section note"><dt>Note</dt><dd>This option is only available when SimGrid was compiled with PAPI support.</dd></dl>
<p>This option takes the names of PAPI counters and adds their respective values to the trace files. (See Section <a class="el" href="outcomes_vizu.html#tracing_tracing_options">Tracing configuration Options</a>.)</p>
<p>It is planned to make this feature available on a per-process (or per-thread?) basis. The first draft, however, just implements a "global" (i.e., for all processes) set of counters, the "default" set.</p>
<pre class="fragment">--cfg=smpi/papi-events:"default:PAPI_L3_LDM:PAPI_L2_LDM"
</pre><h2><a class="anchor" id="options_smpi_global"></a>
smpi/privatize-global-variables: Automatic privatization of global variables</h2>
<p>MPI executables are meant to be executed in separate processes, but SMPI is executed in only one process. Global variables from the executables will be placed in the same memory zone and shared between processes, causing hard-to-find bugs. To avoid this, several options are possible:</p><ul>
<li>Manual editing of the code, for example to add the __thread keyword before data declarations, which allows the resulting code to work with SMPI, but only if the thread factory (see <a class="el" href="options.html#options_virt_factory">Selecting the virtualization factory</a>) is used, as global variables are then placed in the TLS (thread local storage) segment.</li>
<li>Source-to-source transformation, to add a level of indirection to the global variables. SMPI does this for F77 codes compiled with smpiff; it used to provide coccinelle scripts for C codes as well, but these are not functional anymore.</li>
<li>Compilation pass, to have the compiler automatically put the data in an adapted zone.</li>
<li>Runtime automatic switching of the data segments. SMPI stores a copy of each global data segment for each process, and at each context switch replaces the actual data with the copy from the right process. This mechanism uses mmap and is for now limited to systems supporting this functionality (all Linux systems and some BSDs should be compatible). Another limitation is that SMPI only accounts for global variables defined in the executable. If the processes use external global variables from dynamic libraries, they won't be switched correctly. To avoid this, static linking is advised (but not against the simgrid library itself, to avoid replicating its own global variables).</li>
</ul>
<p>To use this runtime automatic switching, the variable <b>smpi/privatize-global-variables</b> should be set to yes.</p>
<dl class="section warning"><dt>Warning</dt><dd>This configuration option cannot be set in your platform file. You can only pass it as an argument to smpirun.</dd></dl>
<h2><a class="anchor" id="options_model_smpi_detached"></a>
Simulating MPI detached send</h2>
<p>This threshold specifies the size in bytes under which the send will return immediately. It differs from the threshold detailed in <a class="el" href="options.html#options_model_network_asyncsend">Simulating asynchronous send</a> because the message is not effectively sent when the send is posted: SMPI still waits for the corresponding receive to be posted before performing the communication operation. This threshold can be set by changing the <b>smpi/send-is-detached-thresh</b> item. The default value is 65536.</p>
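<p>For instance, the default threshold stated above can be set explicitly (or changed) as follows:</p><pre class="fragment">--cfg=smpi/send-is-detached-thresh:65536
</pre>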
<h2><a class="anchor" id="options_model_smpi_collectives"></a>
Simulating MPI collective algorithms</h2>
<p>SMPI implements more than 100 different algorithms for MPI collective communication, to accurately simulate the behavior of most of the existing MPI libraries. The <b>smpi/coll-selector</b> item can be used to select the decision logic of either the OpenMPI or MPICH libraries (values: ompi or mpich; by default, SMPI uses naive versions of the collective operations). Each collective operation can also be manually selected with an item of the form <b>smpi/collective_name</b>:algo_name. Available algorithms are listed in <a class="el" href="group__SMPI__API.html#SMPI_use_colls">Simulating collective operations</a>.</p>
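<p>For example, the following selects the OpenMPI decision logic globally and then overrides the algorithm used for MPI_Alltoall; the algorithm name "pair" is only an illustrative value, the valid names being listed in the section linked above:</p><pre class="fragment">--cfg=smpi/coll-selector:ompi --cfg=smpi/alltoall:pair
</pre>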
<h2><a class="anchor" id="options_model_smpi_iprobe"></a>
smpi/iprobe: Inject constant times for calls to MPI_Iprobe</h2>
<p><b>Default</b> value: 0.0001</p>
<p>The behavior and motivation for this configuration option are identical to those of <em>smpi/test</em>; see Section <a class="el" href="options.html#options_model_smpi_test">smpi/test: Inject constant times for calls to MPI_Test</a> for details.</p>
<h2><a class="anchor" id="options_model_smpi_init"></a>
smpi/init: Inject constant times for calls to MPI_Init</h2>
<p><b>Default</b> value: 0</p>
<p>The behavior for this configuration option is identical to that of <em>smpi/test</em>; see Section <a class="el" href="options.html#options_model_smpi_test">smpi/test: Inject constant times for calls to MPI_Test</a> for details.</p>
<h2><a class="anchor" id="options_model_smpi_ois"></a>
smpi/ois: Inject constant times for asynchronous send operations</h2>
<p>This configuration option works exactly as <em>smpi/os</em>, see Section <a class="el" href="options.html#options_model_smpi_os">smpi/os: Inject constant times for send operations</a>. Of course, <em>smpi/ois</em> is used to account for MPI_Isend instead of MPI_Send.</p>
<h2><a class="anchor" id="options_model_smpi_os"></a>
smpi/os: Inject constant times for send operations</h2>
<p>In several network models such as LogP, send (MPI_Send, MPI_Isend) and receive (MPI_Recv) operations incur costs (i.e., they consume CPU time). SMPI can factor these costs in as well, but the user has to configure SMPI accordingly, as these values may vary by machine. This can be done by using <em>smpi/os</em> for MPI_Send operations; for MPI_Isend and MPI_Recv, use <em>smpi/ois</em> and <em>smpi/or</em>, respectively. These work exactly as <em>smpi/os</em>.</p>
<p><em>smpi/os</em> can consist of multiple sections; each section takes three values, for example:</p>
<pre class="fragment">    1:3:2;10:5:1
</pre><p>Here, the sections are divided by ";" (that is, this example contains two sections). Furthermore, each section consists of three values.</p>
<ol type="1">
<li>The first value denotes the minimum size for this section to take effect; read it as "if the message size is greater than this value (and no other section has a larger first value that is also smaller than the message size), use this section". In the first section above, this value is "1".</li>
<li>The second value is the startup time; this is a constant value that will always be charged, no matter what the size of the message. In the first section above, this value is "3".</li>
<li>The third value is the <em>per-byte</em> cost. That is, it is charged for every byte of the message (incurring cost messageSize*cost_per_byte) and hence accounts also for larger messages. In the first section of the example above, this value is "2".</li>
</ol>
<p>Now, SMPI always checks which section it should use for a given message; that is, if a message of size 11 is sent with the configuration of the example above, only the second section will be used, not the first, as the first value of the second section (10) is the largest one that is still smaller than the message size. Hence, a message of size 11 incurs the following cost inside MPI_Send:</p>
<pre class="fragment">    5+11*1
</pre><p>Here, 5 is the startup cost and 1 is the cost per byte.</p>
<dl class="section note"><dt>Note</dt><dd>The order of sections can be arbitrary; they will be ordered internally.</dd></dl>
<h2><a class="anchor" id="options_model_smpi_or"></a>
smpi/or: Inject constant times for receive operations</h2>
<p>This configuration option works exactly as <em>smpi/os</em>, see Section <a class="el" href="options.html#options_model_smpi_os">smpi/os: Inject constant times for send operations</a>. Of course, <em>smpi/or</em> is used to account for MPI_Recv instead of MPI_Send.</p>
<h2><a class="anchor" id="options_model_smpi_test"></a>
smpi/test: Inject constant times for calls to MPI_Test</h2>
<p><b>Default</b> value: 0.0001</p>
<p>By setting this option, you can control the amount of time a process sleeps when <a class="el" href="smpi__extended__traces_8h.html#aaf2ffd95e7ebe269b9465a066cc49792">MPI_Test()</a> is called. This is important because SimGrid normally only advances the time while communication is happening; thus, MPI_Test would not add to the time, resulting in a deadlock if it is used as a break condition.</p>
<p>Here is an example:</p>
<div class="fragment"><div class="line">while(!flag) {</div><div class="line">    MPI_Test(request, flag, status);</div><div class="line">    ...</div><div class="line">}</div></div><!-- fragment --><dl class="section note"><dt>Note</dt><dd>Internally, in order to speed up execution, we use a counter to keep track on how often we already checked if the handle is now valid or not. Hence, we actually use counter*SLEEP_TIME, that is, the time <a class="el" href="smpi__extended__traces_8h.html#aaf2ffd95e7ebe269b9465a066cc49792">MPI_Test()</a> causes the process to sleep increases linearly with the number of previously failed tests. This behavior can be disabled by setting smpi/grow-injected-times to no. This will also disable this behavior for MPI_Iprobe.</dd></dl>
<h2><a class="anchor" id="options_model_smpi_use_shared_malloc"></a>
smpi/use-shared-malloc: Factorize malloc()s</h2>
<p><b>Default:</b> 1</p>
<p>SMPI can use shared memory by calling shm_* functions; this might speed up the simulation. This opens or creates a new POSIX shared memory object, kept in RAM, in /dev/shm.</p>
<p>If you want to disable this behavior, set the value to 0.</p>
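<p>For instance, to disable this behavior explicitly:</p><pre class="fragment">--cfg=smpi/use-shared-malloc:0
</pre>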
<h2><a class="anchor" id="options_model_smpi_wtime"></a>
smpi/wtime: Inject constant times for calls to MPI_Wtime</h2>
<p><b>Default</b> value: 0</p>
<p>By setting this option, you can control the amount of time a process sleeps when <a class="el" href="smpi__mpi_8cpp.html#a5d03c98ec6e4f7ca5cad2277f64e6b72">MPI_Wtime()</a> is called. This is important because SimGrid normally only advances the time while communication is happening; thus, MPI_Wtime would not add to the time, resulting in a deadlock if it is used as a break condition.</p>
<p>Here is an example:</p>
<div class="fragment"><div class="line">while(MPI_Wtime() &lt; some_time_bound) {</div><div class="line">    ...</div><div class="line">}</div></div><!-- fragment --><p>If the time is never advanced, this loop will clearly never end as <a class="el" href="smpi__mpi_8cpp.html#a5d03c98ec6e4f7ca5cad2277f64e6b72">MPI_Wtime()</a> always returns the same value. Hence, pass a (small) value to the smpi/wtime option to force a call to MPI_Wtime to advance the time as well.</p>
<h1><a class="anchor" id="options_generic"></a>
Configuring other aspects of SimGrid</h1>
<h2><a class="anchor" id="options_generic_clean_atexit"></a>
Cleanup before termination</h2>
<p>The C / C++ standard contains a function called <a href="http://www.cplusplus.com/reference/cstdlib/atexit/">atexit</a>. atexit registers callbacks, which are called just before the program terminates.</p>
<p>By setting the configuration option clean-atexit to 1 (true), a callback is registered and will clean up some variables and terminate/cleanup the tracing.</p>
<p>TODO: Add when this should be used.</p>
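<p>For instance, the cleanup callback can be enabled on the command line as follows:</p><pre class="fragment">--cfg=clean-atexit:1
</pre>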
<h2><a class="anchor" id="options_generic_path"></a>
XML file inclusion path</h2>
<p>It is possible to specify a list of directories to search for the &lt;include&gt; tag in XML files by using the <b>path</b> configuration item. To add several directories to the path, set the configuration item several times, as in</p><pre class="fragment">--cfg=path:toto --cfg=path:tutu
</pre><h2><a class="anchor" id="options_generic_exit"></a>
Behavior on Ctrl-C</h2>
<p>By default, when Ctrl-C is pressed, the status of all existing simulated processes is displayed before exiting the simulation. This is very useful for debugging your code, but it can prove troublesome in some cases (such as when the number of processes becomes really large). This behavior is disabled when <b>verbose-exit</b> is set to 0 (it is set to 1 by default).</p>
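<p>For instance, to silence this status dump on Ctrl-C:</p><pre class="fragment">--cfg=verbose-exit:0
</pre>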
<h2><a class="anchor" id="options_exception_cutpath"></a>
Truncate local path from exception backtrace</h2>
<pre class="fragment">--cfg=exceptions/cutpath:1
</pre><p>This configuration option is used to remove the path from the backtrace shown when an exception is thrown. This is mainly useful for the tests: the full file paths make the tests non-reproducible, and thus failing, as we are currently comparing outputs. Clearly, the paths used on different machines are almost guaranteed to be different and hence the output would mismatch, causing the test to fail.</p>
<h1><a class="anchor" id="options_log"></a>
Logging Configuration</h1>
<p>Logging can be configured through XBT. Go to <a class="el" href="group__XBT__log.html">Logging support</a> for more details.</p>
<h1><a class="anchor" id="options_perf"></a>
Performance optimizations</h1>
<h2><a class="anchor" id="options_perf_context"></a>
Context factory</h2>
<p>In order to achieve higher performance, you might want to use the raw context factory, which avoids any system call when switching between tasks. If that is not possible, you might use ucontext instead.</p>
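<p>For instance, the factory can be selected on the command line; "raw" and "ucontext" are the two factories discussed above:</p><pre class="fragment">--cfg=contexts/factory:raw
</pre>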
<h2><a class="anchor" id="options_perf_guard_size"></a>
Disabling stack guard pages</h2>
<p>A stack guard page is usually used to prevent the stack from overflowing onto other parts of the memory. However, this might have a performance impact if a huge number of processes is created. The option <b>contexts/guard-size</b> is the number of stack guard pages used. By setting it to 0, no guard pages will be used: in this case, you should avoid using small stacks (<b>stack-size</b>), as the stack will silently overflow onto other parts of the memory.</p>
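<p>For instance, to disable guard pages entirely (only do this with sufficiently large stacks, as explained above):</p><pre class="fragment">--cfg=contexts/guard-size:0
</pre>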
<h1><a class="anchor" id="options_index"></a>
Index of all existing configuration options</h1>
<dl class="section note"><dt>Note</dt><dd>Almost all options are defined in <em>src/simgrid/sg_config.c</em>. You may want to check this file, too, but this index should be somewhat complete for the moment (May 2015).</dd>
<dd>
<b>Please</b> <b>note:</b> You can also pass the command-line options "--help" and "--help-cfg" to an executable that uses simgrid.</dd></dl>
<ul>
<li><code>clean-atexit</code>: <a class="el" href="options.html#options_generic_clean_atexit">Cleanup before termination</a></li>
<li><code>contexts/factory</code>: <a class="el" href="options.html#options_virt_factory">Selecting the virtualization factory</a></li>
<li><code>contexts/guard-size</code>: <a class="el" href="options.html#options_virt_parallel">Running user code in parallel</a></li>
<li><code>contexts/nthreads</code>: <a class="el" href="options.html#options_virt_parallel">Running user code in parallel</a></li>
<li><code>contexts/parallel_threshold</code>: <a class="el" href="options.html#options_virt_parallel">Running user code in parallel</a></li>
<li><code>contexts/stack-size</code>: <a class="el" href="options.html#options_virt_stacksize">Adapting the used stack size</a></li>
<li><code>contexts/synchro</code>: <a class="el" href="options.html#options_virt_parallel">Running user code in parallel</a></li>
<li><code>cpu/maxmin-selective-update</code>: <a class="el" href="options.html#options_model_optim">Optimization level of the platform models</a></li>
<li><code>cpu/model</code>: <a class="el" href="options.html#options_model_select">Selecting the platform models</a></li>
<li><code>cpu/optim</code>: <a class="el" href="options.html#options_model_optim">Optimization level of the platform models</a></li>
<li><code>exception/cutpath</code>: <a class="el" href="options.html#options_exception_cutpath">Truncate local path from exception backtrace</a></li>
<li><code>host/model</code>: <a class="el" href="options.html#options_model_select">Selecting the platform models</a></li>
<li><code>maxmin/precision</code>: <a class="el" href="options.html#options_model_precision">Numerical precision of the platform models</a></li>
<li><code>msg/debug-multiple-use</code>: <a class="el" href="options.html#options_msg_debug_multiple_use">Debugging MSG</a></li>
<li><code>model-check</code>: <a class="el" href="options.html#options_modelchecking">Configuring the Model-Checking</a></li>
<li><code>model-check/checkpoint</code>: <a class="el" href="options.html#options_modelchecking_steps">Going for stateful verification</a></li>
<li><code>model-check/communications-determinism</code>: <a class="el" href="options.html#options_modelchecking_comm_determinism">Communication determinism</a></li>
<li><code>model-check/dot-output</code>: <a class="el" href="options.html#options_modelchecking_dot_output">model-check/dot-output, Dot output</a></li>
<li><code>model-check/hash</code>: <a class="el" href="options.html#options_modelchecking_hash">Hashing of the state (experimental)</a></li>
<li><code>model-check/property</code>: <a class="el" href="options.html#options_modelchecking_liveness">Specifying a liveness property</a></li>
<li><code>model-check/max-depth</code>: <a class="el" href="options.html#options_modelchecking_max_depth">model-check/max_depth, Depth limit</a></li>
<li><code>model-check/record</code>: <a class="el" href="options.html#options_modelchecking_recordreplay">Record/replay (experimental)</a></li>
<li><code>model-check/reduction</code>: <a class="el" href="options.html#options_modelchecking_reduction">Specifying the kind of reduction</a></li>
<li><code>model-check/replay</code>: <a class="el" href="options.html#options_modelchecking_recordreplay">Record/replay (experimental)</a></li>
<li><code>model-check/send-determinism</code>: <a class="el" href="options.html#options_modelchecking_comm_determinism">Communication determinism</a></li>
<li><code>model-check/sparse-checkpoint</code>: <a class="el" href="options.html#options_modelchecking_sparse_checkpoint">Per page checkpoints</a></li>
<li><code>model-check/termination</code>: <a class="el" href="options.html#options_modelchecking_termination">model-check/termination, Non termination detection</a></li>
<li><code>model-check/timeout</code>: <a class="el" href="options.html#options_modelchecking_timeout">Handling of timeout</a></li>
<li><code>model-check/visited</code>: <a class="el" href="options.html#options_modelchecking_visited">model-check/visited, Cycle detection</a></li>
<li><code>network/bandwidth-factor</code>: <a class="el" href="options.html#options_model_network_coefs">Correcting important network parameters</a></li>
<li><code>network/crosstraffic</code>: <a class="el" href="options.html#options_model_network_crosstraffic">Simulating cross-traffic</a></li>
<li><code>network/latency-factor</code>: <a class="el" href="options.html#options_model_network_coefs">Correcting important network parameters</a></li>
<li><code>network/maxmin-selective-update</code>: <a class="el" href="options.html#options_model_optim">Optimization level of the platform models</a></li>
<li><code>network/model</code>: <a class="el" href="options.html#options_model_select">Selecting the platform models</a></li>
<li><code>network/optim</code>: <a class="el" href="options.html#options_model_optim">Optimization level of the platform models</a></li>
<li><code>network/sender_gap</code>: <a class="el" href="options.html#options_model_network_sendergap">Simulating sender gap</a></li>
<li><code>network/TCP-gamma</code>: <a class="el" href="options.html#options_model_network_gamma">Maximal TCP window size</a></li>
<li><code>network/weight-S</code>: <a class="el" href="options.html#options_model_network_coefs">Correcting important network parameters</a></li>
<li><code>ns3/TcpModel</code>: <a class="el" href="options.html#options_pls">Configuring packet-level pseudo-models</a></li>
<li><code>path:</code> <a class="el" href="options.html#options_generic_path">XML file inclusion path</a></li>
<li><code>plugin:</code> <a class="el" href="options.html#options_generic_plugin">Plugins</a></li>
<li><code>storage/max_file_descriptors</code>: <a class="el" href="options.html#option_model_storage_maxfd">Maximum amount of file descriptors per host</a></li>
<li><code>surf/precision</code>: <a class="el" href="options.html#options_model_precision">Numerical precision of the platform models</a></li>
<li><b>For collective operations of SMPI, please refer to Section <a class="el" href="options.html#options_index_smpi_coll">Index of SMPI collective algorithms options</a></b></li>
<li><code>smpi/async-small-thresh</code>: <a class="el" href="options.html#options_model_network_asyncsend">Simulating asynchronous send</a></li>
<li><code>smpi/bw-factor</code>: <a class="el" href="options.html#options_model_smpi_bw_factor">smpi/bw-factor: Bandwidth factors</a></li>
<li><code>smpi/coll-selector</code>: <a class="el" href="options.html#options_model_smpi_collectives">Simulating MPI collective algorithms</a></li>
<li><code>smpi/comp-adjustment-file</code>: <a class="el" href="options.html#options_model_smpi_adj_file">smpi/comp-adjustment-file: Slow-down or speed-up parts of your code.</a></li>
<li><code>smpi/cpu-threshold</code>: <a class="el" href="options.html#options_smpi_bench">smpi/bench: Automatic benchmarking of SMPI code</a></li>
<li><code>smpi/display-timing</code>: <a class="el" href="options.html#options_smpi_timing">smpi/display-timing: Reporting simulation time</a></li>
<li><code>smpi/grow-injected-times</code>: <a class="el" href="options.html#options_model_smpi_test">smpi/test: Inject constant times for calls to MPI_Test</a></li>
<li><code>smpi/host-speed</code>: <a class="el" href="options.html#options_smpi_bench">smpi/bench: Automatic benchmarking of SMPI code</a></li>
<li><code>smpi/IB-penalty-factors</code>: <a class="el" href="options.html#options_model_network_coefs">Correcting important network parameters</a></li>
<li><code>smpi/iprobe</code>: <a class="el" href="options.html#options_model_smpi_iprobe">smpi/iprobe: Inject constant times for calls to MPI_Iprobe</a></li>
<li><code>smpi/init</code>: <a class="el" href="options.html#options_model_smpi_init">smpi/init: Inject constant times for calls to MPI_Init</a></li>
<li><code>smpi/lat-factor</code>: <a class="el" href="options.html#options_model_smpi_lat_factor">smpi/lat-factor: Latency factors</a></li>
<li><code>smpi/ois</code>: <a class="el" href="options.html#options_model_smpi_ois">smpi/ois: Inject constant times for asynchronous send operations</a></li>
<li><code>smpi/or</code>: <a class="el" href="options.html#options_model_smpi_or">smpi/or: Inject constant times for receive operations</a></li>
<li><code>smpi/os</code>: <a class="el" href="options.html#options_model_smpi_os">smpi/os: Inject constant times for send operations</a></li>
<li><code>smpi/papi-events</code>: <a class="el" href="options.html#options_smpi_papi_events">smpi/papi-events: Trace hardware counters with PAPI</a></li>
<li><code>smpi/privatize-global-variables</code>: <a class="el" href="options.html#options_smpi_global">smpi/privatize-global-variables: Automatic privatization of global variables</a></li>
<li><code>smpi/send-is-detached-thresh</code>: <a class="el" href="options.html#options_model_smpi_detached">Simulating MPI detached send</a></li>
<li><code>smpi/simulate-computation</code>: <a class="el" href="options.html#options_smpi_bench">smpi/bench: Automatic benchmarking of SMPI code</a></li>
<li><code>smpi/test</code>: <a class="el" href="options.html#options_model_smpi_test">smpi/test: Inject constant times for calls to MPI_Test</a></li>
<li><code>smpi/use-shared-malloc</code>: <a class="el" href="options.html#options_model_smpi_use_shared_malloc">smpi/use-shared-malloc: Factorize malloc()s</a></li>
<li><code>smpi/wtime</code>: <a class="el" href="options.html#options_model_smpi_wtime">smpi/wtime: Inject constant times for calls to MPI_Wtime</a></li>
<li><b>Tracing configuration options can be found in Section <a class="el" href="outcomes_vizu.html#tracing_tracing_options">Tracing configuration Options</a></b>.</li>
<li><code>storage/model</code>: <a class="el" href="options.html#options_storage_model">options_storage_model</a></li>
<li><code>verbose-exit</code>: <a class="el" href="options.html#options_generic_exit">Behavior on Ctrl-C</a></li>
<li><code>vm/model</code>: <a class="el" href="options.html#options_vm_model">options_vm_model</a></li>
</ul>
<h2><a class="anchor" id="options_index_smpi_coll"></a>
Index of SMPI collective algorithms options</h2>
<p>TODO: All available collective algorithms will be made available via the <code>smpirun --help-coll</code> command. </p>
</div></div><!-- contents -->
</div><!-- doc-content -->
<!-- start footer part -->
<div id="nav-path" class="navpath"><!-- id is needed for treeview function! -->
  <ul>
    <li class="footer">Generated by
    <a href="http://www.doxygen.org/index.html">
    <img class="footer" src="doxygen.png" alt="doxygen"/></a> 1.8.13 </li>
  </ul>
</div>
</body>
</html>