author Aaron Becker Thu, 20 Sep 2012 21:48:33 +0000 (16:48 -0500) committer Aaron Becker Thu, 20 Sep 2012 21:48:33 +0000 (16:48 -0500)
171 files changed:
doc/Makefile
doc/Makefile.common
doc/ampi/Makefile
doc/ampi/title.html [deleted file]
doc/assets/hatchbkgd.png [new file with mode: 0644]
doc/assets/head.html [new file with mode: 0644]
doc/assets/manual.css [new file with mode: 0644]
doc/assets/manual.js [new file with mode: 0644]
doc/bignetsim/Makefile
doc/bignetsim/title.html [deleted file]
doc/bigsim/Makefile
doc/bigsim/title.html [deleted file]
doc/charisma/Makefile
doc/charisma/title.html [deleted file]
doc/charm++/Makefile
doc/charm++/advancedarrays.tex [new file with mode: 0644]
doc/charm++/advancedlb.tex
doc/charm++/advancedpup.tex [new file with mode: 0644]
doc/charm++/alltoall.tex
doc/charm++/arrays.tex
doc/charm++/callbacks.tex
doc/charm++/chares.tex
doc/charm++/checkpoint.tex
doc/charm++/ckloop.tex [new file with mode: 0644]
doc/charm++/commlib.tex
doc/charm++/compile.tex [moved from doc/install/compile.tex with 91% similarity]
doc/charm++/controlpoints.tex
doc/charm++/credits.tex [new file with mode: 0644]
doc/charm++/delegation.tex
doc/charm++/entry.tex
doc/charm++/further.tex [deleted file]
doc/charm++/futures.tex
doc/charm++/groups.tex
doc/charm++/helloworld.tex [new file with mode: 0644]
doc/charm++/hetero.tex [new file with mode: 0644]
doc/charm++/history.tex [new file with mode: 0644]
doc/charm++/inhertmplt.tex
doc/charm++/install.tex [new file with mode: 0644]
doc/charm++/intro.tex
doc/charm++/io.tex [deleted file]
doc/charm++/loadb.tex
doc/charm++/machineModel.tex [new file with mode: 0644]
doc/charm++/manual.tex
doc/charm++/marshalling.tex
doc/charm++/messages.tex
doc/charm++/modules.tex
doc/charm++/mpi-interop.tex [new file with mode: 0644]
doc/charm++/nodegroups.tex
doc/charm++/order.tex
doc/charm++/othercalls.tex
doc/charm++/overview.tex
doc/charm++/pup.tex
doc/charm++/python.tex
doc/charm++/quickbigsim.tex
doc/charm++/quiesce.tex
doc/charm++/readonly.tex
doc/charm++/reductions.tex
doc/charm++/run.tex [moved from doc/install/run.tex with 94% similarity]
doc/charm++/sdag.tex
doc/charm++/sections.tex [new file with mode: 0644]
doc/charm++/startuporder.tex [new file with mode: 0644]
doc/charm++/sync.tex [new file with mode: 0644]
doc/charm++/threaded.tex [new file with mode: 0644]
doc/charm++/title.html [deleted file]
doc/charm++/topology.tex [new file with mode: 0644]
doc/charm++/utilities.tex [new file with mode: 0644]
doc/converse/Makefile
doc/converse/title.html [deleted file]
doc/convext/Makefile
doc/convext/title.html [deleted file]
doc/debugger/Makefile
doc/debugger/title.html [deleted file]
doc/dot.latex2html-init
doc/f90charm/Makefile
doc/f90charm/title.html [deleted file]
doc/faq/Makefile
doc/faq/title.html [deleted file]
doc/fem/Makefile
doc/fem/title.html [deleted file]
doc/ifem/Makefile
doc/ifem/title.html [deleted file]
doc/install/Makefile [deleted file]
doc/install/install.tex [deleted file]
doc/install/manual.tex [deleted file]
doc/install/title.html [deleted file]
doc/latex2html_fixpaths.sh [deleted file]
doc/libraries/Makefile
doc/libraries/liveviz.tex
doc/libraries/title.html [deleted file]
doc/list-charmapi.txt [new file with mode: 0644]
doc/list-cikeywords.txt [new file with mode: 0644]
doc/manual.css [deleted file]
doc/markupSanitizer.py [new file with mode: 0755]
doc/mblock/Makefile
doc/mblock/title.html [deleted file]
doc/navmenuGenerator.py [new file with mode: 0755]
doc/netfem/Makefile
doc/netfem/title.html [deleted file]
doc/parfum/Makefile
doc/parfum/title.html [deleted file]
doc/pose/Makefile
doc/pose/title.html [deleted file]
doc/pplmanual.sty
doc/pplmanual.tex
doc/projections/Makefile
doc/projections/manual.tex
doc/projections/title.html [deleted file]
doc/projections/tracing.tex [new file with mode: 0644]
doc/tcharm/Makefile
doc/tcharm/title.html [deleted file]
examples/ampi/pingpong/pingpong-1way.c
examples/charm++/PUP/HeapPUP/HeapObject.h [new file with mode: 0644]
examples/charm++/PUP/HeapPUP/Makefile [new file with mode: 0644]
examples/charm++/PUP/HeapPUP/SimplePUP.C [new file with mode: 0644]
examples/charm++/PUP/HeapPUP/SimplePUP.ci [new file with mode: 0644]
examples/charm++/PUP/HeapPUP/SimplePUP.h [new file with mode: 0644]
examples/charm++/PUP/Makefile [new file with mode: 0644]
examples/charm++/PUP/README [new file with mode: 0644]
examples/charm++/PUP/STLPUP/HeapObjectSTL.h [new file with mode: 0644]
examples/charm++/PUP/STLPUP/Makefile [new file with mode: 0644]
examples/charm++/PUP/STLPUP/SimplePUP.C [new file with mode: 0644]
examples/charm++/PUP/STLPUP/SimplePUP.ci [new file with mode: 0644]
examples/charm++/PUP/STLPUP/SimplePUP.h [new file with mode: 0644]
examples/charm++/PUP/SimpleObject.h [new file with mode: 0644]
examples/charm++/PUP/SimplePUP.C [new file with mode: 0644]
examples/charm++/PUP/SimplePUP.ci [new file with mode: 0644]
examples/charm++/PUP/SimplePUP.h [new file with mode: 0644]
examples/charm++/PUP/pupDisk/Makefile [moved from examples/charm++/pupDisk/Makefile with 92% similarity]
examples/charm++/PUP/pupDisk/README [moved from examples/charm++/pupDisk/README with 100% similarity]
examples/charm++/PUP/pupDisk/pupDisk.C [moved from examples/charm++/pupDisk/pupDisk.C with 100% similarity]
examples/charm++/PUP/pupDisk/pupDisk.ci [moved from examples/charm++/pupDisk/pupDisk.ci with 97% similarity]
examples/charm++/PUP/pupDisk/pupDisk.h [moved from examples/charm++/pupDisk/pupDisk.h with 100% similarity]
examples/charm++/PUP/pupDisk/someData.h [moved from examples/charm++/pupDisk/someData.h with 100% similarity]
examples/charm++/PUP/seekBlock/Makefile [new file with mode: 0644]
examples/charm++/PUP/seekBlock/seek_block.cc [new file with mode: 0644]
examples/charm++/PUP/seekBlock/seek_block.ci [new file with mode: 0644]
examples/charm++/PUP/seekBlock/seek_block.h [new file with mode: 0644]
examples/charm++/array/Makefile [new file with mode: 0644]
examples/charm++/array/pgm.C [new file with mode: 0644]
examples/charm++/array/pgm.ci [new file with mode: 0644]
examples/charm++/fib/fib.cc
examples/charm++/fib/fib.ci
examples/charm++/fib/fib.h [deleted file]
examples/charm++/histogram_group/Makefile [new file with mode: 0644]
examples/charm++/histogram_group/pgm.cc [new file with mode: 0644]
examples/charm++/histogram_group/pgm.ci [new file with mode: 0644]
examples/charm++/jacobi3d-sdag/jacobi3d.C
examples/charm++/jacobi3d-sdag/jacobi3d.ci
examples/charm++/threaded_ring/Makefile [new file with mode: 0644]
examples/charm++/threaded_ring/threaded_ring.cc [new file with mode: 0644]
examples/charm++/threaded_ring/threaded_ring.ci [new file with mode: 0644]
examples/charm++/threaded_ring/threaded_ring.h [new file with mode: 0644]
smart-build.pl
src/ck-core/ckcausalmlog.C
src/ck-core/cklocation.C
src/ck-core/ckmessagelogging.C
src/ck-core/ckmessagelogging.h
src/ck-core/ckobjid.C [new file with mode: 0644]
src/ck-core/ckobjid.h
src/ck-core/envelope.h
src/ck-ldb/CentralLB.C
src/ck-ldb/lbdb.h
src/libs/ck-libs/NDMeshStreamer/DataItemTypes.h
src/libs/ck-libs/NDMeshStreamer/NDMeshStreamer.ci
src/libs/ck-libs/NDMeshStreamer/NDMeshStreamer.h
src/scripts/Make.depends
src/scripts/Makefile
src/util/CrayNid.c
src/xlat-i/xi-main.C
tests/charm++/jacobi3d-sdag/Makefile
tests/charm++/jacobi3d-sdag/jacobi3d.C

index 0ddd6d1b949fa947d9a65a2f2020e5a4f7308923..98efdc8ecd5d696c042ce2db5e21f561cf7039bd 100644 (file)
@@ -2,7 +2,7 @@ IDIR    = ../doc
LNCMD  = test ! -f pplmanual.sty && ln -f -s ../pplmanual.sty .
RMCMD  = rm -f ./pplmanual.sty
WEBDIR = /www/manuals
-DIRS   = install converse convext charm++ libraries f90charm charisma pose \
+DIRS   = converse convext charm++ libraries f90charm charisma pose \
fem ifem netfem ampi bigsim mblock projections tcharm debugger faq \
bignetsim

index 4bc1b5579e29831d91d1f16b9fb90cfb43b0dbb3..d0b77f579a05d5fcef61410a75b862805c036f33 100644 (file)
@@ -7,6 +7,7 @@
#   TEX: all TeX files to depend on (often just "manual.tex")
#   DEST: destination manual name (e.g., "fem")
#   LATEX2HTML: call to latex2html, which should be "$(L2H) <args>"
+#   DOCTITLE: title of the manual
#   (optional) PROJECT_LINK: HTML to include at bottom of page

# Destination directory for local copy of files (e.g., on user machine)
@@ -20,11 +21,12 @@
L2H=latex2html -white -antialias -local_icons \
	-long_titles 1 \
	-show_section_numbers \
	-top_navigation \
-	-address '<p align="right">'"`/bin/date +"%B %d, %Y"`"'<br> \
-	'$(PROJECT_LINK)'<a href="http://charm.cs.uiuc.edu/">Charm Homepage</a>'

DEPTEX=$(TEX) $(FILE).aux index.tex

+.PHONY: setup html1page
+
# Default target: build postscript, pdf, and html:

all: pdf ps html
@@ -49,19 +51,31 @@
$(FILE).pdf: $(TEX) $(FILE).aux
	pdflatex $(FILE).tex

# HTML Target:
-html: $(FILE)
+html: html1page $(FILE)
+
+
+$(FILE): setup $(TEX) $(FILE).aux
+	export MANUALTITLE=$(DOCTITLE) && $(LATEX2HTML) $(FILE).tex && unset MANUALTITLE
+       ../navmenuGenerator.py $@/index.html >$(tmpFile)
+	for f in $@/*.html; do echo "Sanitizing $$f"; sed -i -e 's!'`pwd`'/!!g' $$f; ../markupSanitizer.py $$f $(tmpFile) > tmpop && cat tmpop > $$f && rm tmpop; done
+	rm -f $(tmpFile)
+
+l2h1pagecfg = ./l2h-1page-init

-$(FILE): $(TEX) $(FILE).aux
+html1page: setup $(TEX) $(FILE).aux
+	cp ../dot.latex2html-init $(l2h1pagecfg)
+	sed -i -e "s|MAX_SPLIT_DEPTH[ ]*=|MAX_SPLIT_DEPTH = 0; #|g" $(l2h1pagecfg)
+	$(L2H) -init_file $(l2h1pagecfg) $(FILE).tex
+       -@mv $(FILE)/$(FILE).html $(FILE)/$(FILE)-1p.html
+	rm -f $(l2h1pagecfg)
+
+setup:
	-@ln -s ../pplmanual.* .
	-@ln -s ../dot.latex2html-init .latex2html-init
	-@rm -fr $(FILE)/*.html $(FILE)/*.aux
	-@mkdir $(FILE)
-	-@ln -s ../../manual.css $(FILE)
-
-	-/bin/cp title.html $(FILE)
-	$(L2H) -split 0 $(FILE).tex
-	-@mv $(FILE)/$(FILE).html $(FILE)/$(FILE)-1p.html
-	$(LATEX2HTML) $(FILE).tex
-	../latex2html_fixpaths.sh
+	-@/bin/cp -r ../assets fig figs $(FILE)

# LaTeX Index and link support
$(FILE).aux: $(TEX) index.tex $(FIG_TARGET)
index a2e823d75abb79888c6e273446cc0964bee1505e..fac27c04a1a795e8074069e33fc99781910312f9 100644 (file)
@@ -3,6 +3,7 @@ FILE=manual
TEX=$(FILE).tex
DEST=ampi
LATEX2HTML=$(L2H) -split 4

include ../Makefile.common
diff --git a/doc/ampi/title.html b/doc/ampi/title.html
deleted file mode 100644 (file)
index 8209fc0..0000000
+++ /dev/null
@@ -1 +0,0 @@
diff --git a/doc/assets/hatchbkgd.png b/doc/assets/hatchbkgd.png
new file mode 100644 (file)
index 0000000..4941cb4
Binary files /dev/null and b/doc/assets/hatchbkgd.png differ
diff --git a/doc/assets/head.html b/doc/assets/head.html
new file mode 100644 (file)
index 0000000..0053d87
--- /dev/null
@@ -0,0 +1,16 @@
+    <meta content='Charm++ Manual' name='content' />
+               <meta http-equiv="Content-Type" content="text/html;charset=utf-8" />
+
+               <link rel="stylesheet" type="text/css" href="assets/manual.css" />
+    <script src='http://charm.cs.illinois.edu/codemirror/lib/codemirror.js' type='text/javascript'></script>
+    <script src='http://charm.cs.illinois.edu/codemirror/mode/clike/clike.js' type='text/javascript'></script>
+    <script src='http://charm.cs.illinois.edu/codemirror/lib/util/runmode.js' type='text/javascript'></script>
+    <script src='assets/manual.js' type='text/javascript'></script>
+
diff --git a/doc/assets/manual.css b/doc/assets/manual.css
new file mode 100644 (file)
index 0000000..b986602
--- /dev/null
@@ -0,0 +1,136 @@
+/* Century Schoolbook font is very similar to Computer Modern Math: cmmi */
+.MATH    { font-family: "Century Schoolbook", serif; }
+.MATH I  { font-family: "Century Schoolbook", serif; font-style: italic }
+.BOLDMATH { font-family: "Century Schoolbook", serif; font-weight: bold }
+
+/* implement both fixed-size and relative sizes */
+SMALL.XTINY            { font-size : xx-small }
+SMALL.TINY             { font-size : x-small  }
+SMALL.SCRIPTSIZE       { font-size : smaller  }
+SMALL.FOOTNOTESIZE     { font-size : small    }
+SMALL.SMALL            {  }
+BIG.LARGE              {  }
+BIG.XLARGE             { font-size : large    }
+BIG.XXLARGE            { font-size : x-large  }
+BIG.HUGE               { font-size : larger   }
+BIG.XHUGE              { font-size : xx-large }
+
+h1             { color: #7b2e2e; margin: 2px auto; text-align:center; }
+h2             { color: #7b2e2e; padding: 8pt; }
+
+/* Remove the annoying underlines, but still keep it accessible */
+a { text-decoration: none; }
+a:hover { color:#a00; border-bottom: 1px dotted #a00; }
+
+
+
+/* mathematics styles */
+DIV.displaymath                { }     /* math displays */
+TD.eqno                        { }     /* equation-number cells */
+
+
+/* document-specific styles come next */
+body { font-family: "Droid Sans", Arial, sans-serif; }
+h1, h2, h3 { font-family: Puritan, Verdana, Helvetica, sans-serif; }
+code, pre { font-family: "Courier New", Courier, monospace; }
+
+body {
+               background: #ffffff;
+               background: url(hatchbkgd.png) repeat;
+}
+
+#maincontainer {
+               background: #efefef; color: black;
+               margin:5em auto 20px;
+               width: 900px;
+               border: 1px solid #ccc;
+}
+
+.navigation {
+               position: fixed;
+               top:0; left:0;
+               width: 100%;
+               text-align: left;
+               border-bottom: 1px #555555 solid;
+               background: #dfdfdf;
+               font-size: 80%;
+}
+
+ul.manual-toc {
+               -moz-column-width: 32em;
+               -webkit-column-width: 32em;
+               display: none;
+}
+
+ul.manual-toc > li { font-weight: bold; }
+ul.manual-toc > li > ul { font-weight: normal; }
+ul.manual-toc,
+ul.manual-toc ul { list-style-type: none; }
+ul.manual-toc li a { color: #000; text-decoration: none; }
+ul.manual-toc li a:hover { color: #7b2e2e; }
+
+
+               display:inline-block;
+               margin: 5px;
+               float: right;
+}
+
+               display: inline;
+               margin: 0px 1em;
+}
+
+#nav-quicklinks a { text-decoration: none; }
+#nav-quicklinks a:hover { color: #7b2e2e; }
+
+.navsymbol {
+               font-weight:bold;
+               font-size: 130%;
+               line-height:70%;
+}
+
+div.manualtitle        {
+               display: inline-block;
+               margin: 5px;
+               text-align: center;
+               font-weight: bold;
+}
+
+#pulldowntab {
+       position: absolute;
+       left: 50%; top: 0.75em;
+       color: #999;
+       -webkit-transform: rotate(90deg);
+       -moz-transform: rotate(90deg);
+       -o-transform: rotate(90deg);
+       writing-mode: tb-rl;
+}
+
+pre {
+               background: #262626;
+               color: #d9bf8c;
+               margin: 10px auto;
+               display: block;
+               width: 650px;
+               overflow-x: auto;
+               font-size: 110%;
+}
+
+span.textit            { font-style: italic  }
+span.textsl            { font-style: italic  }
+span.arabic            {   }
+span.textbf            { font-weight: bold  }
+span.textsf            { font-style: italic  }
+
diff --git a/doc/assets/manual.js b/doc/assets/manual.js
new file mode 100644 (file)
index 0000000..56262a4
--- /dev/null
@@ -0,0 +1,16 @@
+// Grab all code snippets and paint them to highlight syntax
+$(document).ready( function() {
+    $("pre code").each( function(idx) {
+        CodeMirror.runMode($(this).text(), "text/x-charm++",$(this).get(0));
+    })
+    .children("span.cm-charmkeyword").css("color", "#dd5ef3");
+
+    $(".navigation")
+        .append('<span id="pulldowntab" class="navsymbol">&raquo;</span>')
+        .click( function() { $("ul.manual-toc").fadeToggle(); $("#pulldowntab").toggle(); } )
+        .mouseleave( function() { $("ul.manual-toc").fadeOut('slow'); $("#pulldowntab").fadeIn('slow'); } )
+        .css('cursor','pointer');
+
+} )

index a70667042f09cfbcf7f532fb94cf1de271b4aeb7..88567d7f99427fe9e3d7c913ea74db705ad681a5 100644 (file)
@@ -4,6 +4,7 @@ TEX = $(FILE).tex bignetsim.tex install.tex interconnects.tex usage.tex

DEST   = bignetsim
LATEX2HTML	= $(L2H) -split 5
+DOCTITLE	= 'BigSimulator (BigNetSim) Parallel Simulator Manual'

include ../Makefile.common

diff --git a/doc/bignetsim/title.html b/doc/bignetsim/title.html
deleted file mode 100644 (file)
index f1c39ed..0000000
+++ /dev/null
@@ -1 +0,0 @@
-BigSimulator (BigNetSim) Parallel Simulator Manual

index 23531f160e9c1d0794ab98796163bc756817de0d..9e3d9004cabd7c3969e1b62eaa2544ae7ba22a0d 100644 (file)
@@ -4,6 +4,7 @@ TEX = $(FILE).tex install.tex emulator.tex bgapi.tex interpolation.tex

DEST   = bigsim
LATEX2HTML	= $(L2H) -split 5
+DOCTITLE	= 'BigSim Parallel Emulator Manual'

include ../Makefile.common

diff --git a/doc/bigsim/title.html b/doc/bigsim/title.html
deleted file mode 100644 (file)
index 86e939d..0000000
+++ /dev/null
@@ -1 +0,0 @@
-BigSim Parallel Emulator Manual

index 55986477738bc23de47a4a9cb0d58bedde86c556..a949aa60b1c5864e82e819fdeab5f9c296d553fe 100644 (file)
@@ -3,6 +3,7 @@ FILE=manual
TEX=$(FILE).tex
DEST=charisma
LATEX2HTML=$(L2H) -split 4
+DOCTITLE='Charisma Manual'
PROJECT_LINK='<a href="http://charm.cs.uiuc.edu/research/orch">Charisma Homepage</a><br>'

include ../Makefile.common

diff --git a/doc/charisma/title.html b/doc/charisma/title.html
deleted file mode 100644 (file)
index 17e93f2..0000000
+++ /dev/null
@@ -1 +0,0 @@
-Charisma Manual

index 40a3dae3a26f79f7cc899c27ae3a1888d9faf128..5a25486f5eaa73295d22cf986e2410b8bc401987 100644 (file)
@@ -1,13 +1,15 @@
 # Stub makefile for LaTeX PPL manual
 FILE=manual
 TEX=$(FILE).tex arrays.tex callbacks.tex chares.tex commlib.tex delegation.tex \
-       entry.tex further.tex groups.tex inhertmplt.tex index.tex intro.tex \
+       entry.tex groups.tex inhertmplt.tex index.tex intro.tex \
marshalling.tex messages.tex modules.tex nodegroups.tex othercalls.tex \
-       overview.tex pup.tex quiesce.tex readonly.tex reductions.tex sdag.tex \
-       python.tex alltoall.tex
+       python.tex alltoall.tex history.tex \
+       ../projections/tracing.tex
DEST=charm++
LATEX2HTML=$(L2H) -split 5
+DOCTITLE = 'Charm++ Manual'

include ../Makefile.common

diff --git a/doc/charm++/advancedarrays.tex b/doc/charm++/advancedarrays.tex
new file mode 100644 (file)
index 0000000..c715b56
--- /dev/null
@@ -0,0 +1,482 @@
The basic array features described previously (creation, messaging,
broadcasts, and reductions) are needed in almost every
\charmpp{} program. The more advanced techniques that follow
are not universally needed, but represent many useful optimizations.

\section{Local Access}

\index{ckLocal for arrays}
\label{ckLocal for arrays}
It is possible to get direct access to a local array element using the
proxy's \kw{ckLocal} method, which returns an ordinary \CC\ pointer
to the element if it exists on the local processor, and NULL if
the element does not exist or is on another processor.

\begin{alltt}
A1 *a=a1[i].ckLocal();
if (a==NULL) //...is remote-- send message
else //...is local-- directly use members and methods of a
\end{alltt}

Note that if the element migrates or is deleted, any pointers
obtained with \kw{ckLocal} are no longer valid. It is best,
then, to either avoid \kw{ckLocal} or else call \kw{ckLocal}
each time the element may have migrated; e.g., at the start
of each entry method.

An example of this usage is available
in \examplerefdir{topology/matmul3d}.

\section{Advanced Array Creation}
\label{advanced array create}

There are several ways to control the array creation process.
You can adjust the map and bindings before creation, change
the way the initial array elements are created, create elements
explicitly during the computation, and create elements implicitly,
``on demand''.

You can create all of an array's elements using any one of these methods,
or create different elements using different methods.
An array element has the same syntax and semantics no matter
how it was created.
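The ``pointer if local, NULL otherwise'' contract of \kw{ckLocal} described above can be illustrated outside the runtime. The sketch below is standalone C++ and deliberately not the Charm++ API: {\tt LocalStore}, {\tt Element}, and {\tt ckLocalMock} are hypothetical stand-ins for the array manager's location table, used only to show the required NULL check.

```cpp
#include <cassert>
#include <cstddef>
#include <map>

// Hypothetical element type standing in for a chare array element.
struct Element { int value; };

// Hypothetical mock of one PE's view of an array: ckLocalMock() imitates
// proxy[i].ckLocal(), returning a raw pointer when the element is "local"
// and NULL when it is missing or would live on another PE.
struct LocalStore {
    std::map<int, Element> elems;

    Element* ckLocalMock(int idx) {
        std::map<int, Element>::iterator it = elems.find(idx);
        return (it == elems.end()) ? NULL : &it->second;
    }
};
```

As with the real \kw{ckLocal}, any returned pointer is only valid while the element stays put; callers should re-query after a possible migration rather than cache the pointer.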
\subsection{Configuring Array Characteristics Using CkArrayOptions}
\index{CkArrayOptions}
\label{CkArrayOptions}

The array creation method \kw{ckNew} actually takes a parameter
of type \kw{CkArrayOptions}. This object describes several
optional attributes of the new array.

The most common use of \kw{CkArrayOptions} is to set the number
of initial array elements. A \kw{CkArrayOptions} object will be
constructed automatically in this special common case. Thus
the following code segments all do exactly the same thing:

\begin{alltt}
//Implicit CkArrayOptions
  a1=CProxy_A1::ckNew(\uw{parameters},nElements);

//Explicit CkArrayOptions
  a1=CProxy_A1::ckNew(\uw{parameters},CkArrayOptions(nElements));

//Separate CkArrayOptions
  CkArrayOptions opts(nElements);
  a1=CProxy_A1::ckNew(\uw{parameters},opts);
\end{alltt}

Note that the ``numElements'' in an array element is simply the
numElements passed in when the array was created. The true number of
array elements may grow or shrink during the course of the
computation, so numElements can become out of date. This ``bulk''
constructor approach should be preferred where possible, especially
for large arrays. Bulk construction is handled via a broadcast, which
requires significantly fewer messages than inserting each element
individually (one message send per element).

Examples of bulk construction are commonplace; see
\examplerefdir{jacobi3d-sdag}
for a demonstration of the slightly more complicated case of
multidimensional chare array bulk construction.

\kw{CkArrayOptions} contains a few flags that the runtime can use to
optimize handling of a given array.
If the array elements will only
migrate at controlled points (such as periodic load balancing with
{\tt AtASync()}), this is signalled to the runtime by calling {\tt
  opts.setAnytimeMigration(false)}\footnote{At present, this optimizes
broadcasts to not save old messages for immigrating chares.}. If all
array elements will be inserted by bulk creation or by {\tt
  fooArray[x].insert()} calls, signal this by calling {\tt
  opts.setStaticInsertion(true)}\footnote{This can enable a slightly
  faster default mapping scheme.}.

\subsection{Initial Placement Using Map Objects}
\index{array map}
\label{array map}

You can use \kw{CkArrayOptions} to specify a ``map object'' for an
array. The map object is used by the array manager to determine the
``home'' PE of each element. The home PE is the PE upon which the
element is initially placed, and which will retain responsibility for
maintaining the element's location.

There is a default map object, which maps 1D array indices
in a block fashion to processors, and maps other array
indices based on a hash function. Some other mappings, such as round-robin
(\kw{RRMap}), also exist and can be used
like the custom maps described below.

A custom map object is implemented as a group which inherits from
\kw{CkArrayMap} and defines these virtual methods:

\begin{alltt}
class CkArrayMap : public Group
\{
public:
  //...

  //Return an ``arrayHdl'', given some information about the array
  virtual int registerArray(CkArrayIndex& numElements,CkArrayID aid);
  //Return the home processor number for this element of this array
  virtual int procNum(int arrayHdl,const CkArrayIndex &element);
\}
\end{alltt}

For example, here is a simple 1D block-mapping scheme, where the actual
mapping is handled in the \kw{procNum} function:
\begin{alltt}
class BlockMap : public CkArrayMap
\{
 public:
  BlockMap(void) \{\}
  BlockMap(CkMigrateMessage *m) \{\}
  int registerArray(CkArrayIndex& numElements,CkArrayID aid) \{
    return 0;
  \}
  int procNum(int /*arrayHdl*/,const CkArrayIndex &idx) \{
    int elem=*(int *)idx.data();
    int penum = (elem/(32/CkNumPes()));
    return penum;
  \}
\};
\end{alltt}

Note that the first argument to the \kw{procNum} method exists for reasons
internal to the runtime system and is not used in the calculation of processor
numbers.

Once you've instantiated a custom map object, you can use it to
control the location of a new array's elements using the
\kw{setMap} method of the \kw{CkArrayOptions} object described above.
For example, if you've declared a map object named ``BlockMap'':

\begin{alltt}
//Create the map group
  CProxy_BlockMap myMap=CProxy_BlockMap::ckNew();
//Make a new array using that map
  CkArrayOptions opts(nElements);
  opts.setMap(myMap);
  a1=CProxy_A1::ckNew(\uw{parameters},opts);
\end{alltt}

An example which constructs one element per physical node may be found in
\examplerefdir{PUP/pupDisk}.

Other 3D torus network oriented map examples are in
\examplerefdir{topology}.

\subsection{Initial Elements}
\index{array initial}
\label{array initial}

The map object described above can also be used to create
the initial set of array elements in a distributed fashion.
An array's initial elements are created by its map object,
by making a call to \kw{populateInitial} on each processor.
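The integer arithmetic in BlockMap's \kw{procNum} above can be checked outside the runtime. Below is a standalone C++ sketch, not the Charm++ API: {\tt blockMapPE} is a hypothetical helper in which {\tt numPes} stands in for {\tt CkNumPes()} and {\tt numElements} replaces the example's hard-coded 32; unlike the example above, it uses ceiling division so that an element count that does not divide evenly still maps every element to a valid PE.

```cpp
#include <cassert>

// Hypothetical standalone analogue of BlockMap::procNum: assign element
// indices to PEs in contiguous blocks of ceil(numElements/numPes).
int blockMapPE(int elem, int numElements, int numPes) {
    int block = (numElements + numPes - 1) / numPes; // ceil(n/p) elems per PE
    int pe = elem / block;
    return (pe < numPes) ? pe : numPes - 1;          // clamp any remainder
}
```

With 32 elements on 4 PEs this reproduces the example's behavior (blocks of 8: elements 0..7 on PE 0, 8..15 on PE 1, and so on), while 10 elements on 3 PEs yields blocks of 4 with the short tail on the last PE.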
You can create your own set of elements by creating your
own map object and overriding this virtual function of \kw{CkArrayMap}:

\begin{alltt}
  virtual void populateInitial(int arrayHdl,int numInitial,
        void *msg,CkArrMgr *mgr)
\end{alltt}

In this call, \kw{arrayHdl} is the value returned by \kw{registerArray},
\kw{numInitial} is the number of elements passed to \kw{CkArrayOptions},
\kw{msg} is the constructor message to pass, and \kw{mgr} is the
array to create.

\kw{populateInitial} creates new array elements using the method
\kw{void CkArrMgr::insertInitial(CkArrayIndex idx,void *ctorMsg)}.
For example, to create one row of 2D array elements on each processor,
you would write:

\begin{alltt}
void xyElementMap::populateInitial(int arrayHdl,int numInitial,
        void *msg,CkArrMgr *mgr)
\{
  if (numInitial==0) return; //No initial elements requested

  //Create each local element
  int y=CkMyPe();
  for (int x=0;x<numInitial;x++) \{
    mgr->insertInitial(CkArrayIndex2D(x,y),CkCopyMsg(&msg));
  \}
  mgr->doneInserting();
  CkFreeMsg(msg);
\}
\end{alltt}

Thus calling \kw{ckNew(10)} on a 3-processor machine would result in
30 elements being created.

\subsection{Bound Arrays}
\index{bound arrays} \index{bindTo}
\label{bound arrays}

You can ``bind'' a new array to an existing array
using the \kw{bindTo} method of \kw{CkArrayOptions}. Bound arrays
act like separate arrays in all ways except for migration--
corresponding elements of bound arrays always migrate together.
For example, this code creates two arrays A and B which are
bound together-- A[i] and B[i] will always be on the same processor.
\begin{alltt}
//Create the first array normally
  aProxy=CProxy_A::ckNew(\uw{parameters},nElements);
//Create the second array bound to the first
  CkArrayOptions opts(nElements);
  opts.bindTo(aProxy);
  bProxy=CProxy_B::ckNew(\uw{parameters},opts);
\end{alltt}

An arbitrary number of arrays can be bound together--
in the example above, we could create yet another array
C and bind it to A or B. The result would be the same
in either case-- A[i], B[i], and C[i] will always be
on the same processor.

There is no relationship between the types of bound arrays--
it is permissible to bind arrays of different types or of the
same type. It is also permissible to have different numbers
of elements in the arrays, although elements of A which have
no corresponding element in B obey no special semantics.
Any method may be used to create the elements of any bound
array.

Bound arrays are often useful if A[i] and B[i] perform different
aspects of the same computation, and thus will run most efficiently
if they lie on the same processor. Bound array elements are guaranteed
to always be able to interact using \kw{ckLocal} (see
section~\ref{ckLocal for arrays}), although the local pointer must
be refreshed after any migration. This should be done during the \kw{pup}
routine. When migrated, all elements that are bound together will be created
at the new processor before \kw{pup} is called on any of them, ensuring that
a valid local pointer to any of the bound objects can be obtained during the
\kw{pup} routine of any of the others.

For example, suppose an array {\it Alibrary} is implemented as a library
module. It implements a certain functionality by operating on a data array
{\it dest}, which is just a pointer to some user-provided data.
A user-defined array {\it UserArray} is created and bound to
the array {\it Alibrary} to take advantage of the functionality provided
by the library.
When a bound array element migrates, the {\it data} pointer in {\it UserArray}
is re-allocated in {\it pup()}; thus {\it UserArray} is responsible for
refreshing the pointer {\it dest} stored in {\it Alibrary}.

\begin{alltt}
class Alibrary: public CProxy_Alibrary \{
public:
  ...
  void set_ptr(double *ptr) \{ dest = ptr; \}
  virtual void pup(PUP::er &p);
private:
  double *dest;           // point to user data in user defined bound array
\};

class UserArray: public CProxy_UserArray \{
public:
  virtual void pup(PUP::er &p) \{
    p|len;
    if(p.isUnpacking()) \{
      data = new double[len];
      Alibrary *myfellow = AlibraryProxy(thisIndex).ckLocal();
      myfellow->set_ptr(data);    // refresh data in bound array
    \}
    p(data, len);
  \}
private:
  CProxy_Alibrary AlibraryProxy;  // proxy to my bound array
  double *data;                   // user allocated data pointer
  int len;
\};
\end{alltt}

A demonstration of bound arrays can be found in
\testrefdir{startupTest}.

\subsection{Dynamic Insertion}
\label{dynamic_insertion}

In addition to creating initial array elements using ckNew,
you can also create array elements during the computation.

You insert elements into the array by indexing the proxy
and calling insert. The insert call optionally takes
parameters, which are passed to the constructor, and a
processor number, where the element will be created.
Array elements can be inserted in any order from
any processor at any time. Array elements need not
be contiguous.

If using \kw{insert} to create all the elements of the array,
you must call \kw{CProxy\_Array::doneInserting} before using
the array.

\begin{alltt}
//In the .C file:
int x,y,z;
CProxy_A1 a1=CProxy_A1::ckNew();  //Creates a new, empty 1D array
for (x=...) \{
   a1[x  ].insert(\uw{parameters});  //Bracket syntax
   a1(x+1).insert(\uw{parameters});  // or equivalent parenthesis syntax
\}
a1.doneInserting();

CProxy_A2 a2=CProxy_A2::ckNew();   //Creates 2D array
for (x=...) for (y=...)
+ a2(x,y).insert(\uw{parameters}); //Can't use brackets!
+a2.doneInserting();
+
+CProxy_A3 a3=CProxy_A3::ckNew(); //Creates 3D array
+for (x=...) for (y=...) for (z=...)
+ a3(x,y,z).insert(\uw{parameters});
+a3.doneInserting();
+
+CProxy_AF aF=CProxy_AF::ckNew(); //Creates user-defined index array
+for (...) \{
+ aF[CkArrayIndexFoo(...)].insert(\uw{parameters}); //Use brackets...
+ aF(CkArrayIndexFoo(...)).insert(\uw{parameters}); // ...or parenthesis
+\}
+aF.doneInserting();
+
+\end{alltt}
+
+The \kw{doneInserting} call starts the reduction manager (see ``Array
+Reductions'') and load balancer (see~\ref{lbFramework})-- since
+these objects need to know about all the array's elements, they
+must be started after the initial elements are inserted.
+You may call \kw{doneInserting} multiple times, but only the first
+call actually does anything. You may even \kw{insert} or \kw{destroy}
+elements after a call to \kw{doneInserting}, with different semantics--
+see the reduction manager and load balancer sections for details.
+
+If you do not specify one, the system will choose a processor to
+create an array element on based on the current map object.
+
+A demonstration of dynamic insertion is available:
+\examplerefdir{hello/fancyarray}
+
+\subsection{Demand Creation}
+
+Demand Creation is a specialized form of dynamic insertion. Normally, invoking an entry method on a nonexistent array
+element is an error. But if you add the attribute
+\index{createhere} \index{createhome}
+\kw{[createhere]} or \kw{[createhome]} to an entry method,
+ the array manager will
+``demand create'' a new element to handle the message.
+
+With \kw{[createhome]}, the new element
+will be created on the home processor, which is most efficient when messages for
+the element may arrive from anywhere in the machine. With \kw{[createhere]},
+the new element is created on the sending processor, which is most efficient
+when messages will often be sent from that same processor.
+
+The new element is created by calling its default (taking no
+parameters) constructor, which must exist and be listed in the .ci file.
+A single array can have a mix of demand-creation and
+classic entry methods, and of demand-created and normally
+created elements.
+
+A simple example of demand creation can be found in
+\testrefdir{demand\_creation}.
+
+\section{User-defined Array Indices}
+\label{user-defined array index type}
+\index{Array index type, user-defined}
+
+\charmpp{} array indices are arbitrary collections of integers.
+To define a new array index, you create an ordinary C++ class
+which inherits from \kw{CkArrayIndex} and sets the ``nInts'' member
+to the length, in integers, of the array index.
+
+For example, if you have a structure or class named ``Foo'', you
+can use a \uw{Foo} object as an array index by defining the class:
+
+\begin{alltt}
+#include <charm++.h>
+class CkArrayIndexFoo:public CkArrayIndex \{
+ Foo f;
+public:
+ CkArrayIndexFoo(const Foo \&in)
+ \{
+ f=in;
+ nInts=sizeof(f)/sizeof(int);
+ \}
+ //Not required, but convenient: cast-to-foo operators
+ operator Foo &() \{return f;\}
+ operator const Foo &() const \{return f;\}
+\};
+\end{alltt}
+
+Note that \uw{Foo}'s size must be an integral number of integers--
+you must pad it with zero bytes if this is not the case.
+Also, \uw{Foo} must be a simple class-- it cannot contain
+pointers, have virtual functions, or require a destructor.
+Finally, there is a \charmpp\ configuration-time option called
+CK\_ARRAYINDEX\_MAXLEN \index{CK\_ARRAYINDEX\_MAXLEN}
+which is the largest allowable number of
+integers in an array index. The default is 3, but you may
+override this to any value by passing ``-DCK\_ARRAYINDEX\_MAXLEN=n''
+to the \charmpp\ build script as well as all user code. Larger
+values will increase the size of each message.
+
+You can then declare an array indexed by \uw{Foo} objects with
+
+\begin{alltt}
+//in the .ci file:
+array [Foo] AF \{ entry AF(); ...
\}
+
+//in the .h file:
+class AF : public CBase\_AF
+\{ public: AF() \{\} ... \};
+
+//in the .C file:
+ Foo f;
+ CProxy_AF a=CProxy_AF::ckNew();
+ a[CkArrayIndexFoo(f)].insert();
+ ...
+\end{alltt}
+
+Note that since our CkArrayIndexFoo constructor is not declared
+with the explicit keyword, we can equivalently write the last line as:
+
+\begin{alltt}
+ a[f].insert();
+\end{alltt}
+
+When you implement your array element class, as shown above you
+can inherit from \kw{CBase}\_\uw{ClassName},
+a class templated by the index type \uw{Foo}. In the old syntax,
+you could also inherit directly from \kw{ArrayElementT}.
+The array index (an object of type \uw{Foo}) is then accessible as
+``thisIndex''. For example:
+
+\begin{alltt}
+
+//in the .C file:
+AF::AF()
+\{
+ Foo myF=thisIndex;
+ functionTakingFoo(myF);
+\}
+\end{alltt}
+
+A demonstration of user-defined indices can be seen in
+\examplerefdir{hello/fancyarray}.
+
+%\section{Load Balancing Chare Arrays}
+%see section~\ref{lbFramework}
+
index 52fb60a6b17b850cd85efc330c16a4b46aefcb0d..8a8d602ee1f8376e32d0744327a724a72cba4676 100644 (file)
+\section{Load Balancing Simulation}
+
+The simulation feature of the load balancing framework allows users to collect information
+about the compute WALL/CPU time and communication of the chares during a particular run of
+the program and use this information later to test different load balancing strategies and
+see which one is best suited to the program's behavior. Currently, this feature is supported only for
+the centralized load balancing strategies. For this, the load balancing framework
+accepts the following command line options:
+\begin{enumerate}
+\item {\em +LBDump StepStart}\\
+ This will dump the compute and the communication data collected by the load balancing framework
+ starting from the load balancing step {\em StepStart} into a file on the disk. The name of the file
+ is given by the {\em +LBDumpFile} option.
The load balancing step in the
+ program is numbered starting from 0. A negative value for {\em StepStart} will be converted to 0.
+\item {\em +LBDumpSteps StepsNo}\\
+ This option specifies the number of load balancing steps for which data will be dumped to disk.
+ If omitted, its default value is 1. The program will exit after {\em StepsNo} files are created.
+\item {\em +LBDumpFile FileName}\\
+ This option specifies the base name of the file created with the load balancing data. If this
+ option is not specified, the framework uses the default file {\tt lbdata.dat}. Since multiple steps are allowed,
+ a number corresponding to the step number is appended to the filename in the form {\tt Filename.\#};
+ this applies to both dump and simulation.
+\item {\em +LBSim StepStart}\\
+ This option instructs the framework to do the simulation starting from the {\em StepStart} step.
+ When this option is specified, the load balancing data along with the step
+ number will be read from the file specified in the {\em +LBDumpFile}
+ option. The program will print the results of the balancing for a number of steps given
+ by the {\em +LBSimSteps} option, and then will exit.
+\item {\em +LBSimSteps StepsNo}\\
+ This option is applicable only to the simulation mode. It specifies the number of
+ load balancing steps to be simulated. The default value is 1.
+\item {\em +LBSimProcs}\\
+ With this option, the user can change the number of processors
+ specified to the load balancing strategy. It may be used to test
+ the strategy in cases where a processor crashes or a new processor becomes available. If this number is
+ unchanged from the original run, starting from the second step file, the program will print additional
+ information about how the simulated load differs from the real load during the run (considering all
+ strategies that were applied while running). This may be used to test the validity of a load balancer
+ prediction against the load actually observed.
If the strategies used during run and simulation differ, the additional data
+ printed may not be useful.
+\end{enumerate}
+Here is an example that collects data for a 1000-processor run of a program:
+\begin{alltt}
+./charmrun pgm +p1000 +balancer RandCentLB +LBDump 2 +LBDumpSteps 4 +LBDumpFile lbsim.dat
+\end{alltt}
+This will collect data in the files lbsim.dat.\{2,3,4,5\}. We can use this data to
+analyze the performance of various centralized strategies using:
+\begin{alltt}
+./charmrun pgm +balancer <Strategy to test> +LBSim 2 +LBSimSteps 4 +LBDumpFile lbsim.dat
+[+LBSimProcs 900]
+\end{alltt}
+Please note that this does not invoke the real application. In fact,
+ ``pgm'' can be replaced with any application that calls a centralized load balancer.
+An example can be found in \testrefdir{load\_balancing/lb\_test}.
+
+\section{Future load predictor}
+
+When objects do not follow the assumption that the future workload will be the
+same as the past, the load balancer might not have the right information to do
+a good rebalancing job. To prevent this, the user can provide a transition
+function to the load balancer to predict what the future workload will be, given
+the past instrumented one. For this, the user can provide a specific class
+which inherits from {\tt LBPredictorFunction} and implements the appropriate functions.
+Here is the abstract class:
+\begin{alltt}
+class LBPredictorFunction \{
+public:
+ int num_params;
+ virtual void initialize_params(double *x);
-\subsection{Advanced Load Balancing}
-
-\label{advancedlb}
-
-\subsubsection{Control CPU Load Statistics}
-
-Charm++ programmers can control CPU load data in the load balancing database
-before a load balancing phase is started (which is the time when load balancing
+ virtual double predict(double x, double *params) =0;
+ virtual void print(double *params) \{PredictorPrintf("LB: unknown model");\};
+ virtual void function(double x, double *param, double &y, double *dyda) =0;
+\};
+\end{alltt}
+\begin{itemize}
+\item {\tt initialize\_params} by default initializes the parameters randomly. If the user
+knows what they should be, this function can be re-implemented.
+\item {\tt predict} is the function that predicts the future load based on the function parameters.
+An example for the {\em predict} function is given below.
+\begin{verbatim}
+double predict(double x, double *param) {return (param[0]*x + param[1]);}
+\end{verbatim}
+\item {\tt print} is useful for debugging and can be re-implemented to produce a meaningful
+printout of the learned model.
+\item {\tt function} is a function internally needed to learn the parameters; {\tt x} and
+{\tt param} are input, {\tt y} and {\tt dyda} are output (the computed function and
+all its derivatives with respect to the parameters, respectively).
+For the example above, the function should look like:
+\begin{verbatim}
+void function(double x, double *param, double &y, double *dyda) {
+ y = predict(x, param);
+ dyda[0] = x;
+ dyda[1] = 1;
+}
+\end{verbatim}
+\end{itemize}
+Other than these functions, the user should provide a constructor which must initialize
+{\tt num\_params} to the number of parameters the model has to learn. This number is
+the dimension of {\tt param} and {\tt dyda} in the previous functions. For the given
+example, the constructor is {\tt \{num\_params = 2;\}}.
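Putting the pieces above together, here is a sketch of a complete predictor subclass. The base class is a self-contained stand-in that reproduces the interface shown above (the real one lives in the Charm++ load balancing headers); PredictorPrintf is replaced by printf, and the random default of initialize_params is simplified to zeros, purely so the sketch compiles on its own.

```cpp
#include <cstdio>

// Stand-in for the framework's abstract class shown above, reproduced
// here only so this sketch is self-contained.
class LBPredictorFunction {
public:
  int num_params;
  virtual ~LBPredictorFunction() {}
  // The framework's default initializes randomly; zeros keep the
  // stand-in simple.
  virtual void initialize_params(double *x) {
    for (int i = 0; i < num_params; i++) x[i] = 0.0;
  }
  virtual double predict(double x, double *params) = 0;
  virtual void print(double *params) { std::printf("LB: unknown model\n"); }
  virtual void function(double x, double *param, double &y, double *dyda) = 0;
};

// A linear model y = param[0]*x + param[1], packaged as a subclass.
class LinearPredictor : public LBPredictorFunction {
public:
  LinearPredictor() { num_params = 2; }  // learn slope and intercept
  double predict(double x, double *param) { return param[0] * x + param[1]; }
  void print(double *param) {
    std::printf("LB: %g * x + %g\n", param[0], param[1]);
  }
  void function(double x, double *param, double &y, double *dyda) {
    y = predict(x, param);
    dyda[0] = x;  // derivative with respect to param[0]
    dyda[1] = 1;  // derivative with respect to param[1]
  }
};
```

An instance would then be handed to the runtime with PredictorOn(new LinearPredictor); as described below.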
+
+If the model for computation is not known, the user can let the system
+use the default function.
+
+As seen, the function can have several parameters which will be learned during
+the execution of the program. For this, the user can add the following command
+line arguments to specify the learning behavior:
+\begin{enumerate}
+\item {\em +LBPredictorWindow size}\\
+This parameter specifies the number of statistics steps the load balancer will
+store. The greater this number is, the better the
+approximation of the workload will be, but more memory is required to store
+the intermediate information. The default is 20.
+\item {\em +LBPredictorDelay steps}\\
+This tells how many load balancer steps to wait before considering the
+function parameters learned and starting to use the model. The load balancer will
+collect statistics for {\em +LBPredictorWindow} steps, but it will start using
+the model as soon as {\em +LBPredictorDelay} steps of information have been collected. The
+default is 10.
+\end{enumerate}
+Moreover, another flag can be set to enable the predictor from the command line: {\em
++LBPredictor}.\\
+Other than the command line options, there are some methods
+which can be called from the user program to modify the predictor. These methods are:
+\begin{itemize}
+\item {\tt void PredictorOn(LBPredictorFunction *model);}
+\item {\tt void PredictorOn(LBPredictorFunction *model,int window);}
+\item {\tt void PredictorOff();}
+\item {\tt void ChangePredictor(LBPredictorFunction *model);}
+\end{itemize}
+
+An example can be found in \testrefdir{load\_balancing/lb\_test/predictor}.
+\section{Control CPU Load Statistics}
+
+\charmpp{} programmers can modify the CPU load data in the load balancing database
+before a load balancing phase starts (which is the time when load balancing
In an array element, the following function can be invoked to overwrite the
-CPU load that is measured by load balancing framework.
+CPU load that is measured by the load balancing framework.

\begin{alltt}
double newTiming;
@@ -21,7 +150,7 @@ CPU load that is measured by load balancing framework.
 the superclass of all array elements.

 The users can also retrieve the current timing that the load balancing runtime
-has measured for the current array element.
+has measured for the current array element using {\em getObjTime()}.

\begin{alltt}
double measuredTiming;
@@ -31,44 +160,42 @@ has measured for the current array element.
 This is useful when the users want to derive a new CPU load based on the
 existing one.

-\subsubsection{Model-based Load Balancing}
+\section{Model-based Load Balancing}

-Charm++ programmers can also choose to feed load balancer with their own CPU
-timing of each Chare based on certain computational model of the applications.
+The user can choose to feed the load balancer with their own CPU
+timing for each chare based on a certain computational model of the application.

-To do so, first turn off automatic CPU load measurement completely
-by setting:
+To do so, in the array element's constructor, the user first needs to turn off
+automatic CPU load measurement completely by setting

\begin{alltt}
usesAutoMeasure = CmiFalse;
\end{alltt}

-in array element's constructor.
-
-Then the users need to implement the following function to the chare array
+The user must also implement the following function in the chare array
classes:

\begin{alltt}
virtual void CkMigratable::UserSetLBLoad(); // defined in base class
\end{alltt}

-This function served as a callback that is called on each chare object when
+This function serves as a callback that is called on each chare object when
{\em AtSync()} is called and ready to do load balancing. The implementation of
-{\em UserSetLBLoad()} is simply to set the current chare object's CPU load to
-load balancer framework.
{\em setObjTime()} described above can be used for
+{\em UserSetLBLoad()} is simply to set the current chare object's CPU load in
+the load balancing framework. {\em setObjTime()} described above can be used for
+this.

-\subsubsection{Writing a communication-aware load balancing strategy}
+\section{Writing a new load balancing strategy}

-Charm++ programmers can choose an existing load balancing strategy from
-Charm++'s built-in strategies(see ~\ref{lbStrategy}) for the best performance
+\charmpp{} programmers can choose an existing load balancing strategy from
+\charmpp{}'s built-in strategies (see~\ref{lbStrategy}) for the best performance
based on the characteristics of their applications. However, they can also
choose to write their own load balancing strategies.

-The Charm++ load balancing framework provides a simple scheme to incorporate
+The \charmpp{} load balancing framework provides a simple scheme to incorporate
new load balancing strategies. The programmer needs to write their strategy for
-load balancing based on a instrumented ProcArray and ObjGraph provided by the
-load balancing framework. This strategy is to be incorporated within this
+load balancing based on the instrumented ProcArray and ObjGraph provided by the
+load balancing framework. This strategy is implemented within this
function:

\begin{alltt}
@@ -90,9 +217,9 @@ void FooLB::work(LDStats *stats) \{
\end{alltt}

Figure~\ref{fig:ckgraph} explains the two data structures available to the
-strategy: ProcArray and ObjGraph. Using these, the strategy should assign new
-processors for objects it wants to be migrated through the setNewPe() method.
-
+strategy: ProcArray and ObjGraph. Using them, the strategy should assign objects
+to the new processors to which they should be migrated, through the setNewPe() method.
+See {\tt src/ck-ldb/GreedyLB.C} for an example.
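The greedy idea used by GreedyLB can be sketched without the Charm++ data structures. The following is a hypothetical, self-contained model of the work() step: plain vectors stand in for ProcArray and ObjGraph, and writing into newPe plays the role of setNewPe(). Objects are taken in decreasing load order and each is placed on the currently least-loaded processor.

```cpp
#include <algorithm>
#include <functional>
#include <queue>
#include <utility>
#include <vector>

// Sketch of a greedy strategy: returns newPe[i] = processor chosen for
// object i, given per-object loads and a processor count. This models
// the decision logic only, not the real Charm++ strategy interface.
std::vector<int> greedyAssign(const std::vector<double> &objLoad, int nProcs) {
  // Visit objects in decreasing load order.
  std::vector<int> order(objLoad.size());
  for (size_t i = 0; i < order.size(); i++) order[i] = (int)i;
  std::sort(order.begin(), order.end(),
            [&](int a, int b) { return objLoad[a] > objLoad[b]; });

  // Min-heap of (current processor load, processor id).
  typedef std::pair<double, int> ProcEntry;
  std::priority_queue<ProcEntry, std::vector<ProcEntry>,
                      std::greater<ProcEntry> > heap;
  for (int p = 0; p < nProcs; p++) heap.push(ProcEntry(0.0, p));

  std::vector<int> newPe(objLoad.size());
  for (int obj : order) {
    ProcEntry least = heap.top(); heap.pop();
    newPe[obj] = least.second;          // plays the role of setNewPe()
    least.first += objLoad[obj];        // processor absorbs the object's load
    heap.push(least);
  }
  return newPe;
}
```

A real strategy would additionally weigh communication between objects, which is exactly what the ObjGraph edges expose.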
\begin{figure}[h]
\centering
\includegraphics[width=6.0in]{fig/ckgraph.png}
@@ -101,18 +228,18 @@ balancing strategy}
\label{fig:ckgraph}
\end{figure}

-Incorporating this strategy into the Charm++ build framework is explained in
+Incorporating this strategy into the \charmpp{} build framework is explained in
the next section.

-\subsubsection{Adding a load balancer to Charm++}
+\section{Adding a load balancer to \charmpp{}}

Let us assume that we are writing a new centralized load balancer called FooLB.
-The next few steps explain the addition of the load balancer to the Charm++
+The next few steps explain how to add the load balancer to the \charmpp{}
build system:

\begin{enumerate}
-\item Create files named {\em FooLB.ci, FooLB.h and FooLB.C}. One can choose to
-copy and rename the files GraphPartLB.* and rename the class name in those
+\item Create files named {\em FooLB.ci, FooLB.h and FooLB.C} in the {\tt src/ck-ldb} directory.
+One can choose to copy the files GraphPartLB.* and rename the class in those
files.

\item Implement the strategy in the {\em FooLB} class method --- {\bf
@@ -131,22 +258,21 @@ link time, you also need to create the dependency file called
libmoduleFooLB.dep. Run the script in charm/tmp, which creates the new Makefile
named ``Make.lb''.

-\item Run ``make depends'' to update dependence rule of Charm++ files. And run
-``make charm++'' to compile Charm++ which includes the new load balancing
+\item Run ``make depends'' to update the dependency rules of \charmpp{} files. Then run
+``make charm++'' to compile \charmpp{}, which includes the new load balancing
strategy files.
\end{enumerate}

-\subsubsection{Understand Load Balancing Database Data Structure}
-
+\section{Understanding the Load Balancing Database Data Structure}
\label{lbdatabase}

-To write a load balancing strategy, one may want to know
+To write a load balancing strategy, you need to know
what information is measured during the runtime and how it is represented in
-the load balancing database data structure?
+the load balancing database data structure.

There are mainly 3 categories of information: a) processor information including
processor speed, background load; b) object information including per object
-cpu/wallclock compute time and c) communication information .
+CPU/WallClock compute time and c) communication information.

The database data structure named {\kw LDStats} is defined in {\em CentralLB.h}:

@@ -181,7 +307,7 @@ The database data structure named {\kw LDStats} is defined in {\em CentralLB.h}:
\end{verbatim}

\begin{enumerate}
-\item {\em LBRealType} is the data type for load balancer measured time. It is "double" by default. User can specify the type to float if wanted at Charm++ compile time. For example, ./build charm++ net-linux-x86\_64 {-}{-}with-lbtime-type=float;
+\item {\em LBRealType} is the data type for load balancer measured time. It is "double" by default. The user can specify the type as float at \charmpp{} compile time if desired. For example, ./build charm++ net-linux-x86\_64 {-}{-}with-lbtime-type=float;

\item {\em procs} array defines processor attributes and usage data for each
processor;

\item {\em objData} array records per object information, {\em LDObjData} is defined
in {\em lbdb.h};

diff --git a/doc/charm++/advancedpup.tex b/doc/charm++/advancedpup.tex
new file mode 100644 (file)
index 0000000..1e98196
--- /dev/null
@@ -0,0 +1,528 @@
+This section describes advanced functionality in the PUP framework.
+The first subsections describe features supporting complex objects,
+with multiple levels of inheritance, or with dynamic changes in heap
+usage. The latter subsections describe additional language bindings,
+and features supporting PUP modes which can be used to copy object
+state from and to long-term storage for checkpointing, or other
+application-level purposes.
+
+\section{Dynamic Allocation}
+\label{sec:pupdynalloc}
+
+If your class has fields that are dynamically allocated, these need to
+be allocated (in the usual way) during unpacking, before you pup them.
+Deallocation should be left to the class destructor as usual.
+
+\subsection{No allocation}
+
+The simplest case is when there is no dynamic allocation.
+\begin{alltt}
+class keepsFoo : public mySuperclass \{
+private:
+ foo f; /* simple foo object*/
+public:
+ keepsFoo(void) \{ \}
+ void pup(PUP::er &p) \{
+ mySuperclass::pup(p);
+ p|f; // pup f's fields (calls f.pup(p);)
+ \}
+ ~keepsFoo() \{ \}
+\};
+\end{alltt}
+
+\subsection{Allocation outside pup}
+
+The next simplest case is when we contain a class
+that is always allocated during our constructor,
+and deallocated during our destructor. Then no allocation
+is needed within the pup routine.
+\begin{alltt}
+class keepsHeapFoo : public mySuperclass \{
+private:
+ foo *f; /*Heap-allocated foo object*/
+public:
+ keepsHeapFoo(void) \{
+ f=new foo;
+ \}
+ void pup(PUP::er &p) \{
+ mySuperclass::pup(p);
+ p|*f; // pup f's fields (calls f->pup(p))
+ \}
+ ~keepsHeapFoo() \{delete f;\}
+\};
+\end{alltt}
+
+\subsection{Allocation during pup}
+
+If we need values obtained during the pup routine
+before we can allocate the class, we must
+allocate the class inside the pup routine.
+Be sure to protect the allocation with ``if (p.isUnpacking())''.
+\begin{alltt}
+class keepsOneFoo : public mySuperclass \{
+private:
+ foo *f; /*Heap-allocated foo object*/
+public:
+ keepsOneFoo(...)
\{f=new foo(...);\}
+ keepsOneFoo() \{f=NULL;\} /* pup constructor */
+ void pup(PUP::er &p) \{
+ mySuperclass::pup(p);
+ ...
+ if (p.isUnpacking()) /* must allocate foo now */
+ f=new foo(...);
+ p|*f;//pup f's fields
+ \}
+ ~keepsOneFoo() \{delete f;\}
+\};
+\end{alltt}
+
+\subsection{Allocatable array}
+
+For example, if we keep an array of doubles,
+we need to know how many doubles there are
+before we can allocate the array. Hence we must
+first pup the array length, do our allocation,
+and then pup the array data. We could allocate memory using
+malloc/free or other allocators in exactly the same way.
+\begin{alltt}
+class keepsDoubles : public mySuperclass \{
+private:
+ int n;
+ double *arr;/*new'd array of n doubles*/
+public:
+ keepsDoubles(int n_) \{
+ n=n_;
+ arr=new double[n];
+ \}
+ keepsDoubles() \{ arr=NULL; \} /* pup constructor */
+
+ void pup(PUP::er &p) \{
+ mySuperclass::pup(p);
+ p|n;//pup the array length n
+ if (p.isUnpacking()) arr=new double[n];
+ PUParray(p,arr,n); //pup data in the array
+ \}
+
+ ~keepsDoubles() \{delete[] arr;\}
+\};
+\end{alltt}
+
+\subsection{NULL object pointer}
+
+If our allocated object may be NULL, our allocation
+becomes much more complicated. We must first check
+and pup a flag to indicate whether the object exists,
+then depending on the flag, pup the object.
+\begin{alltt}
+class keepsNullFoo : public mySuperclass \{
+private:
+ foo *f; /*Heap-allocated foo object, or NULL*/
+public:
+ keepsNullFoo(...) \{ if (...) f=new foo(...);\}
+ keepsNullFoo() \{f=NULL;\}
+ void pup(PUP::er &p) \{
+ mySuperclass::pup(p);
+ int has_f=(f!=NULL);
+ p|has_f;
+ if (has_f) \{
+ if (p.isUnpacking()) f=new foo;
+ p|*f;
+ \} else \{
+ f=NULL;
+ \}
+ \}
+ ~keepsNullFoo() \{delete f;\}
+\};
+\end{alltt}
+
+This sort of code is normally much longer and more
+error-prone if split into the various packing/unpacking cases.
+
+\subsection{Array of classes}
+
+An array of actual classes can be treated exactly the same way
+as an array of basic types.
PUParray will pup each
+element of the array properly, calling the appropriate \verb.operator|..
+\begin{alltt}
+class keepsFoos : public mySuperclass \{
+private:
+ int n;
+ foo *arr;/*new'd array of n foos*/
+public:
+ keepsFoos(int n_) \{
+ n=n_;
+ arr=new foo[n];
+ \}
+ keepsFoos() \{ arr=NULL; \}
+
+ void pup(PUP::er &p) \{
+ mySuperclass::pup(p);
+ p|n;//pup the array length n
+ if (p.isUnpacking()) arr=new foo[n];
+ PUParray(p,arr,n); //pup each foo in the array
+ \}
+
+ ~keepsFoos() \{delete[] arr;\}
+\};
+\end{alltt}
+
+
+\subsection{Array of pointers to classes}
+
+An array of pointers to classes must handle each element
+separately, since the PUParray routine does not work with
+pointers. An ``allocate'' routine to set up the array
+could simplify this code. More ambitious is to construct
+a ``smart pointer'' class that includes a pup routine.
+\begin{alltt}
+class keepsFooPtrs : public mySuperclass \{
+private:
+ int n;
+ foo **arr;/*new'd array of n pointer-to-foos*/
+public:
+ keepsFooPtrs(int n_) \{
+ n=n_;
+ arr=new foo*[n]; // allocate array
+ for (int i=0;i<n;i++) arr[i]=new foo(...); // allocate i'th foo
+ \}
+ keepsFooPtrs() \{ arr=NULL; \}
+
+ void pup(PUP::er &p) \{
+ mySuperclass::pup(p);
+ p|n;//pup the array length n
+ if (p.isUnpacking()) arr=new foo*[n]; // allocate array
+ for (int i=0;i<n;i++) \{
+ if (p.isUnpacking()) arr[i]=new foo(...); // allocate i'th foo
+ p|*arr[i]; //pup the i'th foo
+ \}
+ \}
+
+ ~keepsFooPtrs() \{
+ for (int i=0;i<n;i++) delete arr[i];
+ delete[] arr;
+ \}
+\};
+\end{alltt}
+
+Note that this will not properly handle the case where
+some elements of the array are actually subclasses of foo,
+with virtual methods. The PUP::able framework described
+in the next section can be helpful in this case.
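The allocate-on-unpack discipline used throughout these examples can be exercised outside Charm++ with a toy byte-buffer pupper. ToyPupper below is an illustrative stand-in, not the real PUP API (the real sizing, packing, and unpacking classes are covered later under Common PUP::ers); KeepsDoubles repeats the allocatable-array pattern against it.

```cpp
#include <cstring>
#include <vector>

// Toy stand-in for a PUP::er: packs values into a byte buffer, then
// replays them on unpack. Real code would use PUP::toMem/PUP::fromMem.
class ToyPupper {
  std::vector<unsigned char> buf;
  size_t pos;
  bool unpacking;
public:
  ToyPupper() : pos(0), unpacking(false) {}
  void startUnpacking() { pos = 0; unpacking = true; }
  bool isUnpacking() const { return unpacking; }
  template <class T> ToyPupper &operator|(T &v) {
    if (unpacking) {
      std::memcpy(&v, &buf[pos], sizeof(T));  // restore the value
      pos += sizeof(T);
    } else {
      const unsigned char *c = (const unsigned char *)&v;
      buf.insert(buf.end(), c, c + sizeof(T)); // save the value
    }
    return *this;
  }
};

// The keepsDoubles pattern: pup the length first, allocate on unpack,
// then pup the payload.
class KeepsDoubles {
public:
  int n;
  double *arr;
  KeepsDoubles(int n_) : n(n_), arr(new double[n_]) {}
  KeepsDoubles() : n(0), arr(0) {}         // "pup constructor"
  ~KeepsDoubles() { delete[] arr; }
  void pup(ToyPupper &p) {
    p | n;                                 // length first...
    if (p.isUnpacking()) arr = new double[n];  // ...so we can allocate
    for (int i = 0; i < n; i++) p | arr[i];    // then the data
  }
};
```

Packing an initialized object and unpacking into a default-constructed one round-trips the contents, which is exactly what happens across a migration.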
+
+
+\section{Subclass allocation via PUP::able}
+
+\label{sec:pup::able}
+If the class \uw{foo} above might have been a subclass, instead of
+simply using \uw{new foo} above we would have had to allocate
+an object of the appropriate subclass. Since determining the
+proper subclass and calling the appropriate constructor yourself can be
+difficult, the PUP framework provides a scheme for automatically
+determining and dynamically allocating subobjects of the appropriate type.
+
+Your superclass must inherit from \kw{PUP::able}, which provides
+the basic machinery used to move the class.
+A concrete superclass and all its concrete subclasses require these
+four features:
+
+\begin{itemize}
+\item A line declaring \kw{PUPable \uw{className};} in the .ci file.
+This registers the class's constructor.
+
+\item A call to the macro \kw{PUPable\_decl(\uw{className})} in the
+class's declaration, in the header file. This adds a virtual
+method to your class to allow \kw{PUP::able} to determine your class's type.
+
+\item A migration constructor---a constructor that takes \kw{CkMigrateMessage *}.
+This is used to create the new object on the receive side, immediately
+before calling the new object's \kw{pup} routine.
+
+\item A working, virtual \kw{pup} method. You can omit this if your
+class has no data that needs to be packed.
+\end{itemize}
+
+An abstract superclass---a superclass that will never actually be
+packed---only needs to inherit from \kw{PUP::able} and include a
+\kw{PUPable\_abstract(\uw{className})} macro in its body. For
+these abstract classes, the
+.ci file, \kw{PUPable\_decl} macro, and constructor are not needed.
+
+For example, if \uw{parent} is a concrete superclass and \uw{child} its
+subclass,
+
+\begin{alltt}
+//In the .ci file:
+ PUPable parent;
+ PUPable child; //Could also have said ``PUPable parent, child;''
+
+//In the .h file:
+class parent : public PUP::able \{
+ ... data members ...
+public:
+ ... other methods ...
+ parent() \{...\} + + //PUP::able support: decl, migration constructor, and pup + PUPable\_decl(parent); + parent(CkMigrateMessage *m) : PUP::able(m) \{\} + virtual void pup(PUP::er &p) \{ + PUP::able::pup(p);//Call base class + ... pup data members as usual ... + \} +\}; +class child : public parent \{ + ... more data members ... +public: ... more methods, possibly virtual ... + child() \{...\} + + //PUP::able support: decl, migration constructor, and pup + PUPable\_decl(child); + child(CkMigrateMessage *m) : parent(m) \{\} + virtual void pup(PUP::er &p) \{ + parent::pup(p);//Call base class + ... pup child's data members as usual ... + \} +\}; + +\end{alltt} + +With these declarations, then, we can automatically +allocate and pup a pointer to a parent or child +using the vertical bar \kw{PUP::er} syntax, which on the receive +side will create a new object of the appropriate type: + +\begin{alltt} +class keepsParent \{ + parent *obj; //May actually point to a child class (or be NULL) +public: + ... + ~keepsParent() \{ + delete obj; + \} + void pup(PUP::er &p) + \{ + p|obj; + \} +\}; +PUPmarshall(keepsParent); +\end{alltt} + +This will properly pack, allocate, and unpack obj whether +it is actually a parent or child object. The child class +can use all the usual \CC\ features, such as virtual functions +and extra private data. + +If obj is NULL when packed, it will be restored to NULL when unpacked. +For example, if the nodes of a binary tree are \kw{PUP::able}, +one may write a recursive pup routine for the tree quite easily: + +\begin{alltt} +// In the .ci file: + PUPable treeNode; + +// In the .h file +class treeNode : public PUP::able \{ + treeNode *left;//Left subtree + treeNode *right;//Right subtree + ... other fields ... 
+public: + treeNode(treeNode *l=NULL, treeNode *r=NULL); + ~treeNode() \{delete left; delete right;\} + + // The usual PUP::able support: + PUPable\_decl(treeNode); + treeNode(CkMigrateMessage *m) : PUP::able(m) \{ left=right=NULL; \} + void pup(PUP::er &p) \{ + PUP::able::pup(p);//Call base class + p|left; + p|right; + ... pup other fields as usual ... + \} +\}; +\end{alltt} + +This same implementation will also work properly even if the tree's +internal nodes are actually subclasses of treeNode. + +You may prefer to use the macros \kw{PUPable\_def(\uw{className})} +and \kw{PUPable\_reg(\uw{className})} rather than using \kw{PUPable} +in the .ci file. \kw{PUPable\_def} provides routine definitions used +by the \kw{PUP::able} machinery, and should be included in exactly one +source file at file scope. \kw{PUPable\_reg} registers this class +with the runtime system, and should be executed exactly once per node +during program startup. + +Finally, a \kw{PUP::able} superclass like \uw{parent} above +must normally be passed around via a pointer or reference, because the object +might actually be some subclass like \uw{child}. Because +pointers and references cannot be passed across processors, +for parameter marshalling you must use the special templated +smart pointer classes \kw{CkPointer} and \kw{CkReference}, +which only need to be listed in the .ci file. + +A \kw{CkReference} is a read-only reference to a \kw{PUP::able} object---it +is only valid for the duration of the method call. A \kw{CkPointer} +transfers ownership of the unmarshalled \kw{PUP::able} to the method, so the +pointer can be kept and the object used indefinitely. 
+ +For example, if the entry method \uw{bar} needs a \kw{PUP::able} \uw{parent} +object for in-call processing, you would use a \kw{CkReference} like this: + +\begin{alltt} +// In the .ci file: + entry void barRef(int x,CkReference<parent> p); + +// In the .h file: + void barRef(int x,parent &p) \{ + // can use p here, but only during this method invocation + \} +\end{alltt} + +If the entry method needs to keep its parameter, use a \kw{CkPointer} like this: +\begin{alltt} +// In the .ci file: + entry void barPtr(int x,CkPointer<parent> p); + +// In the .h file: + void barPtr(int x,parent *p) \{ + // can keep this pointer indefinitely, but must eventually delete it + \} +\end{alltt} + +Both \kw{CkReference} and \kw{CkPointer} are read-only from the send +side---unlike messages, which are consumed when sent, the same object +can be passed to several parameter marshalled entry methods. +In the example above, we could do: + +\begin{alltt} + parent *p=new child; + someProxy.barRef(x,*p); + someProxy.barPtr(x,p); // Makes a copy of p + delete p; // We allocated p, so we destroy it. +\end{alltt} + + +\section{C and Fortran bindings} + +C and Fortran programmers can use a limited subset of the +\kw{PUP::er} capability. The routines all take a +handle named \kw{pup\_er}. The routines +have the prototype: +\begin{alltt} +void pup\_\kw{type}(pup\_er p,\kw{type} *val); +void pup\_\kw{type}s(pup\_er p,\kw{type} *vals,int nVals); +\end{alltt} +The first call is for use with a single element; +the second call is for use with an array. +The supported types are char, short, int, long, +uchar, ushort, uint, ulong, float, and double, +which all have the usual C meanings. + +A byte-packing routine +\begin{alltt} +void pup\_bytes(pup\_er p,void *data,int nBytes); +\end{alltt} +is also provided, but its use is discouraged +for cross-platform puping. + +\kw{pup\_isSizing}, \kw{pup\_isPacking}, \kw{pup\_isUnpacking}, +and \kw{pup\_isDeleting} calls are also available. 
+Since C and Fortran have no destructors, you should +actually deallocate all data when passed a deleting \kw{pup\_er}. + +C and Fortran users cannot use \kw{PUP::able} objects or +seeking, nor write custom \kw{PUP::er}s. Using the \CC\ +interface is recommended. + + + +\section{Common PUP::ers} +\label{sec:PUP:CommonPUPers} +The most common \kw{PUP::er}s used are \kw{PUP::sizer}, +\kw{PUP::toMem}, and \kw{PUP::fromMem}. These are sizing, +packing, and unpacking \kw{PUP::er}s, respectively. + +\kw{PUP::sizer} simply sums up the sizes of the native +binary representation of the objects it is passed. +\kw{PUP::toMem} copies the binary representation of the +objects passed into a preallocated contiguous memory buffer. +\kw{PUP::fromMem} copies binary data from a contiguous memory +buffer into the objects passed. All three support the +\kw{size} method, which returns the number of bytes used +by the objects seen so far. + +Other common \kw{PUP::er}s are \kw{PUP::toDisk}, +\kw{PUP::fromDisk}, and \kw{PUP::xlater}. The first +two are simple filesystem variants of the \kw{PUP::toMem} +and \kw{PUP::fromMem} classes; \kw{PUP::xlater} translates +binary data from an unpacking PUP::er into the machine's +native binary format, based on a \kw{machineInfo} structure +that describes the format used by the source machine. + +An example of \kw{PUP::toDisk} is available in \examplerefdir{PUP/pupDisk}. + +\section{PUP::seekBlock} + +Occasionally you may need items to be unpacked +in a different order than they were packed; that is, you +need a seek capability. \kw{PUP::er}s support a limited +form of seeking. + +To begin a seek block, create a \kw{PUP::seekBlock} object +with your current PUP::er and the number of ``sections'' to +create. Seek to a (0-based) section number +with the \kw{seek} method, and end the seeking with the \kw{endBlock} method.
+For example, if we have two objects A and B, where A's pup +depends on and affects some object B, we can pup the two with: + +\begin{alltt} +void pupAB(PUP::er &p) +\{ + ... other fields ... + PUP::seekBlock s(p,2); //2 seek sections + if (p.isUnpacking()) + \{//In this case, pup B first + s.seek(1); + B.pup(p); + \} + s.seek(0); + A.pup(p,B); + + if (!p.isUnpacking()) + \{//In this case, pup B last + s.seek(1); + B.pup(p); + \} + s.endBlock(); //End of seeking block + ... other fields ... +\} +\end{alltt} + +Note that without the seek block, A's fields would be unpacked +over B's memory, with disastrous consequences. +The packing or sizing path must traverse the seek sections +in numerical order; the unpack path may traverse them in any +order. There is currently a small fixed limit of 3 on the +maximum number of seek sections. + + +\section{Writing a PUP::er} + +System-level programmers may occasionally find it useful to define +their own \kw{PUP::er} objects. The system \kw{PUP::er} class is +an abstract base class that funnels all incoming pup requests +to a single subroutine: + +\begin{alltt} + virtual void bytes(void *p,int n,size\_t itemSize,dataType t); +\end{alltt} + +The parameters are, in order, the field address, the number of items, +the size of each item, and the type of the items. The \kw{PUP::er} +is allowed to use these fields in any way. However, an isSizing +or isPacking PUP::er may not modify the referenced user data, +while an isUnpacking PUP::er may not read the original values of +the user data. If your PUP::er is not clearly packing (saving values +to some format) or unpacking (restoring values), declare it as a +sizing \kw{PUP::er}.
+ index bab756e70b3b48c482ed47674ae21f2eb9509150..cf3479af3ef7c0b13876c50712597441866ef8ae 100644 (file) @@ -1,5 +1,4 @@ -\subsection{All-to-All} - +\section{All-to-All} All-to-All is a frequently encountered pattern of communication in parallel programs where each processing element sends a message to every other processing element. Variations on this pattern are also @@ -13,7 +12,7 @@ Note that we are currently extending support for All-to-All communication in Charm++ and so the API may change in the future. -\subsubsection{MeshStreamer} +\subsection{MeshStreamer} MeshStreamer optimizes the case of All-to-All and Many-to-Many communication on regular 2D and 3D machine topologies. Messages sent index e57fe181988e09fc33ff23e7287f1c1adf6a000c..9f97da63ec5477d9eca85f2b6be959b332d55039 100644 (file) @@ -1,25 +1,27 @@ -\subsection{Basic Arrays} - +%\subsection{Chare Arrays} \label{basic arrays} -Arrays \index{arrays} are arbitrarily-sized collections of chares. The -entire array has a globally unique identifier of type \kw{CkArrayID}, and -each element has a unique index of type \kw{CkArrayIndex}. A \kw{CkArrayIndex} -can be a single integer (i.e. 1D array), several integers (i.e. a -multidimensional array), or an arbitrary string of bytes (e.g. a binary tree -index). +Chare arrays\index{chare array}\index{chare arrays}\index{arrays} are +arbitrarily-sized, possibly-sparse collections of chares that are distributed +across the processors. The entire array has a globally unique identifier of +type \kw{CkArrayID}, and each element has a unique index of type +\kw{CkArrayIndex}. A \kw{CkArrayIndex} can be a single integer (i.e. a one-dimensional array), +several integers (i.e. a multi-dimensional array), or an arbitrary string of +bytes (e.g. a binary tree index). -Array elements can be dynamically created and destroyed on any processor, -and messages for the elements will still arrive properly. 
-Array elements can be migrated at any time, allowing arrays to be efficiently -load balanced. Array elements can also receive array broadcasts and -contribute to array reductions. +Array elements can be dynamically created and destroyed on any PE, +migrated between PEs, and messages for the elements will still arrive +properly. Array elements can be migrated at any time, allowing arrays to be +efficiently load balanced. A chare array (or a subset of array elements) can +receive a broadcast/multicast or contribute to a reduction. -\subsubsection{Declaring a 1D Array} +An example program can be found here: \examplerefdir{array}. -You can declare a one-dimensional \index{array}\index{chare array}chare array -as: +\section{Declaring a One-dimensional Array} +You can declare a one-dimensional (1D) \index{array}\index{chare array}chare +array as: +% \begin{alltt} //In the .ci file: array [1D] A \{ @@ -27,16 +29,23 @@ array [1D] A \{ entry void someEntry(\uw{parameters2}); \}; \end{alltt} - -Just as every Chare inherits from the system class \kw{CBase}\_\uw{ClassName}, every -array element inherits from the system class \kw{CBase}\_\uw{ClassName}. -Just as a Chare inherits thishandle'', each -array element inherits thisArrayID'', the \kw{CkArrayID} of its array, -and thisIndex'', the element's array index. -As well as chares are allowed to inherit directly from class \kw{Chare}, -array elements are allowed to inherit from \kw{ArrayElement1D} if 1D array, -\kw{ArrayElement2D} if 2D array, and so on up to 6D. - +% +Array elements extend the system class \kw{CBase}\_\uw{ClassName}, inheriting +several fields: +% +\begin{itemize} +\item \kw{thisProxy}: the proxy to the entire chare array that can be indexed + to obtain a proxy to a specific array element (e.g. 
for a 1D chare array + \kw{thisProxy[10]}; for a 2D chare array \kw{thisProxy(10, 20)}) +\item \kw{thisArrayID}: the array's globally unique identifier +\item \kw{thisIndex}: the element's array index (an array element can obtain a + proxy to itself like this: \kw{thisProxy[thisIndex]}) +\end{itemize} +% +\zap{As well as chares are allowed to inherit directly from class \kw{Chare}, + array elements are allowed to inherit from \kw{ArrayElement1D} if 1D array, + \kw{ArrayElement2D} if 2D array, and so on up to 6D.} +% \begin{alltt} class A : public CBase\_A \{ public: @@ -46,1042 +55,392 @@ class A : public CBase\_A \{ void someEntry(\uw{parameters2}); \}; \end{alltt} - -Note \uw{A}'s odd migration constructor, which is normally empty: - +% +Note that \uw{A} must have a \emph{migration constructor}, which is typically +empty: +% \begin{alltt} //In the .C file: A::A(void) \{ - //...your constructor code... + //... constructor code ... \} -A::A(CkMigrateMessage *m) \{ \} -\end{alltt} -Read the section Migratable Array Elements'' for more -information on the \kw{CkMigrateMessage} constructor. +A::A(CkMigrateMessage *m) \{ /* the migration constructor */ \} +void A::someEntry(\uw{parameters2}) +\{ + // ... code for someEntry ... +\} +\end{alltt} +% +See the section~\ref{arraymigratable} on migratable array elements for more +information on the migration constructor that takes \kw{CkMigrateMessage *} as +the argument. -\subsubsection{Creating a Simple Array} +\section{Declaring Multi-dimensional Arrays} -\label{basic array creation} +\charmpp{} supports multi-dimensional or user-defined indices.
These array types +can be declared as: +% +\begin{alltt} +//In the .ci file: +array [1D] ArrayA \{ entry ArrayA(); entry void e(\uw{parameters});\} +array [2D] ArrayB \{ entry ArrayB(); entry void e(\uw{parameters});\} +array [3D] ArrayC \{ entry ArrayC(); entry void e(\uw{parameters});\} +array [4D] ArrayD \{ entry ArrayD(); entry void e(\uw{parameters});\} +array [5D] ArrayE \{ entry ArrayE(); entry void e(\uw{parameters});\} +array [6D] ArrayF \{ entry ArrayF(); entry void e(\uw{parameters});\} +array [Foo] ArrayG \{ entry ArrayG(); entry void e(\uw{parameters});\} +\end{alltt} +% +The last declaration expects an array index of type \kw{CkArrayIndex}\uw{Foo}, +which must be defined before including the \texttt{.decl.h} file (see +section~\ref{user-defined array index type} on user-defined array indices for +more information). +% +\begin{alltt} +//In the .h file: +class ArrayA : public CBase\_ArrayA \{ public: ArrayA()\{\} ...\}; +class ArrayB : public CBase\_ArrayB \{ public: ArrayB()\{\} ...\}; +class ArrayC : public CBase\_ArrayC \{ public: ArrayC()\{\} ...\}; +class ArrayD : public CBase\_ArrayD \{ public: ArrayD()\{\} ...\}; +class ArrayE : public CBase\_ArrayE \{ public: ArrayE()\{\} ...\}; +class ArrayF : public CBase\_ArrayF \{ public: ArrayF()\{\} ...\}; +class ArrayG : public CBase\_ArrayG \{ public: ArrayG()\{\} ...\}; +\end{alltt} +% +The fields in \kw{thisIndex} are different depending on the dimensionality of +the chare array: +% +\begin{itemize} +\item 1D array: \kw{thisIndex} +\item 2D array ($x$,$y$): \kw{thisIndex.x}, \kw{thisIndex.y} +\item 3D array ($x$,$y$,$z$): \kw{thisIndex.x}, \kw{thisIndex.y}, + \kw{thisIndex.z} +\item 4D array ($w$,$x$,$y$,$z$): \kw{thisIndex.w}, \kw{thisIndex.x}, + \kw{thisIndex.y}, \kw{thisIndex.z} +\item 5D array ($v$,$w$,$x$,$y$,$z$): \kw{thisIndex.v}, \kw{thisIndex.w}, + \kw{thisIndex.x}, \kw{thisIndex.y}, \kw{thisIndex.z} +\item 6D array ($x_1$,$y_1$,$z_1$,$x_2$,$y_2$,$z_2$): \kw{thisIndex.x1}, + \kw{thisIndex.y1}, 
\kw{thisIndex.z1}, \kw{thisIndex.x2}, \kw{thisIndex.y2}, + \kw{thisIndex.z2} +\item Foo array: \kw{thisIndex} +\end{itemize} -You always create an array using the \kw{CProxy\_Array::ckNew} -routine. This returns a proxy object, which can -be kept, copied, or sent in messages. -To create a 1D \index{array}array containing elements indexed -(0, 1, ..., \uw{num\_elements}-1), use: +\section{Creating an Array} +\label{basic array creation} +An array is created using the \kw{CProxy\_Array::ckNew} routine. This returns a +proxy object, which can be kept, copied, or sent in messages. The following +creates a 1D \index{array}array containing elements indexed (0, 1, \ldots, +\uw{dimX}-1): +% \begin{alltt} -CProxy_A1 a1 = CProxy_A1::ckNew(\uw{parameters},num_elements); +CProxy_ArrayA a1 = CProxy_ArrayA::ckNew(\uw{parameters}, dimX); \end{alltt} +% +Likewise, a dense multidimensional array can be created by passing the extents +at creation time to \kw{ckNew}. +% +\begin{alltt} +CProxy_ArrayB a2 = CProxy_ArrayB::ckNew(\uw{parameters}, dimX, dimY); +CProxy_ArrayC a3 = CProxy_ArrayC::ckNew(\uw{parameters}, dimX, dimY, dimZ); +\end{alltt} +% +For 4D, 5D, 6D and user-defined arrays, this functionality cannot be used. The +array elements must be inserted individually as described in +section~\ref{dynamic_insertion}. -The constructor is invoked on each array element. -For creating higher-dimensional arrays, or for more options -when creating the array, see section~\ref{advanced array create}. - - -\subsubsection{Messages} +During creation, the constructor is invoked on each array element. For more +options when creating the array, see section~\ref{advanced array create}. -An array proxy responds to the appropriate index call-- -for 1D arrays, use [i] or (i); for 2D use (x,y); for 3D -use (x,y,z); and for user-defined types use [f] or (f). 
+\section{Entry Method Invocation} -To send a \index{Array message} message to an array element, index the proxy +To obtain a proxy to a specific element in a chare array, the chare array proxy +(e.g. \kw{thisProxy}) must be indexed by the appropriate index call depending +on the dimensionality of the array: +% +\begin{itemize} +\item 1D array, to obtain a proxy to element $i$: \kw{thisProxy[$i$]} or + \kw{thisProxy($i$)} +\item 2D array, to obtain a proxy to element $(i,j)$: \kw{thisProxy($i$,$j$)} +\item 3D array, to obtain a proxy to element $(i,j,k)$: \kw{thisProxy($i$,$j$,$k$)} +\item 4D array, to obtain a proxy to element $(i,j,k,l)$: + \kw{thisProxy($i$,$j$,$k$,$l$)} +\item 5D array, to obtain a proxy to element $(i,j,k,l,m)$: + \kw{thisProxy($i$,$j$,$k$,$l$,$m$)} +\item 6D array, to obtain a proxy to element $(i,j,k,l,m,n)$: + \kw{thisProxy($i$,$j$,$k$,$l$,$m$,$n$)} +\item User-defined array, to obtain a proxy to element $i$: \kw{thisProxy[$i$]} + or \kw{thisProxy($i$)} +\end{itemize} +% +To send a \index{Array message} message to an array element, index the proxy and call the method name: - +% \begin{alltt} a1[i].doSomething(\uw{parameters}); a3(x,y,z).doAnother(\uw{parameters}); aF[CkArrayIndexFoo(...)].doAgain(\uw{parameters}); \end{alltt} -You may invoke methods on array elements that have not yet -been created-- by default, the system will buffer the message until the -element is created\footnote{However, the element must eventually be -created-- i.e., within a 3-minute buffering period.}. - -Messages are not guarenteed to be delivered in order. -For example, if I invoke a method A, then method B; -it is possible for B to be executed before A. +You may invoke methods on array elements that have not yet been created. The +\charmpp{} runtime system will buffer the message until the element is +created. +%\footnote{However, the element must eventually be created (i.e.
within +%a 3-minute buffering period).} +\footnote{However, the element must eventually be created.} +Messages are not guaranteed to be delivered in order. For instance, if method +\kw{A} is invoked and then method \kw{B}, it is possible that \kw{B} +is executed before \kw{A}. +% \begin{alltt} a1[i].A(); a1[i].B(); \end{alltt} -Messages sent to migrating elements will be delivered after -the migrating element arrives. It is an error to send -a message to a deleted array element. +Messages sent to migrating elements will be delivered after the migrating +element arrives on the destination PE. It is an error to send a message +to a deleted array element. +\section{Broadcasts on Chare Arrays} -\subsubsection{Broadcasts} - -To \index{Array broadcast} broadcast a message to all the current elements of an array, -simply omit the index, as: - +To \index{array broadcast} broadcast a message to all the current elements of +an array, simply omit the index (invoke an entry method on the chare array +proxy): +% \begin{alltt} a1.doIt(\uw{parameters}); //<- invokes doIt on each array element \end{alltt} +% +The broadcast message will be delivered to every existing array element exactly +once. Broadcasts work properly even with ongoing migrations, insertions, and +deletions. -The broadcast message will be delivered to every existing array -element exactly once. Broadcasts work properly even with ongoing -migrations, insertions, and deletions. - - -\subsubsection{Reductions on Chare Arrays} +\section{Reductions on Chare Arrays} +\label{reductions} A \index{array reduction}reduction applies a single operation (e.g. add, max, min, ...) to data items scattered across many processors and -collects the result in one place. \charmpp{} supports reductions on the -elements of a Chare array.
- -The data to be reduced comes from each array element, -which must call the \kw{contribute} method: - -\begin{alltt} -ArrayElement::contribute(int nBytes,const void *data,CkReduction::reducerType type); -\end{alltt} - -Reductions are described in more detail in Section~\ref{reductions}. - - -\subsubsection{Destroying Arrays} - -To destroy an array element-- detach it from the array, -call its destructor, and release its memory--invoke its -\kw{Array destroy} method, as: - -\begin{alltt} -a1[i].ckDestroy(); -\end{alltt} - -You must ensure that no messages are sent to a deleted element. -After destroying an element, you may insert a new element at -its index. - - - - -\subsection{Advanced Arrays} - -\label{advanced arrays} - -The basic array features described above (creation, messaging, -broadcasts, and reductions) are needed in almost every -\charmpp{} program. The more advanced techniques that follow -are not universally needed; but are still often useful. - - -\subsubsection{Declaring Multidimensional, or User-defined Index Arrays} - -\charmpp{} contains direct support for multidimensional and -even user-defined index arrays. These arrays can be declared as: - -\begin{alltt} -//In the .ci file: -message MyMsg; -array [1D] A1 \{ entry A1(); entry void e(\uw{parameters});\} -array [2D] A2 \{ entry A2(); entry void e(\uw{parameters});\} -array [3D] A3 \{ entry A3(); entry void e(\uw{parameters});\} -array [4D] A4 \{ entry A4(); entry void e(\uw{parameters});\} -array [5D] A5 \{ entry A5(); entry void e(\uw{parameters});\} -array [6D] A6 \{ entry A6(); entry void e(\uw{parameters});\} -array [Foo] AF \{ entry AF(); entry void e(\uw{parameters});\} -\end{alltt} - -The last declaration expects an array index of type \kw{CkArrayIndex}\uw{Foo}, -which must be defined before including the \texttt{.decl.h} file -(see User-defined array index type'' below). +collects the result in one place. \charmpp{} supports reductions +over the members of an array or group. 
+The data to be reduced comes from a call to the member \kw{contribute} +method: \begin{alltt} -//In the .h file: -class A1 : public CBase\_A1 \{ public: A1()\{\} ...\}; -class A2 : public CBase\_A2 \{ public: A2()\{\} ...\}; -class A3 : public CBase\_A3 \{ public: A3()\{\} ...\}; -class A4 : public CBase\_A4 \{ public: A4()\{\} ...\}; -class A5 : public CBase\_A5 \{ public: A5()\{\} ...\}; -class A6 : public CBase\_A6 \{ public: A6()\{\} ...\}; -class AF : public CBase\_AF \{ public: AF()\{\} ...\}; -\end{alltt} - -A 1D array element can access its index via its inherited thisIndex'' -field; a 2D via thisIndex.x'' and thisIndex.y'', and a 3D via -thisIndex.x'', thisIndex.y'', and thisIndex.z''. The subfields -of 4D, 5D, and 6D are respectively \{w,x,y,z\}, \{v,w,x,y,z\}, and -\{x1,y1,z1,x2,y2,z2\}. -A user-defined index array can access its index as thisIndex''. - - -Likewise, you can create a dense multidimensional array by passing the -extents at creation time to \kw{ckNew}. - -\begin{alltt} -CProxy_A1 a1 = CProxy_A1::ckNew(parameters, num_elements); -CProxy_A2 a2 = CProxy_A2::ckNew(parameters, num_rows, num_colums); -CProxy_A3 a3 = CProxy_A3::ckNew(parameters, num_rows, num_columns, num_depth); +void contribute(int nBytes, const void *data, CkReduction::reducerType type); \end{alltt} -For 4D, 5D, 6D and user-defined arrays, this functionality cannot be used. -You need to insert the array elements individually (Section~\ref{dynamic_insertion}). - -\subsubsection{Advanced Array Creation} - -\label{advanced array create} -There are several ways to control the array creation process. -You can adjust the map and bindings before creation, change -the way the initial array elements are created, create elements -explicitly during the computation, and create elements implicitly, -on demand''. - -You can create all your elements using any one of these methods, -or create different elements using different methods. 
-An array element has the same syntax and semantics no matter -how it was created. - - -\subsubsection{Advanced Array Creation: CkArrayOptions} - -\index{CkArrayOptions} -\label{CkArrayOptions} - -The array creation method \kw{ckNew} actually takes a parameter -of type \kw{CkArrayOptions}. This object describes several -optional attributes of the new array. - -The most common form of \kw{CkArrayOptions} is to set the number -of initial array elements. A \kw{CkArrayOptions} object will be -constructed automatically in this special common case. Thus -the following code segments all do exactly the same thing: - -\begin{alltt} -//Implicit CkArrayOptions - a1=CProxy_A1::ckNew(\uw{parameters},nElements); - -//Explicit CkArrayOptions - a1=CProxy_A1::ckNew(\uw{parameters},CkArrayOptions(nElements)); +This call contributes \kw{nBytes} bytes starting at \kw{data} to the +reduction \kw{type} (see Section~\ref{builtin_reduction}). Unlike sending a +message, you may use \kw{data} after the call to \kw{contribute}. All +members of the chare array or group must call \kw{contribute}, +and all of them must use the same reduction type. -//Separate CkArrayOptions - CkArrayOptions opts(nElements); - a1=CProxy_A1::ckNew(\uw{parameters},opts); -\end{alltt} -Note that the numElements'' in an array element is simply the -numElements passed in when the array was created. The true number of -array elements may grow or shrink during the course of the -computation, so numElements can become out of date. This bulk'' -constructor approach should be preferred where possible, especially -for large arrays. Bulk construction is handled via a broadcast which -will be significantly more efficient in the number of messages -required than inserting each element individually which will require -one message send per element. - -\kw{CkArrayOptions} contains a few flags that the runtime can use to -optimize handling of a given array. 
If the array elements will only -migrate at controlled points (such as periodic load balancing with -{\tt AtASync()}), this is signalled to the runtime by calling {\tt - opts.setAnytimeMigration(false)}\footnote{At present, this optimizes -broadcasts to not save old messages for immigrating chares.}. If all -array elements will be inserted by bulk creation or by {\tt - fooArray[x].insert()} calls, signal this by calling {\tt - opts.setStaticInsertion(true)} \footnote{This can enable a slightly - faster default mapping scheme.}. - -\subsubsection{Advanced Array Creation: Map Object} - -\index{array map} -\label{array map} - -You can use \kw{CkArrayOptions} to specify a map object'' -for an array. The map object is used by the array manager -to determine the home'' processor of each element. The -home processor is the processor responsible for maintaining -the location of the element. - -There is a default map object, which maps 1D array indices -in a block fashion to processors, and maps other array -indices based on a hash function. Some other mappings such as round-robin -(\kw{RRMap}) also exist, which can be used -similar to custom ones described below. - -A custom map object is implemented as a group which inherits from -\kw{CkArrayMap} and defines these virtual methods: +For example, if we want to sum each array/group member's single integer myInt, +we would use: \begin{alltt} -class CkArrayMap : public Group -\{ -public: - //... - - //Return an arrayHdl'', given some information about the array - virtual int registerArray(CkArrayIndex& numElements,CkArrayID aid); - //Return the home processor number for this element of this array - virtual int procNum(int arrayHdl,const CkArrayIndex &element); -\} + // Inside any member method + int myInt=get_myInt(); + contribute(sizeof(int),\&myInt,CkReduction::sum_int); \end{alltt} -For example, a simple 1D blockmapping scheme. Actual mapping is -handled in the procNum function. 
- -\begin{alltt} -class BlockMap : public CkArrayMap -\{ - public: - BlockMap(void) \{\} - BlockMap(CkMigrateMessage *m)\{\} - int registerArray(CkArrayIndex& numElements,CkArrayID aid) \{ - return 0; - \} - int procNum(int /*arrayHdl*/,const CkArrayIndex &idx) \{ - int elem=*(int *)idx.data(); - int penum = (elem/(32/CkNumPes())); - return penum; - \} -\}; - -\end{alltt} -Once you've instantiated a custom map object, you can use it to -control the location of a new array's elements using the -\kw{setMap} method of the \kw{CkArrayOptions} object described above. -For example, if you've declared a map object named BlockMap'': +The built-in reduction types (see below) can also handle arrays of +numbers. For example, if each element of a chare array has a pair of +doubles \uw{forces}[2] whose corresponding entries are to be added across +all elements, each element would call: \begin{alltt} -//Create the map group - CProxy_BlockMap myMap=CProxy_BlockMap::ckNew(); -//Make a new array using that map - CkArrayOptions opts(nElements); - opts.setMap(myMap); - a1=CProxy_A1::ckNew(\uw{parameters},opts); + double forces[2]; + get_my_forces(forces); + contribute(2*sizeof(double),forces,CkReduction::sum_double); \end{alltt} +This will result in a {\tt double} array of 2 elements, the first of which +contains the sum of all \uw{forces}[0] values, with the second element +holding the sum of all \uw{forces}[1] values of the chare array elements. +Note that since a C++ array name (like \uw{forces}) already decays to a +pointer, we don't use \&\uw{forces}. -\subsubsection{Advanced Array Creation: Initial Elements} - -\index{array initial} -\label{array initial} - -The map object described above can also be used to create -the initial set of array elements in a distributed fashion. -An array's initial elements are created by its map object, -by making a call to \kw{populateInitial} on each processor.
-You can create your own set of elements by creating your -own map object and overriding this virtual function of \kw{CkArrayMap}: +Typically the client entry method of a reduction takes a single argument of +type CkReductionMsg (see Section~\ref{reductionClients}). However, by giving an entry method the +\kw{reductiontarget} attribute in the {\tt .ci} file, you can instead use entry methods that take +arguments of the same type as specified by the {\em contribute} call. +When creating a callback to the +reduction target, the entry method index is generated by +{\tt CkReductionTarget(ChareClass, method\_name)} +instead of {\tt CkIndex\_ChareClass::method\_name(...)}. +For example, +the code for a typed reduction that yields an {\tt int}, would look like this: \begin{alltt} - virtual void populateInitial(int arrayHdl,int numInitial, - void *msg,CkArrMgr *mgr) -\end{alltt} + // In the .ci file... + entry [reductiontarget] void done(int result); -In this call, \kw{arrayHdl} is the value returned by \kw{registerArray}, -\kw{numInitial} is the number of elements passed to \kw{CkArrayOptions}, -\kw{msg} is the constructor message to pass, and \kw{mgr} is the -array to create. + // In some .cc file: + // Create a callback that invokes the typed reduction client + // driverProxy is a proxy to the chare object on which + // the reduction target method {\em done} is called upon completion + // of the reduction + CkCallback cb(CkReductionTarget(Driver, done), driverProxy); -\kw{populateInitial} creates new array elements using the method -\kw{void CkArrMgr::insertInitial(CkArrayIndex idx,void *ctorMsg)}. -For example, to create one row of 2D array elements on each processor, -you would write: + // Contribution to the reduction... 
+ contribute(sizeof(int), &intData, CkReduction::sum_int, cb); -\begin{alltt} -void xyElementMap::populateInitial(int arrayHdl,int numInitial, - void *msg,CkArrMgr *mgr) -\{ - if (numInitial==0) return; //No initial elements requested - - //Create each local element - int y=CkMyPe(); - for (int x=0;x<numInitial;x++) \{ - mgr->insertInitial(CkArrayIndex2D(x,y),CkCopyMsg(&msg)); + // Definition of the reduction client... + void Driver::done(int result) + \{ + CkPrintf("Reduction value: \%d", result); \} - mgr->doneInserting(); - CkFreeMsg(msg); -\} \end{alltt} -Thus calling \kw{ckNew(10)} on a 3-processor machine would result in -30 elements being created. - - -\subsubsection{Advanced Array Creation: Bound Arrays} +This will also work for arrays of data +elements ({\tt entry [reductiontarget] void done(int n, int result[n])}), +and for any user-defined type with a PUP method +(see~\ref{sec:pup}). If you know that the reduction will yield a particular +number of elements, say 3 {\tt int}s, you can also specify a reduction target which +takes 3 {\tt int}s and it will be invoked correctly. -\experimental{} -\index{bound arrays} \index{bindTo} -\label{bound arrays} -You can bind'' a new array to an existing array -using the \kw{bindTo} method of \kw{CkArrayOptions}. Bound arrays -act like separate arrays in all ways except for migration-- -corresponding elements of bound arrays always migrate together. -For example, this code creates two arrays A and B which are -bound together-- A[i] and B[i] will always be on the same processor. +Reductions do not have to specify commutative-associative operations on data; +they can also be used to signal the fact that all array/group members +have reached a certain synchronization point.
In this case, a simpler version +of contribute may be used: -\begin{alltt} -//Create the first array normally - aProxy=CProxy_A::ckNew(\uw{parameters},nElements); -//Create the second array bound to the first - CkArrayOptions opts(nElements); - opts.bindTo(aProxy); - bProxy=CProxy_B::ckNew(\uw{parameters},opts); -\end{alltt} - -An arbitrary number of arrays can be bound together-- -in the example above, we could create yet another array -C and bind it to A or B. The result would be the same -in either case-- A[i], B[i], and C[i] will always be -on the same processor. - -There is no relationship between the types of bound arrays-- -it is permissible to bind arrays of different types or of the -same type. It is also permissible to have different numbers -of elements in the arrays, although elements of A which have -no corresponding element in B obey no special semantics. -Any method may be used to create the elements of any bound -array. - -Bound arrays are often useful if A[i] and B[i] perform different -aspects of the same computation, and thus will run most efficiently -if they lie on the same processor. Bound array elements are guaranteed -to always be able to interact using \kw{ckLocal} (see -section~\ref{ckLocal for arrays}), although the local pointer must -be refreshed after any migration. This should be done during the \kw{pup} -routine. When migrated, all elements that are bound together will be created -at the new processor before \kw{pup} is called on any of them, ensuring that -a valid local pointer to any of the bound objects can be obtained during the -\kw{pup} routine of any of the others. - -For example, an array {\it Alibrary} is implemented as a library module. -It implements a certain functionality by operating on a data array {\it dest} -which is just a pointer to some user provided data. -A user defined array {\it UserArray} is created and bound to -the array {\it Alibrary} to take advanatage of the functionality provided -by the library. 
-When bound array element migrated, the {\it data} pointer in {\it UserArray} -is re-allocated in {\it pup()}, thus {\it UserArray} is responsible to refresh -the pointer {\it dest} stored in {\it Alibrary}. +%Sometimes it is not important the data to be reduced, but only the fact that all +%elements have reached a synchronization point. In this case a simpler version of +%contribute can be used: \begin{alltt} -class Alibrary: public CProxy_Alibrary \{ -public: - ... - void set_ptr(double *ptr) \{ dest = ptr; \} - virtual void pup(PUP::er &p); -private: - double *dest; // point to user data in user defined bound array -\}; - -class UserArray: public CProxy_UserArray \{ -public: - virtual void pup(PUP::er &p) \{ - p|len; - if(p.isUnpacking()) \{ - data = new double[len]; - Alibrary *myfellow = AlibraryProxy(thisIndex).ckLocal(); - myfellow->set_ptr(data); // refresh data in bound array - \} - p(data, len); - \} -private: - CProxy_Alibrary AlibraryProxy; // proxy to my bound array - double *data; // user allocated data pointer - int len; -\}; + contribute(); \end{alltt} +In all cases, the result of the reduction operation is passed to the {\em reduction +client}. Many different kinds of reduction clients can be used, as +explained in Section~\ref{reductionClients}. -\subsubsection{Advanced Array Creation: Dynamic Insertion} - -\label{dynamic_insertion} - -In addition to creating initial array elements using ckNew, -you can also -create array elements during the computation. - -You insert elements into the array by indexing the proxy -and calling insert. The insert call optionally takes -parameters, which are passed to the constructor; and a -processor number, where the element will be created. -Array elements can be inserted in any order from -any processor at any time. Array elements need not -be contiguous. +Please refer to \examplerefdir{typed\_reduction} for a working example of +reductions in Charm++. 
-If using \kw{insert} to create all the elements of the array, -you must call \kw{CProxy\_Array::doneInserting} before using -the array. +Note that the reduction will complete properly even if chare array elements are {\em migrated} +or {\em deleted} during the reduction. Additionally, when you create a new chare array element, +it is expected to contribute to the next reduction not already in progress on that +processor. -\begin{alltt} -//In the .C file: -int x,y,z; -CProxy_A1 a1=CProxy_A1::ckNew(); //Creates a new, empty 1D array -for (x=...) \{ - a1[x ].insert(\uw{parameters}); //Bracket syntax - a1(x+1).insert(\uw{parameters}); // or equivalent parenthesis syntax -\} -a1.doneInserting(); - -CProxy_A2 a2=CProxy_A2::ckNew(); //Creates 2D array -for (x=...) for (y=...) - a2(x,y).insert(\uw{parameters}); //Can't use brackets! -a2.doneInserting(); - -CProxy_A3 a3=CProxy_A3::ckNew(); //Creates 3D array -for (x=...) for (y=...) for (z=...) - a3(x,y,z).insert(\uw{parameters}); -a3.doneInserting(); - -CProxy_AF aF=CProxy_AF::ckNew(); //Creates user-defined index array -for (...) \{ - aF[CkArrayIndexFoo(...)].insert(\uw{parameters}); //Use brackets... - aF(CkArrayIndexFoo(...)).insert(\uw{parameters}); // ...or parenthesis -\} -aF.doneInserting(); +\subsection{Built-in Reduction Types} +\label{builtin_reduction} -\end{alltt} +\charmpp{} includes several built-in reduction types, used to combine +individual contributions. Any of them may be passed as an argument of type +\kw{CkReduction::reducerType} to \kw{contribute}. -The \kw{doneInserting} call starts the reduction manager (see Array -Reductions'') and load balancer (see ~\ref{lbFramework})-- since -these objects need to know about all the array's elements, they -must be started after the initial elements are inserted. -You may call \kw{doneInserting} multiple times, but only the first -call actually does anything. 
You may even \kw{insert} or \kw{destroy} -elements after a call to \kw{doneInserting}, with different semantics-- -see the reduction manager and load balancer sections for details. +The first four operations ({\tt sum}, {\tt product}, {\tt max}, and {\tt min}) work on {\tt int}, +{\tt float}, or {\tt double} data as indicated by the suffix. The logical +reductions ({\tt and}, {\tt or}) only work on integer data. All the built-in +reductions work on either single numbers (pass a pointer) or arrays-- just +pass the correct number of bytes to \kw{contribute}. -If you do not specify one, the system will choose a processor to -create an array element on based on the current map object. +\begin{enumerate} +\item \kw{CkReduction::nop}-- no operation performed. +\item \kw{CkReduction::sum\_int}, \kw{sum\_float}, \kw{sum\_double}-- the +result will be the sum of the given numbers. -\subsubsection{Advanced Array Creation: Demand Creation} +\item \kw{CkReduction::product\_int}, \kw{product\_float}, +\kw{product\_double}-- the result will be the product of the given numbers. -Normally, invoking an entry method on a nonexistent array -element is an error. But if you add the attribute -\index{createhere} \index{createhome} -\kw{[createhere]} or \kw{[createhome]} to an entry method, - the array manager will -``demand create'' a new element to handle the message. +\item \kw{CkReduction::max\_int}, \kw{max\_float}, \kw{max\_double}-- the +result will be the largest of the given numbers. -With \kw{[createhome]}, the new element -will be created on the home processor, which is most efficient when messages for -the element may arrive from anywhere in the machine. With \kw{[createhere]}, -the new element is created on the sending processor, which is most efficient -when messages will often be sent from that same processor. +\item \kw{CkReduction::min\_int}, \kw{min\_float}, \kw{min\_double}-- the +result will be the smallest of the given numbers.
-The new element is created by calling its default (taking no -parameters) constructor, which must exist and be listed in the .ci file. -A single array can have a mix of demand-creation and -classic entry methods, and of demand-created and normally -created elements. +\item \kw{CkReduction::logical\_and}-- the result will be the logical AND of the given +integers. 0 is false, nonzero is true. +\item \kw{CkReduction::logical\_or}-- the result will be the logical OR of the given +integers. +\item \kw{CkReduction::bitvec\_and}-- the result will be the bitvector AND of the given numbers (represented as integers). -\subsubsection{User-defined array index type} +\item \kw{CkReduction::bitvec\_or}-- the result will be the bitvector OR of the given numbers (represented as integers). -\index{Array index type, user-defined} -\charmpp{} array indices are arbitrary collections of integers. -To define a new array index, you create an ordinary C++ class -which inherits from \kw{CkArrayIndex} and sets the ``nInts'' member -to the length, in integers, of the array index. +\item \kw{CkReduction::set}-- the result will be a verbatim concatenation of +all the contributed data, separated into \kw{CkReduction::setElement} records. +The data contributed can be of any length, and can vary across array elements +or reductions. To extract the data from each element, see the description +below. -For example, if you have a structure or class named ``Foo'', you -can use a \uw{Foo} object as an array index by defining the class: +\item \kw{CkReduction::concat}-- the result will be a byte-by-byte +concatenation of all the contributed data. The contributed elements +are not delimiter-separated.
-\begin{alltt} -#include <charm++.h> -class CkArrayIndexFoo:public CkArrayIndex \{ - Foo f; -public: - CkArrayIndexFoo(const Foo \&in) - \{ - f=in; - nInts=sizeof(f)/sizeof(int); - \} - //Not required, but convenient: cast-to-foo operators - operator Foo &() \{return f;\} - operator const Foo &() const \{return f;\} -\}; -\end{alltt} +\end{enumerate} -Note that \uw{Foo}'s size must be an integral number of integers-- -you must pad it with zero bytes if this is not the case. -Also, \uw{Foo} must be a simple class-- it cannot contain -pointers, have virtual functions, or require a destructor. -Finally, there is a \charmpp\ configuration-time option called -CK\_ARRAYINDEX\_MAXLEN \index{CK\_ARRAYINDEX\_MAXLEN} -which is the largest allowable number of -integers in an array index. The default is 3; but you may -override this to any value by passing -DCK\_ARRAYINDEX\_MAXLEN=n'' -to the \charmpp\ build script as well as all user code. Larger -values will increase the size of each message. -You can then declare an array indexed by \uw{Foo} objects with +\kw{CkReduction::set} returns a collection of \kw{CkReduction::setElement} +objects, one per contribution. This class has the definition: \begin{alltt} -//in the .ci file: -array [Foo] AF \{ entry AF(); ... \} - -//in the .h file: -class AF : public CBase\_AF -\{ public: AF() \{\} ... \} - -//in the .C file: - Foo f; - CProxy_AF a=CProxy_AF::ckNew(); - a[CkArrayIndexFoo(f)].insert(); - ... -\end{alltt} - -Note that since our CkArrayIndexFoo constructor is not declared -with the explicit keyword, we can equivalently write the last line as: - -\begin{alltt} - a[f].insert(); -\end{alltt} - -When you implement your array element class, as shown above you -can inherit from \kw{CBase}\_\uw{ClassName}, -a class templated by the index type \uw{Foo}. In the old syntax, -you could also inherit directly from \kw{ArrayElementT}. -The array index (an object of type \uw{Foo}) is then accessible as -thisIndex''. 
For example: - -\begin{alltt} - -//in the .C file: -AF::AF() -\{ - Foo myF=thisIndex; - functionTakingFoo(myF); -\} -\end{alltt} - - -\subsubsection{Migratable Array Elements} - -\label{arraymigratable} -Array objects can \index{migrate}migrate from one PE to another. -For example, the load balancer (see section~\ref{lbFramework}) -might migrate array elements to better balance the load between -processors. For an array element to migrate, it must implement -a pack/unpack or pup'' method: - -\begin{alltt} -//In the .h file: -class A2 : public CBase\_A2 \{ -private: //My data members: - int nt; - unsigned char chr; - float flt[7]; - int numDbl; - double *dbl; -public: - //...other declarations - - virtual void pup(PUP::er \&p); -\}; - -//In the .C file: -void A2::pup(PUP::er \&p) -\{ - CBase\_A2::pup(p); //<- MUST call superclass's pup routine - p|nt; - p|chr; - p(flt,7); - p|numDbl; - if (p.isUnpacking()) dbl=new double[numDbl]; - p(dbl,numDbl); -\} -\end{alltt} - -Please note that if your object contains Structured Dagger code (see section Structured Dagger'') you must use the following syntax to correctly pup the object: - -\begin{alltt} -class bar: public CBase\_bar \{ - private: - int a,b; - public: - bar_SDAG_CODE - ...other methods... - - virtual void pup(PUP::er& p) \{ - __sdag_pup(p); - ...pup other data here... - \} -\}; -\end{alltt} - -See the \index{PUP} section PUP'' for more details on pup routines -and the \kw{PUP::er} type. - -The system uses one pup routine to do both packing and unpacking by -passing different types of \kw{PUP::er}s to it. You can determine -what type of \kw{PUP::er} has been passed to you with the -\kw{isPacking()}, \kw{isUnpacking()}, and \kw{isSizing()} calls. - -An array element can migrate by calling the \kw{migrateMe}(\uw{destination -processor}) member function-- this call must be the last action -in an element entry point. The system can also migrate array elements -for load balancing (see the section~\ref{lbarray}). 
- -To migrate your array element to another processor, the \charmpp{} -runtime will: - -\begin{itemize} -\item Call your \kw{ckAboutToMigrate} method -\item Call your \uw{pup} method with a sizing \kw{PUP::er} to determine how -big a message it needs to hold your element. -\item Call your \uw{pup} method again with a packing \kw{PUP::er} to pack -your element into a message. -\item Call your element's destructor (killing off the old copy). -\item Send the message (containing your element) across the network. -\item Call your element's migration constructor on the new processor. -\item Call your \uw{pup} method with an unpacking \kw{PUP::er} to unpack -the element. -\item Call your \kw{ckJustMigrated} method -\end{itemize} - -Migration constructors, then, are normally empty-- all the unpacking -and allocation of the data items is done in the element's \uw{pup} routine. -Deallocation is done in the element destructor as usual. - - -\subsubsection{Load Balancing Chare Arrays} - -See section~\ref{lbFramework}. - - -\subsubsection{Local Access} - -\experimental{} -\index{ckLocal for arrays} -\label{ckLocal for arrays} -You can get direct access to a local array element using the -proxy's \kw{ckLocal} method, which returns an ordinary \CC\ pointer -to the element if it exists on the local processor; and NULL if -the element does not exist or is on another processor. - -\begin{alltt} -A1 *a=a1[i].ckLocal(); -if (a==NULL) //...is remote-- send message -else //...is local-- directly use members and methods of a -\end{alltt} - -Note that if the element migrates or is deleted, any pointers -obtained with \kw{ckLocal} are no longer valid. It is best, -then, to either avoid \kw{ckLocal} or else call \kw{ckLocal} -each time the element may have migrated; e.g., at the start -of each entry method. - - -\subsubsection{Array Section} - -\experimental{} -\label{array section} - -\charmpp{} supports array sections: an array section is a subset of the -elements of a chare array.
\charmpp{} also supports array sections -which are a subset of array elements drawn from multiple chare arrays of the -same type (see Section~\ref{cross array section}). -A special proxy for an array section can be created given a list of array -indexes of elements. -Multicast operations are directly supported on an array section proxy with -an unoptimized direct-sending implementation. -Section reduction is not directly supported by the section proxy. -However, an optimized section multicast/reduction -library called ``CkMulticast'' is provided as a separate library module, -which can be plugged in as a delegation of a section proxy for performing -section-based multicasts and reductions. - -For each chare array "A" declared in a ci file, a section proxy -of type "CProxySection\_A" is automatically generated in the decl and def -header files. -In order to create an array section, a user needs to provide the array indexes -of all the array section members. -You can create an array section proxy in your application by -invoking the ckNew() function of the CProxySection. -For example, for a 3D array: - -\begin{alltt} - CkVec<CkArrayIndex3D> elems; // add array indices - for (int i=0; i<10; i++) - for (int j=0; j<20; j+=2) - for (int k=0; k<30; k+=2) - elems.push_back(CkArrayIndex3D(i, j, k)); - CProxySection_Hello proxy = CProxySection_Hello::ckNew(helloArrayID, elems.getVec(), elems.size()); -\end{alltt} - -Alternatively, one can do the same thing by providing [lbound:ubound:stride] -for each dimension: - -\begin{alltt} - CProxySection_Hello proxy = CProxySection_Hello::ckNew(helloArrayID, 0, 9, 1, 0, 19, 2, 0, 29, 2); -\end{alltt} - -The above code creates a section proxy that contains the array elements -[0:9, 0:19:2, 0:29:2]. - -For user-defined array index types other than CkArrayIndex1D to CkArrayIndex6D, -one needs to use the generic array index type: CkArrayIndex.
- -\begin{alltt} - CkArrayIndex *elems; // add array indices - int numElems; - CProxySection_Hello proxy = CProxySection_Hello::ckNew(helloArrayID, elems, numElems); -\end{alltt} - -Once you have the array section proxy, you can multicast to all the -section members, or send messages to one member using its index that -is local to the section, as follows: - -\begin{alltt} - CProxySection_Hello proxy; - proxy.someEntry(...) // multicast - proxy[0].someEntry(...) // send to the first element in the section. -\end{alltt} - -You can move the section proxy in a message to another processor, and still -safely invoke entry functions on the section proxy. - -In the multicast example above, for a section with k members, a total -of k messages will be sent to the members, which is considered -inefficient when several members are on the same processor, in which -case only one message needs to be sent to that processor and delivered to -all section members on that processor locally. To support this optimization, -a separate library called CkMulticast is provided. This library also supports -section-based reduction. - -Note: Use of the bulk array constructor (dimensions given in the CkNew -or CkArrayOptions rather than individual insertion) will allow -construction to race ahead of several other startup procedures; this -creates some limitations on construction delegation and the use of -array section proxies. For safety, array sections should be -created in a post-constructor entry method.
- - -\label{array_section_multicast} - - -To use the library, you need to compile and install the CkMulticast library and -link your applications against the library using -module: - -\begin{alltt} - # compile and install the CkMulticast library, do this only once - cd charm/net-linux/tmp - make multicast - - # link CkMulticast library using -module when compiling application - charmc -o hello hello.o -module CkMulticast -language charm++ -\end{alltt} - -The CkMulticast library is implemented using delegation (Section~\ref{delegation}). -A special ``CkMulticastMgr'' Chare Group is created as a -delegation target for section multicast/reduction-- all the messages sent -by the section proxy will be passed to the local delegation branch. - -To use the CkMulticast delegation, one needs to create the CkMulticastMgr Group -first, and then set up the delegation relationship between the section proxy and -the CkMulticastMgr Group. -Only one CkMulticastMgr Group needs to be created globally; -it can serve all multicast/reduction delegations -for different array sections in an application: - -\begin{alltt} - CProxySection_Hello sectProxy = CProxySection_Hello::ckNew(...); - CkGroupID mCastGrpId = CProxy_CkMulticastMgr::ckNew(); - CkMulticastMgr *mCastGrp = CProxy_CkMulticastMgr(mCastGrpId).ckLocalBranch(); - - sectProxy.ckSectionDelegate(mCastGrp); // initialize section proxy - - sectProxy.someEntry(...) //multicast via delegation library as before -\end{alltt} - -By default, the CkMulticastMgr group builds a spanning tree for multicast/reduction -with a factor of 2 (binary tree). -One can specify a different factor when creating a CkMulticastMgr group. -For example, - -\begin{alltt} - CkGroupID mCastGrpId = CProxy_CkMulticastMgr::ckNew(3); // factor is 3 -\end{alltt} - -Note that to use the CkMulticast library, all multicast messages must inherit from -CkMcastBaseMsg, as follows.
-Note that CkMcastBaseMsg must come first; this is IMPORTANT for the CkMulticast -library to retrieve section information out of the message. - - \begin{alltt} -class HiMsg : public CkMcastBaseMsg, public CMessage_HiMsg +class CkReduction::setElement \{ public: - int *data; + int dataSize;//The length of the data array below + char data[];//The (dataSize-long) array of data + CkReduction::setElement *next(void); \}; \end{alltt} -Due to this restriction, you need to define messages explicitly for multicast -entry functions and no parameter marshalling can be used for multicast with -the CkMulticast library. - -\paragraph{Array Section Reduction} - -Since an array element can be a member of multiple array sections, -there has to be a way for each array element to specify to which array -section it wants to contribute. For this purpose, a data structure -called ``CkSectionInfo'' is created by CkMulticastMgr for each -array section that the array element belongs to. -When doing a section reduction, the array element needs to pass the -\kw{CkSectionInfo} as a parameter in the \kw{contribute()}. -The \kw{CkSectionInfo} can be retrieved -from a message in a multicast entry function using the function call -\kw{CkGetSectionInfo}: +To extract the contribution of each array element from a reduction set, use the +\uw{next} routine repeatedly: \begin{alltt} - CkSectionInfo cookie; - - void SayHi(HiMsg *msg) + //Inside a reduction handler-- + // data is our reduced data from CkReduction_set + CkReduction::setElement *cur=(CkReduction::setElement *)data; + while (cur!=NULL) \{ - CkGetSectionInfo(cookie, msg); // update section cookie every time - int data = thisIndex; - mcastGrp->contribute(sizeof(int), &data, CkReduction::sum_int, cookie); + ...
//Use cur->dataSize and cur->data +//Now advance to the next element's contribution + cur=cur->next(); \} \end{alltt} -Note that the cookie cannot be used as a one-time local variable in the -function; the same cookie is needed for the next contribute call. This is -because the cookie includes some context-sensitive information, for example the -reduction counter. The function \kw{CkGetSectionInfo()} only updates part -of the data in the cookie; it does not create a brand new one. - -Similar to array reduction, to use section-based reduction, a reduction -client CkCallback object needs to be created. You may pass the client callback -as an additional parameter to \kw{contribute}. If different contribute calls -pass different callbacks, some (unspecified, unreliable) callback will be -chosen for use. See the following example: - -\begin{alltt} - CkCallback cb(CkIndex_myArrayType::myReductionEntry(NULL),thisProxy); - mcastGrp->contribute(sizeof(int), &data, CkReduction::sum_int, cookie, cb); -\end{alltt} +The reduction set order is undefined. You should add a source field to the +contributed elements if you need to know which array element gave a particular +contribution. Additionally, if the contributed elements are of a complex +data type, you will likely have to supply code for +%serialize/unserialize operation on your element structure if your +%reduction element data is complex. +serializing/deserializing them. +Consider using the \kw{PUP} +interface (see Section~\ref{sec:pup}) to simplify your object serialization +needs. -If no member passes a callback to contribute, the reduction will use the -default callback. You set the default callback for an array section using a -\kw{setReductionClient} call from the section root member. A -{\bf CkReductionMsg} message will be passed to this callback, which -must delete the message when done.
+If the outcome of your reduction is dependent on the order in which +data elements are processed, or if your data is just too +heterogeneous to be handled elegantly by the predefined types and you +don't want to undertake multiple reductions, it may be best to define +your own reduction type. See the next section +(Section~\ref{new_type_reduction}) for details. -\begin{alltt} - CProxySection_Hello sectProxy; - CkMulticastMgr *mcastGrp = CProxy_CkMulticastMgr(mCastGrpId).ckLocalBranch(); - mcastGrp->setReductionClient(sectProxy, new CkCallback(...)); -\end{alltt} +\section{Destroying Array Elements} -As with array reductions, users can use built-in reduction -types (Section~\ref{builtin_reduction}) or define their own reducer functions -(Section~\ref{new_type_reduction}). - -\paragraph{Array section multicast/reduction when migration happens} - -Using multicast/reduction, you don't need to worry about array migrations. -When migration happens, an array element in the array section can still use -the \kw{CkSectionInfo} it stored previously for doing reduction. -Reduction messages will be correctly delivered but may not be as efficient -until a new multicast spanning tree is rebuilt internally -in the \kw{CkMulticastMgr} library. -When a new spanning tree is rebuilt, an updated \kw{CkSectionInfo} is -passed along with a multicast message, -so it is recommended that -the \kw{CkGetSectionInfo()} function always be called when a multicast -message arrives (as shown in the above SayHi example). - -When a multicast root migrates, one needs to reconstruct the -spanning tree to get optimal performance. The following -warning message is printed if this is not done: -"Warning: Multicast not optimized after multicast root migrated."
-In the current implementation, the user needs to initiate the rebuilding process -as follows: - -\begin{alltt} -void Foo::pup(PUP::er & p) { - // if I am multicast root and it is unpacking - if (ismcastroot && p.isUnpacking()) { - CProxySection_Foo fooProxy; // proxy for the section - CkMulticastMgr *mg = CProxy_CkMulticastMgr(mCastGrpId).ckLocalBranch(); - mg->resetSection(fooProxy); - // you may want to reset reduction client to root - CkCallback *cb = new CkCallback(...); - mg->setReductionClient(mcp, cb); - } -} -\end{alltt} - -\paragraph{Cross Array Sections} - - -\experimental{} -\label{cross array section} - -Cross array sections contain elements from multiple arrays. -Construction and use of cross array sections is similar to normal -array sections, with the following restrictions. - -\begin{itemize} - -\item Arrays in a section must all be of the same type. - -\item Each array must be enumerated by array ID. - -\item The elements within each array must be enumerated explicitly. - -\item No existing modules currently support delegation of cross - section proxies. Therefore, reductions are not currently supported. - -\end{itemize} - -Note: cross-section logic also works for groups with analogous characteristics.
- -Given three arrays declared as follows: - -\begin{alltt} - CkArrayID *aidArr= new CkArrayID[3]; - CProxy\_multisectiontest\_array1d *Aproxy= new CProxy\_multisectiontest\_array1d[3]; - for(int i=0;i<3;i++) - \{ - Aproxy[i]=CProxy\_multisectiontest\_array1d::ckNew(masterproxy.ckGetGroupID(),ArraySize); - aidArr[i]=Aproxy[i].ckGetArrayID(); - \} -\end{alltt} - -One can make a section including the lower half elements of all three -arrays as follows: +To destroy an array element -- detach it from the array, +call its destructor, and release its memory -- invoke its +\kw{Array destroy} method, as: \begin{alltt} - int aboundary=ArraySize/2; - int afloor=aboundary; - int aceiling=ArraySize-1; - int asectionSize=aceiling-afloor+1; - // cross section lower half of each array - CkArrayIndex **aelems= new CkArrayIndex*[3]; - aelems[0]= new CkArrayIndex[asectionSize]; - aelems[1]= new CkArrayIndex[asectionSize]; - aelems[2]= new CkArrayIndex[asectionSize]; - int *naelems=new int[3]; - for(int k=0;k<3;k++) - \{ - naelems[k]=asectionSize; - for(int i=afloor,j=0;i<=aceiling;i++,j++) - aelems[k][j]=CkArrayIndex1D(i); - \} - CProxySection\_multisectiontest\_array1d arrayLowProxy(3,aidArr,aelems,naelems); +a1[i].ckDestroy(); \end{alltt} - - -The resulting cross section proxy, as in the example \uw{arrayLowProxy}, -can then be used for multicasts in the same way as a normal array -section. - -Note: For simplicity, the example has all arrays and sections of uniform -size. The size of each array and the number of elements in each array -within a section can all be set independently. - - +Note that this method can also be invoked remotely, i.e., from +a process different from the one on which the array element resides. +You must ensure that no messages are sent to a deleted element. +After destroying an element, you may insert a new element at +its index.
index 600bc2400278ea0a3e6fd132e23e0e1022b54e2f..9605e842a3502307e65f96f83d90722c2c897337 100644 (file) -\subsection{Callbacks} - \label{callbacks} -A callback is a generic way to transfer control back to a client -after a \charmpp{} library has finished. For example, after finishing a reduction, -you might want the results passed to some chare's entry method. -To do this, you create an object of type \kw{CkCallback} with -the chare's \kw{CkChareID} and entry method index, then pass the -callback object to the reduction library. - +Callbacks provide a generic way to store the information required to +invoke a communication target, such as a chare's entry method, at a +future time. Callbacks are often encountered when writing library +code, where they provide a simple way to transfer control back to a +client after the library has finished. For example, after finishing a +reduction, you may want the results passed to some chare's entry +method. To do this, you would create an object of type +\kw{CkCallback} with the chare's \kw{CkChareID} and entry method +index, and pass this callback object to the reduction library. -\subsubsection{Client Interface} +\section{Creating a CkCallback Object} +\label{sec:callbacks/creating} \index{CkCallback} -You can create a \kw{CkCallback} object in a number of ways, -depending on what you want to have happen when the callback is -finally invoked. The callback will be invoked with a \charmpp{} -message; but the message type will depend on the \charmpp{} library that -actually invokes the callback. Check the library documentation -to see what kind of message the library will send to your callback. -In any case, you are required to free the message passed to you via -the callback. - -The callbacks that go to chares require an entry method index'', -an integer that identifies which entry method will be called. 
-You can get an entry method index using the syntax: +There are several different types of \kw{CkCallback} objects; the type +of the callback specifies the intended behavior upon invocation of the +callback. Callbacks must be invoked with the \charmpp{} message of the +type specified when creating the callback. If the callback is being +passed into a library which will return its result through the +callback, it is the user's responsibility to ensure that the type of +the message delivered by the library is the same as that specified in +the callback. Messages delivered through a callback are not +automatically freed by the Charm RTS. They should be freed or stored +for future use by the user. + +Callbacks that target chares require an entry method index'', an +integer that identifies which entry method will be called. An entry +method index is the \charmpp{} version of a function pointer. The +entry method index can be obtained using the syntax: \begin{alltt} -\kw{myIdx}=CkIndex_\uw{ChareName}::\uw{EntryMethod}(\uw{parameters}); +\uw{int myIdx} = CkIndex_\uw{ChareName}::\uw{EntryMethod}(\uw{parameters}); \end{alltt} -Here, \uw{ChareName} is the name of the chare (group, or array) containing -the desired entry method, \uw{EntryMethod} is the name of that entry method, -and \uw{parameters} are the parameters taken by the method. -These parameters are only used to resolve the proper \uw{EntryMethod}; -they are otherwise ignored. An entry method index is the \charmpp{} -version of a function pointer. - - -There are a number of ways to build callbacks, depending on what you -want to have happen when the callback is invoked: +Here, \uw{ChareName} is the name of the chare (group, or array) +containing the desired entry method, \uw{EntryMethod} is the name of +that entry method, and \uw{parameters} are the parameters taken by the +method. These parameters are only used to resolve the proper +\uw{EntryMethod}; they are otherwise ignored. 
+ +Under most circumstances, entry methods to be invoked through a +CkCallback must take a single message pointer as argument. As such, if +the entry method specified in the callback is not overloaded, using +NULL in place of parameters will suffice in fully specifying the +intended target. If the entry method is overloaded, a message pointer +of the appropriate type should be defined and passed in as a parameter +when specifying the entry method. The pointer does not need to be +initialized, as the argument is only used to resolve the target entry +method. + +The intended behavior upon a callback's invocation is specified +through the choice of callback constructor used when creating the callback. +Possible constructors are: \begin{enumerate} -\item \kw{CkCallback(CkCallbackFn fn,void *param)} When invoked, the -callback will pass \uw{param} and the result message to the given C function, -which should have a prototype like: +\item \kw{CkCallback(void (*CallbackFn)(void *, void *), void *param)} - +When invoked, the callback will pass \uw{param} and the result message +to the given C function, which should have a prototype +like: \begin{alltt} -void \uw{myCallbackFn}(void *param,void *message) +void \uw{myCallbackFn}(void *param, void *message) \end{alltt} This function will be called on the processor where the callback was created, -so \uw{param} is allowed to point to heap-allocated data. Of course, you +so \uw{param} is allowed to point to heap-allocated data. Hence, this +constructor should be used only when it is known that the callback target (which by definition here +is just a C-like function) will be on the same processor as the one on which the constructor was called. +Of course, you are required to free any storage referenced by \uw{param}. -\item \kw{CkCallback(CkCallback::ignore)} When invoked, the callback -will do nothing.
\item \kw{CkCallback(CkCallback::ignore)} - When invoked, the callback
will do nothing. This can be useful if a \charmpp{} library requires
a callback, but you don't care when it finishes, or will find out some
other way.

\item \kw{CkCallback(CkCallback::ckExit)} - When invoked, the callback
will call CkExit(), ending the \charmpp{} program.

\item \kw{CkCallback(int ep, const CkChareID \&id)} - When invoked, the
callback will send its message to the given entry method (specified by
the entry point index \kw{ep}) of the given chare (specified by the
chare \kw{id}). Note that a chare proxy will also work in place of a
chare id:

\begin{alltt}
    CkCallback myCB(CkIndex_myChare::myEntry(NULL), myChareProxy);
\end{alltt}

\item \kw{CkCallback(int ep, const CkArrayID \&id)} - When invoked, the
callback will broadcast its message to the given entry method of the
given array. An array proxy will work in the place of an array id.

\item \kw{CkCallback(int ep, const CkArrayIndex \&idx, const CkArrayID \&id)} -
When invoked, the callback will send its message to the given entry
method of the given array element.

\item \kw{CkCallback(int ep, const CkGroupID \&id)} - When invoked, the
callback will broadcast its message to the given entry method of the
given group.

\item \kw{CkCallback(int ep, int onPE, const CkGroupID \&id)} -
When invoked, the callback will send its message to the given entry
method of the given group member.

\end{enumerate}

One final type of callback, \kw{CkCallbackResumeThread()}, can only be
used from within threaded entry methods. This callback type is
discussed in section~\ref{sec:ckcallbackresumethread}.

\section{CkCallback Invocation}
\label{libraryInterface}

A properly initialized \kw{CkCallback} object stores a global
destination identifier, and as such can be freely copied, marshalled,
and sent in messages. A CkCallback is invoked by calling its \kw{send}
method with the result message as an argument. As an example, a
library which accepts a CkCallback object from the user and then
invokes it to return a result may have the following interface:

\begin{alltt}
//Main library entry point, called by asynchronous users:
void myLibrary(...library parameters...,const CkCallback \&cb)
\{
  ..start some parallel computation, store cb to be passed to myLibraryDone later...
\}

//Internal library routine, called when computation is done
void myLibraryDone(...parameters...,const CkCallback \&cb)
\{
  ...
\}
\end{alltt}

A \kw{CkCallback} will accept any message type, or even NULL. The
message is immediately sent to the user's client function or entry
point. A library which returns its result through a callback should
have a clearly documented return message type. The type of the message
returned by the library must be the same as the type accepted by the
entry method specified in the callback.

As an alternative to ``send'', the callback can be used in a {\em
contribute} collective operation. This will internally invoke the
``send'' method on the callback when the contribute operation has
finished.
For examples of how to use the various callback types, please see
\testreffile{megatest/callback.C}.

\section{Synchronous Execution with CkCallbackResumeThread}
\label{sec:ckcallbackresumethread}

Threaded entry methods can be suspended and resumed through the {\em
CkCallbackResumeThread} class. {\em CkCallbackResumeThread} is derived
from {\em CkCallback} and has specific functionality for threads. This
class automatically suspends the thread when the destructor of the
callback is called. A suspended threaded client will resume when the
``send'' method is invoked on the associated callback. It can be used
in situations when the return value is not needed, and only the
synchronization is important. For example:

\begin{alltt}
// Call the "doWork" method and wait until it has completed
void mainControlFlow() \{
   ...
   doWork(...,CkCallbackResumeThread());
   // or send a broadcast to a chare collection
   myProxy.doWork(...,CkCallbackResumeThread());
   // callback goes out of scope; the thread is suspended until doWork calls 'send' on the callback
   ...some more work...
\}
\end{alltt}

In all cases a {\em CkCallbackResumeThread} can be used to suspend a
thread only once.\\
(See \examplerefdir{barnes-charm} for a complete example.)\\
{\em Deprecated usage}: in the past, ``thread\_delay'' was used to
retrieve the incoming message from the callback. While that is still
allowed for backward compatibility, its usage is deprecated.

\section{Chare Objects}

\index{chare}Chares are concurrent objects with methods that can be
invoked remotely. These methods are known as \index{entry method}entry
methods. All chares must have a constructor that is an entry method,
and may have any number of other entry methods. All chare classes and
their entry methods are declared in the interface (\texttt{.ci}) file:

\begin{alltt}
    chare ChareType
    \{
        entry ChareType(\uw{parameters1});
        entry void EntryMethodName(\uw{parameters2});
    \};
\end{alltt}

Although it is {\em declared} in an interface file, a chare is a \CC{}
object and must have a normal \CC{} {\em implementation} (definition)
in addition. A chare class {\tt ChareType} must inherit from the class
{\tt CBase\_ChareType}, which is a special class generated by the
\charmpp{} translator from the interface file. Note that \CC{}
namespace constructs can be used in the interface file, as
demonstrated in \examplerefdir{namespace}.

To be concrete, the \CC{} definition of the \index{chare}chare above
might have the following definition in a \texttt{.h} file:

\begin{alltt}
class ChareType : public CBase\_ChareType \{
    // Data and member functions as in C++
    public:
        ChareType(\uw{parameters1});
        void EntryMethodName(\uw{parameters2});
\};
\end{alltt}
\index{chare}
Each chare encapsulates data associated with medium-grained units of
work in a parallel application. Chares can be dynamically created on
any processor; there may be thousands of chares on a processor. The
location of a chare is usually determined by the dynamic load
balancing strategy. However, once a chare commences execution on a
processor, it does not migrate to other processors\footnote{Except
when it is part of an array.}. Chares do not have a default thread of
control; the entry methods in a chare execute in a message driven
fashion upon the arrival of a message\footnote{Threaded methods
augment this behavior, since they execute in a separate user-level
thread, and thus can block to wait for data.}.

The entry method definition specifies a function that is executed {\em
without interruption} when a message is received and scheduled for
processing. Only one message per chare is processed at a time. Entry
methods are defined exactly as normal \CC{} function members, except
that they must have the return value \kw{void} (except for the
constructor entry method, which may not have a return value, and for a
{\em synchronous} entry method, which is invoked by a {\em threaded}
method in a remote chare). Each entry method can either take no
arguments, take a list of arguments that the runtime system can
automatically pack into a message and send (see
section~\ref{marshalling}), or take a single argument that is a
pointer to a \charmpp{} message (see section~\ref{messages}).

A chare's entry methods can be invoked via {\it proxies} (see
section~\ref{proxies}). Proxies to a chare of type {\tt chareType}
have type {\tt CProxy\_chareType}. By inheriting from the CBase parent
class, each chare gets a {\tt thisProxy} member variable, which holds
a proxy to itself. This proxy can be sent to other chares, allowing
them to invoke entry methods on this chare.

\subsection{Chare Creation}
\label{chare creation}
Once you have declared and defined a chare class, you will want to
create some chare objects to use. Chares are created by the {\tt
ckNew} method, which is a static method of the chare's proxy class:

\begin{alltt}
   CProxy_chareType::ckNew(\uw{parameters}, int destPE);
\end{alltt}

The {\tt parameters} correspond to the parameters of the chare's
constructor. Even if the constructor takes several arguments, all of
the arguments should be passed in order to {\tt ckNew}. If the
constructor takes no arguments, the parameters are omitted. By
default, the new chare's location is determined by the runtime
system. However, this can be overridden by passing a value for {\tt
destPE}, which specifies the PE where the chare will be created.

The \index{chare}chare creation method deposits the \index{seed}{\em
seed} for a chare in a pool of seeds and returns immediately. The
\index{chare}chare will be created later on some processor, as
determined by the dynamic \index{load balancing}load balancing
strategy (or by {\tt destPE}). When a \index{chare}chare is created,
it is initialized by calling its \index{constructor}constructor
\index{entry method}entry method with the parameters specified by {\tt
ckNew}.

Suppose we have declared a chare class {\tt C} with a constructor that
takes two arguments, an {\tt int} and a {\tt double}.

\begin{enumerate}

\item{This will create a new \index{chare}chare of type \uw{C} on {\em
any} processor and return a proxy to that chare:}

\begin{alltt}
   CProxy_C chareProxy = CProxy_C::ckNew(1, 10.0);
\end{alltt}

\item{This will create a new \index{chare}chare of type \uw{C} on
processor \kw{destPE} and return a proxy to that chare:}

\begin{alltt}
   CProxy_C chareProxy = CProxy_C::ckNew(1, 10.0, destPE);
\end{alltt}

\end{enumerate}

For an example of chare creation in a full application, see
\examplerefdir{fib} in the \charmpp{} software distribution, which
calculates Fibonacci numbers in parallel.
\subsection{Method Invocation on Chares}

A message \index{message} may be sent to a \index{chare}chare through
a proxy object using the notation:

\begin{alltt}
   chareProxy.EntryMethod(\uw{parameters})
\end{alltt}

This invokes the entry method \uw{EntryMethod} on the chare referred
to by the proxy \uw{chareProxy}. This call is asynchronous and
non-blocking; it returns immediately after sending the message.

\subsection{Local Access}

You can get direct access to a local chare using the proxy's
\kw{ckLocal} method, which returns an ordinary \CC{} pointer to the
chare if it exists on the local processor, and NULL otherwise.

\begin{alltt}
   C *c = chareProxy.ckLocal();
   if (c == NULL) \{
       // object is remote; send message
   \} else \{
       // object is local; directly use members and methods of c
   \}
\end{alltt}
\charmpp{} offers a couple of checkpoint/restart mechanisms. Each of
these targets a specific need in parallel programming. However, both
of them are based on the same infrastructure.

Traditional chare-array-based \charmpp{} applications, including AMPI
applications, can be checkpointed to storage buffers (either files or
memory regions) and be restarted later from those buffers. The basic
idea behind this is straightforward: checkpointing an application is
like migrating its parallel objects from the processors onto buffers,
and restarting is the reverse. Thanks to migration utilities like PUP
methods (Section~\ref{sec:pup}), users can decide what data to save in
checkpoints and how to save them. However, unlike migration (where
certain objects do not need a PUP method), checkpointing requires all
the objects to implement the PUP method.

The two checkpoint/restart schemes implemented are:
\begin{itemize}
\item Shared filesystem: provides support for \emph{split execution},
where the execution of an application is interrupted and later
resumed.
\item Double local-storage: offers an online \emph{fault tolerance}
mechanism for applications running on unreliable machines.
\end{itemize}

\section{Split Execution}

There are several reasons for having to split the execution of an
application. These include protection against job failure, a single
execution needing to run beyond a machine's job time limit, and
resuming execution from an intermediate point with different
parameters. All of these scenarios are supported by a mechanism to
record execution state, and resume execution from it later.

Parallel machines are assembled from many complicated components, each
of which can potentially fail and interrupt execution
unexpectedly. Thus, parallel applications that take long enough to run
from start to completion need to protect themselves from losing work
and having to start over. They can achieve this by periodically taking
a checkpoint of their execution state from which they can later
resume.

Another use of checkpoint/restart is where the total execution time of
the application exceeds the maximum allocation time for a job in a
supercomputer. In that case, an application may checkpoint before the
allocation time expires and then restart from the checkpoint in a
subsequent allocation.

A third reason for having a split execution is when an application
consists of \emph{phases} and each phase may be run a different number
of times with varying parameters. Consider, for instance, an
application with two phases, where the first phase has only one
possible configuration (it is run only once). The second phase may
have several configurations (for testing various algorithms). In that
case, once the first phase is complete, the application checkpoints
the result. Further executions of the second phase may just resume
from that checkpoint.

An example of \charmpp{}'s support for split execution can be seen in
\testrefdir{chkpt/hello}.

\subsection{Checkpointing}
\label{sec:diskcheckpoint}

The API to checkpoint the application is:

\begin{alltt}
   void CkStartCheckpoint(char* dirname, const CkCallback& cb);
\end{alltt}
The string {\it dirname} is the destination directory where the
checkpoint files will be stored, and {\it cb} is the callback function
which will be invoked after the checkpoint is done, as well as when
the restart is complete. Here is an example of a typical use:

\begin{alltt}
   . . .
   CkCallback cb(CkIndex_Hello::SayHi(), helloProxy);
   CkStartCheckpoint("log", cb);
\end{alltt}

A chare array usually has a PUP routine for the sake of migration. The
PUP routine is also used in the checkpointing and restarting
process. Therefore, it is up to the programmer what to save and
restore for the application. One illustration of this flexibility is a
complicated scientific computation application with 9 matrices, 8 of
which hold intermediate results and 1 that holds the final results of
each timestep. To save resources, the PUP routine can well omit the 8
intermediate matrices and checkpoint the matrix with the final results
of each timestep.

Group and nodegroup objects (Section~\ref{sec:group}) are normally not
meant to be migrated. In order to checkpoint them, however, the user
has to write PUP routines for the groups and declare them as {\tt
[migratable]} in the .ci file. Some programs use {\it mainchares} to
hold key control data like global object counts, and thus mainchares
need to be checkpointed too. To do this, the programmer should write a
PUP routine for the mainchare and declare it as {\tt [migratable]} in
the .ci file, just as in the case of Group and NodeGroup.

The checkpoint must be recorded at a synchronization point in the
application, to ensure a consistent state upon restart. One easy way
to achieve this is to synchronize through a reduction to a single
chare (such as the mainchare used at startup) and have that chare make
the call to initiate the checkpoint.

After {\tt CkStartCheckpoint} is executed, a directory of the
designated name is created and a collection of checkpoint files are
written into it.
\subsection{Restarting}

The user can choose to run the \charmpp{} application in restart mode,
i.e., restarting execution from a previously-created checkpoint. The
command line option {\tt +restart DIRNAME} is required to invoke this
mode. For example:

\begin{alltt}
   > ./charmrun hello +p4 +restart log
\end{alltt}

Restarting is the reverse process of checkpointing. \charmpp{} allows
restarting the old checkpoint on a different number of physical
processors. This provides the flexibility to expand or shrink your
application when the availability of computing resources changes.

Note that on restart, if an array or group reduction client was set to
a static function, the function pointer might be lost and the user
needs to register it again. A better alternative is to always use an
entry method of a chare object. Since all the entry methods are
registered inside the \charmpp{} system, in the restart phase the
reduction client will be automatically restored.

After a failure, the system may contain fewer or more processors. Once
the failed components have been repaired, some processors may become
available again. Therefore, the user may need the flexibility to
restart on a different number of processors than in the checkpointing
phase. This is allowable by giving a different {\tt +pN} option at
runtime. One thing to note is that the new load distribution might
differ from the previous one at checkpoint time, so running a load
balancer (see Section~\ref{loadbalancing}) after restart is suggested.

If the restart is not done on the same number of processors, the
processor-specific data in a group/nodegroup branch cannot (and
usually should not) be restored individually. A copy from processor 0
will be propagated to all the processors.

\subsection{Choosing What to Save}

In your programs, you may use chare groups for different types of
purposes. For example, groups holding read-only data can avoid
excessive data copying, while groups maintaining processor-specific
information are used as a local manager of the processor. In the
latter situation, the data is sometimes too complicated to save and
restore but easy to re-compute. For the read-only data, you want to
save and restore it in the PUP'er routine and leave empty the
migration constructor, via which the new object is created during
restart. For the easy-to-recompute type of data, we just omit the
PUP'er routine and do the data reconstruction in the group's migration
constructor.

A similar example is the program mentioned above, where there are two
types of chare arrays, one maintaining intermediate results while the
other type holds the final result for each timestep. The programmer
can take advantage of this flexibility by leaving the PUP'er routine
empty for intermediate objects, and doing save/restore only for the
important objects.
-The double checkpoint/restart protocol described in this subsection -provides an automatic fault tolerance solution. When a failure occurs, -the program can automatically detect the failure and restart from the -checkpoint. -Further, this fault-tolerance protocol does not rely on any reliable -storage (as needed in the previous method). -Instead, it stores two copies of checkpoint data to two different -locations (can be memory or disk). -This double checkpointing ensures the availability of one checkpoint in case -the other is lost. -The double in-memory checkpoint/restart scheme is useful and efficient -for applications with small memory footprint at the checkpoint state. -The double in-disk variation stores checkpoints into local disk, thus -can be useful for applications with large memory footprint. +As supercomputers grow in size, their reliability decreases +correspondingly. This is because the number of components that can be +assembled into a machine grows faster than the reliability of each +individual component improves. What we can expect in the future is that applications will +run on unreliable hardware. + +The previous disk-based checkpoint/restart can be used as a fault +tolerance scheme. However, it would be a very basic scheme in that +when a failure occurs, the whole program gets killed and the user has +to manually restart the application from the checkpoint files. The +double local-storage checkpoint/restart protocol described in this +subsection provides an automatic fault tolerance solution. When a +failure occurs, the program can automatically detect the failure and +restart from the checkpoint. Further, this fault-tolerance protocol +does not rely on any reliable external storage (as needed in the previous +method). Instead, it stores two copies of checkpoint data to two +different locations (can be memory or local disk). This double +checkpointing ensures the availability of one checkpoint in case the +other is lost.
The double in-memory checkpoint/restart scheme is +useful and efficient for applications with small memory footprint at +the checkpoint state. The double in-disk variant stores checkpoints +into the local disk, and thus can be useful for applications with large memory +footprint. %Its advantage is to reduce the recovery %overhead to seconds when a failure occurs. %Currently, this scheme only supports Chare array-based Charm++ applications. -\subsubsection{Checkpointing} - -The function that user can call to initiate a checkpointing in a Chare -array-based application is: +\subsection{Checkpointing} +The function that application developers can call to record a checkpoint in a +chare-array-based application is: \begin{alltt} void CkStartMemCheckpoint(CkCallback &cb) \end{alltt} - -where {\it cb} has the same meaning as in the Section~\ref{sec:diskcheckpoint} . -Just like the above disk checkpoint described, it is up to programmer what to save. -The programmer is responsible for choosing when to activate checkpointing so that -the size of a global checkpoint state can be minimal. - -In AMPI applications, user just needs to call the following function to -start checkpointing: - +where {\it cb} has the same meaning as in +section~\ref{sec:diskcheckpoint}. Just like the disk checkpoint +described above, it is up to the programmer to decide what to save. The +programmer is responsible for choosing when to activate checkpointing +so that the size of a global checkpoint state, and consequently the +time to record it, is minimized. + +In AMPI applications, the user just needs to call the following +function to record a checkpoint: \begin{alltt} void AMPI_MemCheckpoint() \end{alltt} -\subsubsection{Restarting} +\subsection{Restarting} When a processor crashes, the restart protocol will be automatically -invoked to recover all objects using the last checkpoints. And then the program -will continue to run on the survived processors.
This is based on the assumption -that there are no extra processors to replace the crashed ones. - -However, if there are a pool of extra processors to replace the crashed ones, -the fault-toerlance protocol can also take advantage of this to grab one -free processor and let the program run on the same number of processors -as before crash. -In order to achieve this, \charmpp{} needs to be compiled with the macro option - {\it CK\_NO\_PROC\_POOL} turned on. +invoked to recover all objects using the last checkpoints. The program +will continue to run on the surviving processors. This is based on the +assumption that there are no extra processors to replace the crashed +ones. + +However, if there is a pool of extra processors to replace the +crashed ones, the fault-tolerance protocol can also take advantage of +this to grab one free processor and let the program run on the same +number of processors as before the crash. In order to achieve +this, \charmpp{} needs to be compiled with the macro option {\it +CK\_NO\_PROC\_POOL} turned on. + +\subsection{Double in-disk checkpoint/restart} + +A variant of double memory checkpoint/restart, {\it double in-disk +checkpoint/restart}, can be applied to applications with large memory +footprint. In this scheme, instead of storing checkpoints in the +memory, it stores them in the local disk. The checkpoint files are +named ``ckpt[CkMyPe]-[idx]-XXXXX'' and are stored under the /tmp +directory. + +Users can pass the runtime option {\it +ftc\_disk} to activate this +mode. For example: +\begin{alltt} + ./charmrun hello +p8 +ftc_disk +\end{alltt} -\subsubsection{Double in-disk checkpoint/restart} +\subsection{Building Instructions} +In order to have the double local-storage checkpoint/restart +functionality available, the parameter \emph{syncft} must be provided +at build time: -A variation of double memory checkpoint/restart, -{\it double in-disk checkpoint/restart}, -can be applied to applcaitions with large memory footprint.
-In this scheme, instead of storing checkpoints in the memory, it stores -them in the local disk. -The checkpoint files are named "ckpt[CkMyPe]-[idx]-XXXXXX" and are stored under /tmp. +\begin{alltt} + ./build charm++ net-linux-x86_64 syncft +\end{alltt} -A programmer can use runtime option {\it +ftc\_disk} to switch to this mode. -For example: +At present, only a few of the machine layers underlying the \charmpp{} +runtime system support resilient execution. These include the +TCP-based \texttt{net} builds on Linux and Mac OS X. + +\subsection{Failure Injection} +To test that your application is able to successfully recover from +failures using the double local-storage mechanism, we provide a +failure injection mechanism that lets you specify which PEs will fail +at what point in time. You must create a text file with two +columns. The first column will store the PEs that will fail. The second +column will store the time at which the corresponding PE will +fail. Make sure all the failures occur after the first checkpoint. The +runtime parameter \emph{kill\_file} has to be added to the command +line along with the file name: \begin{alltt} - ./charmrun hello +p8 +ftc_disk + ./charmrun hello +p8 +kill_file <file> \end{alltt} - +An example of this usage can be found in the \texttt{syncfttest} +targets in \testrefdir{jacobi3d}. diff --git a/doc/charm++/ckloop.tex b/doc/charm++/ckloop.tex new file mode 100644 (file) index 0000000..c404a17 --- /dev/null @@ -0,0 +1,64 @@ +To better utilize the multicore chip, it has become increasingly popular to +adopt shared-memory multithreading programming methods to exploit parallelism +on a node. For example, in hybrid MPI programs, OpenMP is the most popular +choice. When launching such hybrid programs, users have to make sure there are +spare physical cores allocated to the shared-memory multithreading runtime.
+Otherwise, the runtime that handles distributed-memory programming may +suffer from resource contention because the two independent runtime systems +are not coordinated. If spare cores are allocated, in the same way as +launching an MPI+OpenMP hybrid program, \charmpp{} will work perfectly with any +shared-memory parallel programming language (e.g., OpenMP). + +If there are no spare cores allocated, to avoid the resource contention, a +\emph{unified runtime} is needed to support both the intra-node shared-memory +multithreading parallelism and the inter-node distributed-memory +message-passing parallelism. Additionally, considering that a parallel application +may have only a small but critical fraction of its computation that could be +ported to shared-memory parallelism (the savings on the critical computation may +also reduce the communication cost, thus leading to more performance +improvement), dedicating physical cores on every node to the shared-memory +multithreading runtime would waste computational power because those +dedicated cores are not utilized at all during most of the application's +execution time. This case also indicates the necessity of a unified +runtime so that both types of parallelism are supported. + +The \emph{CkLoop} library is an add-on to the \charmpp{} runtime that provides such a +unified runtime. The library implements a simple OpenMP-like shared-memory +multithreading runtime that re-uses \charmpp{} PEs to perform tasks spawned by +the multithreading runtime. This library is intended to be used in the \charmpp{} +SMP mode. + +The \emph{CkLoop} library is built in +\$CHARM\_DIR/\$MACH\_LAYER/tmp/libs/ck-libs/ckloop by executing the ``make'' +command. To use it in user applications, one has to include ``CkLoopAPI.h'' in +the source code.
The interface functions of this library are explained as +follows: \begin{itemize} \item CProxy\_FuncCkLoop \textbf{CkLoop\_Init}(int +numThreads=0) : This function initializes the CkLoop library, and it only needs +to be called once on a single PE during the initialization phase of the +application. The argument ``numThreads'' is only used in the \charmpp{} +non-SMP mode, specifying the number of threads to be created for the +single-node shared-memory parallelism. It will be ignored in the SMP mode. + +\item void \textbf{CkLoop\_Exit}(CProxy\_FuncCkLoop ckLoop): This function is +intended to be used in the non-SMP mode of \charmpp{} as it frees the resources +(e.g., terminating the spawned threads) used by the CkLoop library. It should +be called on just one PE. + +\item void \textbf{CkLoop\_Parallelize}( \\ HelperFn func, /* the function that +finishes a piece of the work on another thread */ \\ int paramNum, void * param, /* +the input parameters for the above func */ \\ int numChunks, /* number of +chunks to be partitioned */ int lowerRange, int upperRange, /* the loop-like +parallelization happens in [lowerRange, upperRange] */ \\ int sync=1, /* +control the on/off of the implicit barrier after each parallelized loop */ \\ +void *redResult=NULL, REDUCTION\_TYPE type=CKLOOP\_NONE /* the reduction +result, ONLY SUPPORTS A SINGLE VAR of TYPE int/float/double */ \\): +``HelperFn'' is defined as ``typedef void (*HelperFn)(int first,int last, void +*result, int paramNum, void *param);''. ``result'' is the buffer for the +reduction result on a single simple-type variable.
\end{itemize} + +Examples of using this library can be found in \examplerefdir{ckloop} and the +widely-used molecular dynamics simulation +application NAMD\footnote{http://www.ks.uiuc.edu/Research/namd}. + + + index 8902612dc45fa026d24496860906ad7607a97a3c..dc0a442947a354e6a5c7aa65e1e4c23304b5c17f 100644 (file) @@ -34,7 +34,7 @@ information has to be kept, either the CProxy or the ComlibInstanceHandle, and this can be done in readonly variables, or as internal variables of the objects. An example of how to use commlib can be found in the charm distribution, under -examples/charm++/commlib/multicast/ '', where the proxies are associated in the +\examplerefdir{commlib/multicast}, where the proxies are associated in the chare arrays. @@ -452,7 +452,7 @@ sequence the destinations of the multicast to minimize contention on a network. In order to use these strategies, the message sent must inherit from class {\textrm{CkMcastBaseMsg}}. (For an example see -examples/charm++/commlib/multicast/''). +\examplerefdir{commlib/multicast}). These are the subclass strategies that are available: similarity index 91% rename from doc/install/compile.tex rename to doc/charm++/compile.tex index 23233316e4999d2b0442da22a13c5c60c36ad19d..34021463af939e140aa0173c4a094c16d752e3c6 100644 (file) @@ -1,5 +1,3 @@ -\section{Compiling \charmpp{} Programs} - The {\tt charmc} program, located in ``charm/bin'', standardizes compiling and linking procedures among various machines and operating systems. ``charmc'' is @@ -12,14 +10,14 @@ has to add some command-line options in addition to the simplified syntax shown below. The options are described next. \begin{alltt} - * Compile C charmc -o pgm.o pgm.c - * Compile C++ charmc -o pgm.o pgm.C - * Link charmc -o pgm obj1.o obj2.o obj3.o... - * Compile + Link charmc -o pgm src1.c src2.ci src3.C - * Create Library charmc -o lib.a obj1.o obj2.o obj3.o...
- * CPM preprocessing charmc -gen-cpm file.c - * Translate Charm++ Interface File charmc file.ci + * Compile C charmc -o pgm.o pgm.c + * Compile C++ charmc -o pgm.o pgm.C + * Link charmc -o pgm obj1.o obj2.o obj3.o... + * Compile + Link charmc -o pgm src1.c src2.ci src3.C + * Create Library charmc -o lib.a obj1.o obj2.o obj3.o... + * Translate Charm++ Interface File charmc file.ci \end{alltt} +% * CPM preprocessing charmc -gen-cpm file.c Charmc automatically figures out where the charm lib and include directories are --- at no point do you have to configure this @@ -118,7 +116,7 @@ example). Usually, charmc prefers the less buggy of the two, but not always. This option causes charmc to switch to the most reliable compiler, regardless of whether it produces slow code or not. -\item[{\tt -language \{converse|charm++|sdag|ampi|fem|f90charm\}}:] +\item[{\tt -language \{converse|charm++|ampi|fem|f90charm\}}:] When linking with charmc, one must specify the language''. This is just a way to help charmc include the right libraries. Pick the @@ -127,7 +125,6 @@ is just a way to help charmc include the right libraries. Pick the \begin{itemize} \item{{\bf Charm++} if your program includes \charmpp{}, C++, and C.} \item{{\bf Converse} if your program includes C or C++.} -\item{{\bf sdag} if your program includes structured dagger.} \item{{\bf f90charm} if your program includes f90 Charm interface.} \end{itemize} index ae27446439177a734d4485002eca79ae0e3141f6..18827c3f4c06476d8bdb6fb6d2f1bede0d3176a3 100644 (file) @@ -1,9 +1,4 @@ -\section{Control Point Automatic Tuning Framework} - -\index{Control Point Automatic Tuning Framework} -\label{sec:controlpoint} - - +\experimental{} \charmpp{} currently includes an experimental automatic tuning framework that can dynamically adapt a program at runtime to improve its performance. The program provides a set of tunable knobs that are @@ -15,7 +10,7 @@ the possible program configurations. 
\textbf{Warning: this is still an experimental feature not meant for production applications} -\subsection{Exposing Control Points in a Charm++ Program} +\section{Exposing Control Points in a Charm++ Program} The program should include a header file before any of its \texttt{*.decl.h} files: \begin{alltt} @@ -52,7 +47,7 @@ For a complete list of these functions, see \texttt{cp\_effects.h} in \texttt{ch The program, of course, has to adapt its behavior to use these new control point values. There are two ways for the control point values to change over time. The program can request that a new phase (with its own control point values) be used whenever it wants, or the control point framework can automatically advance to a new phase periodically. The structure of the program will be slightly different in these two cases. Sections \ref{frameworkAdvancesPhases} and \ref{programAdvancesPhases} describe the additional changes to the program that should be made for each case. -\subsubsection{Control Point Framework Advances Phases} +\subsection{Control Point Framework Advances Phases} \label{frameworkAdvancesPhases} The program provides a callback to the control point framework in a manner such as this: @@ -72,7 +67,7 @@ Alternatively, the program can specify that it wants to call \texttt{gotoNextPhase\ \end{alltt} -\subsubsection{Program Advances Phases} +\subsection{Program Advances Phases} \label{programAdvancesPhases} \begin{alltt} @@ -83,11 +78,11 @@ Alternatively, the program can specify that it wants to call \texttt{gotoNextPha -\subsection{Linking With The Control Point Framework} +\section{Linking With The Control Point Framework} The control point tuning framework is now an integral part of the Charm++ runtime system. It does not need to be linked in to an application in any special way. It contains the framework code responsible for recording information about the running program as well as adjusting the control point values.
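+The tuning loop described above (tunable knobs, per-phase measurement,
+advancing to a new phase) can be sketched in plain C++. This is only an
+illustration of the idea, not the \charmpp{} control point API: the class
+and member names here (\texttt{KnobTuner}, \texttt{current}, \texttt{best})
+are invented for the sketch, and only \texttt{gotoNextPhase} echoes a name
+mentioned in the text.

```cpp
#include <cstddef>
#include <limits>
#include <utility>
#include <vector>

// Sketch of the tuning-loop idea: each phase tries one candidate value
// for a tunable knob, records the cost observed during that phase, and
// the cheapest candidate wins.
struct KnobTuner {
    std::vector<int> candidates;   // allowed knob values
    std::vector<double> measured;  // cost observed per candidate (-1 = none)
    std::size_t phase = 0;

    explicit KnobTuner(std::vector<int> c)
        : candidates(std::move(c)), measured(candidates.size(), -1.0) {}

    // Knob value to use during the current phase.
    int current() const { return candidates[phase % candidates.size()]; }

    // Record the cost of the finished phase and advance to the next one.
    void gotoNextPhase(double cost) {
        measured[phase % candidates.size()] = cost;
        ++phase;
    }

    // Best (lowest-cost) candidate measured so far.
    int best() const {
        double bestCost = std::numeric_limits<double>::max();
        int bestVal = candidates.front();
        for (std::size_t i = 0; i < candidates.size(); ++i)
            if (measured[i] >= 0 && measured[i] < bestCost) {
                bestCost = measured[i];
                bestVal = candidates[i];
            }
        return bestVal;
    }
};
```

+The real framework additionally gathers utilization, idle time, and memory
+usage through the trace module instead of taking an explicit cost argument.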
The trace module will enable measurements to be gathered including information about utilization, idle time, and memory usage. -\subsection{Runtime Command Line Arguments} +\section{Runtime Command Line Arguments} The following command line arguments will affect the behavior of the program when running with the control point framework. As this is an experimental framework, these are subject to change. diff --git a/doc/charm++/credits.tex b/doc/charm++/credits.tex new file mode 100644 (file) index 0000000..3a74d38 --- /dev/null @@ -0,0 +1,80 @@ +\begin{itemize} +\item Aaron Becker +\item Abhinav Bhatele +\item Abhishek Gupta +\item Akhil Langer +\item Amit Sharma +\item Anshu Arya +\item Artem Shvorin +\item Arun Singla +\item Attila Gursoy +\item Chao Huang +\item Chao Mei +\item Chee Wai Lee +\item David Kunzman +\item Dmitriy Ofman +\item Edgar Solomonik +\item Ehsan Totoni +\item Emmanuel Jeannot +\item Eric Bohm +\item Eric Shook +\item Esteban Meneses +\item Esteban Pauli +\item Filippo Gioachin +\item Gengbin Zheng +\item Greg Koenig +\item Gunavardhan Kakulapati +\item Hari Govind +\item Harshitha Menon +\item Isaac Dooley +\item Jayant DeSouza +\item Jeffrey Wright +\item Jim Phillips +\item Jonathan Booth +\item Jonathan Lifflander +\item Joshua Unger +\item Josh Yelon +\item Laxmikant Kale +\item Lixia Shi +\item Lukasz Wesolowski +\item Mani Srinivas Potnuru +\item Milind Bhandarkar +\item Minas Charalambides +\item Narain Jagathesan +\item Neelam Saboo +\item Nihit Desai +\item Nikhil Jain +\item Nilesh Choudhury +\item Orion Lawlor +\item Osman Sarood +\item Parthasarathy Ramachandran +\item Phil Miller +\item Pritish Jetley +\item Puneet Narula +\item Rahul Joshi +\item Ralf Gunter +\item Ramkumar Vadali +\item Ramprasad Venkataraman +\item Rashmi Jyothi +\item Robert Blake +\item Robert Brunner +\item Rui Liu +\item Ryan Mokos +\item Sameer Kumar +\item Sameer Paranjpye +\item Sanjeev Krishnan +\item Sayantan Chakravorty +\item Sindhura
Bandhakavi +\item Tarun Agarwal +\item Terry L. Wilmarth +\item Theckla Louchios +\item Tim Hinrichs +\item Timothy Knauff +\item Vikas Mehta +\item Viraj Paropkari +\item Xiang Ni +\item Yanhua Sun +\item Yan Shi +\item Yogesh Mehta +\item Zheng Shao +\end{itemize} index edae82e61c8de22114f3070b8f9e3aaa270bad1c..3ad3e2171bf5be0a1b9de0355afb5502844d6599 100644 (file) @@ -1,8 +1,3 @@ -\subsection{Delegation} - -\index{Delegation} -\label{delegation} - {\em Delegation} is a means by which a library writer can intercept messages sent via a proxy. This is typically used to construct communication libraries. @@ -15,7 +10,7 @@ very small client-side interface to enable delegation, and a more complex manager-side interface to handle the resulting redirected messages. -\subsubsection{Client Interface} +\section{Client Interface} All proxies (Chare, Group, Array, ...) in \charmpp\ support the following delegation routines. @@ -67,7 +62,7 @@ and messages for virtual chares that have not yet been created are never delegated. Instead, these kinds of entry methods execute as usual, even if the proxy is delegated. -\subsubsection{Manager Interface} +\section{Manager Interface} A delegation manager is a group which inherits from \kw{CkDelegateMgr} and overrides certain virtual methods. index 45dfef88b1b7d5b17dec86e5b82359f1e555f656..a907204677ea9d88f4b7bdf60a83d6431e7c1aea 100644 (file) @@ -1,31 +1,4 @@ -\subsection{Entry Methods} - -\label{entry} - -In \charmpp, \index{chare}chares, \index{group}groups and \index{nodegroup} -nodegroups communicate using remote method invocation. These remote entry'' methods may either take marshalled parameters, described in the next section; or special objects called messages. Messages are lower level, more efficient, more flexible, and more difficult to use than parameter marshalling. - -An entry method is always a part of a chare-- -there are no global entry methods in \charmpp{}. 
-Entry methods are declared in the the interface file as: - -\begin{alltt} -entry void \uw{Entry1}(\uw{parameters}); -\end{alltt} - -\uw{Parameters} is either a list of marshalled parameters, -(e.g., int i, double x''), or a message description (e.g., -MyMessage *msg''). See section~\ref{marshalling} and -section~\ref{messages} for details on these types of -parameters. - -Entry methods typically do not return data-- in \CC, they have -return type void''. An entry method with the same name -as its enclosing class is a constructor. Constructors in \CC -have no return type. Finally, sync methods, described below, -may return a message. - -\subsubsection{Entry Method Attributes} +\section{Entry Method Attributes} \label{attributes} @@ -44,82 +17,106 @@ an entry method: \kw{threaded}, \kw{sync}, \kw{exclusive}, \kw{nokeep}, \kw{notrace}, \kw{immediate}, \kw{expedited}, \kw{inline}, \kw{local}, \kw{python}. \begin{description} -\index{threaded}\item[threaded] \index{entry method}entry methods are -entry methods which are run in their own nonpreemptible threads. These +\index{threaded}\item[threaded] \index{entry method}entry methods +run in their own non-preemptible threads. These entry methods may perform blocking operations, such as calls to a -\kw{sync} entry method, or explicitly suspend themselves. - -\index{sync}\item[sync] \index{entry method}entry methods are special in that calls to -sync entry methods are blocking - they do not return control to the caller -until the method is finished executing completely. Sync methods may have -return values; however, they may only return messages. Callers must run in a -thread separate from the runtime scheduler, e.g. a \kw{threaded} entry methods. -Calls expecting a return value will receive it as the return from the proxy invocation: +\kw{sync} entry method, or explicitly suspending themselves. For more +details, refer to section~\ref{threaded}. 
+ +\index{sync}\item[sync] \index{entry method}entry methods are special in that +calls to them are blocking--they do not return control to the caller until the +method finishes execution completely. Sync methods may have return values; +however, they may only return messages. Callers must run in a thread separate +from the runtime scheduler, e.g. a \kw{threaded} entry method. Calls +expecting a return value will receive it as the return from the proxy +invocation: \begin{alltt} -ReturnMsg* m; -m = A[i].foo(a, b, c); + ReturnMsg* m; + m = A[i].foo(a, b, c); \end{alltt} - -\index{exclusive}\item[exclusive] entry methods, which exist only on node groups, are -\index{entry method}entry methods that do not execute while other exclusive -\index{entry method}entry methods of its node group are executing in the same -node. If one exclusive method of a node group is executing on node 0, and -another one is scheduled to run on that same node, the second exclusive method -will wait for the first to finish before it executes. - -\index{nokeep}\item[nokeep] entry methods tells Charm++ that messages passed to -these user entry methods will not be kept by the calls. Charm++ runtime -may be able to adopt optimization for reusing the message memory. - -\index{notrace}\item[notrace] entry methods simply tells Charm++ that calls to -these entry methods should be not traced in trace projections or summary mode. - -\index{immediate}\item[immediate] entry methods are entry functions in which -short messages can be executed in an immediate'' fashion when they are -received either by an interrupt (Network version) or by a communication thread -(in SMP version). Such messages can be useful for implementing -multicasts/reductions as well as data lookup, in which case processing of -critical messages won't be delayed (in the scheduler queue) by entry functions -that could take long time to finish. Immediate messages are only available for -nodegroup entry methods.
Immediate messages are implicitly exclusive'' on each -node, that is one execution of immediate message will not be interrupted by -another. Function \kw{CmiProbeImmediateMsg()} can be called in users code to -probe and process immediate messages periodically. - -\index{expedited}\item[expedited] entry methods are entry functions -that skip Charm++'s priority-based message queue. It is useful for messages that -require prompt processing however in the situation when immediate message does -not apply. Compared with immediate message, it provides a more general solution -that works for all Charm++ objects, i.e. Chare, Group, NodeGroup and Chare -Array. However, expedited message still needs to be scheduled in low level -Converse message queue and be processed in the order of arrival. It may still -suffer from long running entry methods. - -\index{inline}\item[inline] entry methods are entry functions in which the -message is delivered immediately to the recipient if it happens to reside on the -same processor. Therefore, these entry methods need to be reentrant, as they -could be called multiple times recursively. If the recipient reside on another -processor, a regular message is sent, and \kw{inline} has no effect. - -\index{local}\item[local] entry methods are equivalent to function calls: the -entry method is always called immediately. This feature is available only for -Groups and Chare Arrays. The user has to guarantee that the recipient chare -element reside on the same processor, a failure will result in the application -to abort. In contrast will all other entry methods where input parameters are -marshalled into a message, \kw{local} entry methods pass them direcly to the -callee. This implies that the callee can modify the caller data if this is -passed by pointer or reference. Furthermore, the input parameters do not require -to be PUPable. Being these entry methods always called immediately, they are -allowed to have a non-void return type. 
Nevertheless, the returned type must be -a pointer. - -\index{python}\item[python] entry methods are methods which are enabled to be -called from python scripts running as explained in section~\ref{python}. In -order to work, the object owning the method must also be declared with the -keyword \kw{python}. +For more details, refer to section~\ref{sync}. + +\index{exclusive}\item[exclusive] \index{entry method} entry methods should +only exist on NodeGroup objects. One such entry method will not execute while +some other exclusive entry methods belonging to the same NodeGroup object are +executing on the same node. In other words, if one exclusive method of a +NodeGroup object is executing on node N, and another one is scheduled to run on +the same node, the second exclusive method will wait to execute until the first +one finishes. An example can be found in \testrefdir{pingpong}. + +\index{nokeep}\item[nokeep] entry methods only take a message as the argument, +and the memory buffer for this message will be managed by the \charmpp{} +runtime rather than by the user. This means that the user has to guarantee that +the message is not buffered for later usage or freed in user +code. Otherwise, a runtime error will be caused. +Such entry methods enable runtime +optimizations such as reusing the message memory. An example can be found in +\examplerefdir{histogram\_group}. +%these user entry methods will not be kept by the calls. Charm++ runtime +%may be able to adopt optimization for reusing the message memory. + +\index{notrace}\item[notrace] entry methods will not be traced during execution. As a result, they will not be recorded or displayed in Projections for +performance analysis. + +\index{immediate}\item[immediate] entry methods are executed in an +``immediate'' fashion as they skip the message scheduling while other normal +entry methods do not.
Immediate entry methods should only be associated with +NodeGroup objects, although this is not checked during compilation. If the +destination of such an entry method is on the local node, then the method will be +executed in the context of the regular PE regardless of the execution mode of +the \charmpp{} runtime. However, in the SMP mode, if the destination of the method +is on a remote node, then the method will be executed in the context of the +communication thread. +%are entry functions in which +%short messages can be executed in an immediate'' fashion when they are +%received either by an interrupt (Network version) or by a communication thread +%(in SMP version). +Such entry methods can be useful for implementing multicasts/reductions as well +as data lookup when such operations are on the performance critical path. On a +certain \charmpp{} PE, skipping the normal message scheduling prevents the +execution of immediate entry methods from being delayed by entry functions that +could take a long time to finish. Immediate entry methods are implicitly +``exclusive'' on each node, meaning that one execution of an immediate message +will not be interrupted by another. The function \kw{CmiProbeImmediateMsg()} can be +called in user code to probe and process immediate messages periodically. An +example, ``immediatering'', can be found in \testrefdir{megatest}. + +\index{expedited}\item[expedited] entry methods skip the priority-based message +queue in the \charmpp{} runtime. This is useful for messages that require prompt +processing when adding the immediate attribute to the message does not apply. +Compared with the immediate attribute, the expedited attribute provides a more +general solution that works for all types of \charmpp{} objects, i.e. Chare, +Group, NodeGroup and Chare Array. However, expedited entry methods will still +be scheduled in the lower-level Converse message queue, and be processed in the +order of message arrival.
Therefore, they may still suffer from delays caused +by long-running entry methods. An example can be found in +\examplerefdir{satisfiability}. + +\index{inline}\item[inline] entry methods will be immediately invoked if the +message recipient happens to be on the same PE. These entry methods need to be +re-entrant as they could be called multiple times recursively. If the recipient +resides on a non-local PE, a regular message is sent, and \kw{inline} has no +effect. An example, ``inlineem'', can be found in \testrefdir{megatest}. + +\index{local}\item[local] entry methods are equivalent to normal function +calls: the entry method is always executed immediately. This feature is +available only for Group objects and Chare Array objects. The user has to +guarantee that the recipient chare element resides on the same PE. Otherwise, +the application will abort on a failure. If the \kw{local} entry method uses +parameter marshalling, instead of marshalling input parameters into a message, +it will pass them directly to the callee. This implies that the callee can +modify the caller data if method parameters are passed by pointer or reference. +Furthermore, input parameters are not required to be PUPable. Considering that +these entry methods always execute immediately, they are allowed to have a +non-void return value. Nevertheless, the return type of the method must be a +pointer. An example can be found in \examplerefdir{hello/local}. + +\index{python}\item[python] entry methods are enabled to be +called from python scripts as explained in chapter~\ref{python}. Note that the object owning the method must also be declared with the +keyword \kw{python}. Refer to chapter~\ref{python} for more details. \index{reductiontarget}\item[reductiontarget] entry methods may be used as the target of reductions, despite not taking CkReductionMsg as an argument. -See~\ref{typed_reductions}. +See section~\ref{reductions} for more details.
\end{description} diff --git a/doc/charm++/further.tex b/doc/charm++/further.tex deleted file mode 100644 (file) index 7f720d5..0000000 +++ /dev/null @@ -1,60 +0,0 @@ -\section{Further Information} - -\subsection{Related Publications} - -\label{publications} - -For starters, see the publications, reports, and manuals -on the Parallel Programming Laboratory website: \texttt{http://charm.cs.uiuc.edu/}. - -\subsection{Associated Tools and Libraries} - -Several tools and libraries are provided for \charmpp{}. \projections{} -is an automatic performance analysis tool which provides -the user with information about the parallel behavior of \charmpp\ programs. -The purpose of implementing \charmpp{} standard -libraries is to reduce the time needed to develop parallel -applications with the help of a set of efficient and re-usable modules. -Most of the libraries have been described in a separate manual. - -\subsubsection{\projections} - -\projections{} is a performance visualization and feedback tool. The system has -a much more refined understanding of user computation than is possible in -traditional tools. - -\projections{} displays information about the request for creation and the -actual creation of tasks in \charmpp\ programs. Projections also provides the -function of post-mortem clock synchronization. Additionally, it can also -automatically partition the execution of the running program into logically -separate units, and automatically analyzes each individual partition. - -Future versions will be able to provide recommendations/suggestions for -improving performance as well. - -\subsection{Contacts} - -\label{Distribution} - -While we can promise neither bug-free software nor immediate solutions -to all problems, \charmpp\ is a stable system and it is our intention to -keep it as up-to-date and usable as our resources will allow -by responding quickly to questions and bug reports. 
To that -end, there are mechanisms in place for contacting Charm users -and developers. - -Our software is made available for research use and evaluation. -For the latest software distribution, further information about -\converse{}/\charmpp\ and information on how to contact the Parallel -Programming laboratory, see our website at \texttt{http://charm.cs.uiuc.edu/}. - -If retrieval of a publication via these channels is not possible, -please send electronic mail to \texttt{kale@cs.uiuc.edu} or postal mail to: - -\begin{alltt} - Laxmikant Kale - Department of Computer Science - University of Illinois - 201 N. Goodwin Ave. - Urbana, IL 61801 -\end{alltt} index 0e131a81d03fb4addace5ea211ee1307199f8a9a..7ecee1828c389548e2bede33887ea757e08c6b81 100644 (file) @@ -1,12 +1,21 @@ -\subsection{Futures} - +\section{Futures} \label{futures} -Similar to Multilisp and other functional programming languages, \charmpp\ provides the abstraction of {\em futures}. In simple terms, a {\em future} is a contract with the runtime system to evaluate an expression asynchronously with the calling program. This mechanism promotes the evaluation of expressions in parallel as several threads concurrently evaluate the futures created by a program. +Similar to Multilisp and other functional programming languages, \charmpp\ +provides the abstraction of {\em futures}. In simple terms, a {\em future} is a +contract with the runtime system to evaluate an expression asynchronously with +the calling program. This mechanism promotes the evaluation of expressions in +parallel as several threads concurrently evaluate the futures created by a +program. -In some ways, a future resembles lazy evaluation. Each future is assigned to a particular thread (or to a chare, in \charmpp\ ) and its value will be eventually delivered to the calling program. Once the future is created, a reference is returned immediately. If the value is needed, however, the calling program blocks until the value is available. 
+In some ways, a future resembles lazy evaluation. Each future is assigned to a +particular thread (or to a chare, in \charmpp) and, eventually, its value is +delivered to the calling program. Once a future is created, a {\em +reference} is returned immediately. However, if the {\em value} calculated by the future +is needed, the calling program blocks until the value is available. -\charmpp\ provides all the necessary infrastructure to use futures by means of the following functions: +\charmpp\ provides all the necessary infrastructure to use futures by means of +the following functions: \begin{alltt} CkFuture CkCreateFuture(void) @@ -16,30 +25,31 @@ In some ways, a future resembles lazy evaluation. Each future is assigned to a p void CkSendToFuture(CkFuture fut, void *msg) \end{alltt} -To illustrate the use of all these functions, a Fibonacci example in \charmpp\ using futures in presented below: +To illustrate the use of all these functions, a Fibonacci example in \charmpp\ +using futures is presented below: \begin{alltt} chare fib \{ - entry fib(int amIroot, int n, CkFuture f); - entry [threaded] void run(int amIroot, int n, CkFuture f); + entry fib(bool amIroot, int n, CkFuture f); + entry [threaded] void run(bool amIroot, int n, CkFuture f); \}; \end{alltt} \begin{alltt} -void fib::run(int AmIRoot, int n, CkFuture f) \{ - if (n< THRESHOLD) - result =seqFib(n); +void fib::run(bool amIRoot, int n, CkFuture f) \{ + if (n < THRESHOLD) + result = seqFib(n); else \{ CkFuture f1 = CkCreateFuture(); CkFuture f2 = CkCreateFuture(); - CProxy_fib::ckNew(0,n-1, f1); - CProxy_fib::ckNew(0,n-2, f2); + CProxy_fib::ckNew(0, n-1, f1); + CProxy_fib::ckNew(0, n-2, f2); ValueMsg * m1 = (ValueMsg *) CkWaitFuture(f1); ValueMsg * m2 = (ValueMsg *) CkWaitFuture(f2); result = m1->value + m2->value; delete m1; delete m2; \} - if (AmIRoot) \{ + if (amIRoot) \{ CkPrintf("The requested Fibonacci number is : \%d\\n", result); CkExit(); \} else \{ @@ -50,9 +60,21 @@ void fib::run(int 
AmIRoot, int n, CkFuture f) \{ \} \end{alltt} -The constant {\em THRESHOLD} sets a limit value for computing the Fibonacci number with futures or just with the sequential procedure. Given value {\em n}, the program creates two futures using {\em CkCreateFuture}. Those futures are used to create two new chares that will carry on the computation. Next, the program blocks until the two values of the Fibonacci's recurrence have been evaluated. Function {\em CkWaitFuture} is used for that purpose. Finally, the program checks whether it is the root of the evaluation or not. The very first chare created with a future is going to be the root. If a chare is not a root, it must indicate its future has finished computing the value. {\em CkSendToFuture} is meant to return the value for the current future. +The constant {\em THRESHOLD} sets a limit value for computing the Fibonacci +number with futures or just with the sequential procedure. Given a value {\em n}, +the program creates two futures using {\em CkCreateFuture}. Those futures are +used to create two new chares that will carry out the computation. Next, the +program blocks until the two component values of the recurrence have been +evaluated. Function {\em CkWaitFuture} is used for that purpose. Finally, the +program checks whether or not it is the root of the recursive evaluation. The very first +chare created with a future is the root. If a chare is not the root, +it must indicate that its future has finished computing the value. {\em +CkSendToFuture} is used to return the value for the current future. -Other functions complete the API for futures. {\em CkReleaseFuture} destroys a future. {\em CkProbeFuture} test if the future has already finished computing the value of the expression. +Other functions complete the API for futures. {\em CkReleaseFuture} destroys a +future. {\em CkProbeFuture} tests whether the future has already finished computing +the value of the expression.
-The \converse\ version of future functions can be found in the \converse\ \htmladdnormallink{manual}{http://charm.cs.illinois.edu/manuals/html/convext/manual.html}. +The \converse\ version of future functions can be found in the +\htmladdnormallink{\converse{} manual}{http://charm.cs.illinois.edu/manuals/html/convext/manual.html}. index 85e62cc2ba67ac6406338f729ae3293b595d5e30..baa6d0ede3d88a6f8f76b2dd9bac47ac006ef192 100644 (file) -\subsection{Group Objects} - +So far, we have discussed chares separately from the underlying +hardware resources to which they are mapped. However, for writing +lower-level libraries or optimizing application performance it is +sometimes useful to create chare collections where a single chare is +mapped per specified resource used in the run. The +\kw{group} \footnote{Originally called {\em Branch Office Chare} or + {\em Branched Chare}} and \kw{node group} constructs provide this +facility by creating a collection of chares with \index{chare}a single +chare (or {\sl branch}) on each PE (in the case of groups) or process +(for node groups). + +\section{Group Objects} \label{sec:group} -A \kw{group}\footnote{Originally called {\em Branch Office Chare} or -{\em Branched Chare}} \index{group}is a collection of chares where -there exists \index{chare}one chare (or {\sl branch}) on each -processor. Each branch has its own data members. Groups have -a definition syntax similar to normal chares, -and they have to inherit from the system defined class \kw{CBase}\_\uw{ClassName}. -\footnote{A deprecated older syntax allow them to inherit directly from the -system-defined class \kw{Group}}. +Groups have a definition syntax similar to normal chares, and they +have to inherit from the system-defined class +\kw{CBase}\_\uw{ClassName}, where \uw{ClassName} is the name of the +group's \CC{} class \footnote{Older, deprecated syntax allows groups +to inherit directly from the system-defined class \kw{Group}}. 
+ +\subsection{Group Definition} -In the interface file, we declare +In the interface ({\tt .ci}) file, we declare \begin{alltt} - group GroupType \{ - // Interface specifications as for normal chares - \}; +group Foo \{ + // Interface specifications as for normal chares + + // For instance, the constructor ... + entry Foo(\uw{parameters1}); + + // ... and an entry method + entry void someEntryMethod(\uw{parameters2}); +\}; \end{alltt} -In the \texttt{.h} file, we define \uw{GroupType} as follows: +The definition of the {\tt Foo} class is given in the \texttt{.h} file, as follows: \begin{alltt} - class GroupType : public CBase\_GroupType \{ +class Foo : public CBase\_Foo \{ // Data and member functions as in C++ // Entry functions as for normal chares - \}; -\end{alltt} -A group is identified by a globally unique group identifier, whose type is -\kw{CkGroupID}. This identifier is common to all of the group's branches and -can be obtained from the variable \kw{thisgroup}, which is a public local -variable of the \kw{Group} superclass. For groups, \kw{thishandle} is the -handle of the particular branch in which the function is executing: it is a -normal chare handle. - -Groups can be used to implement data-parallel operations easily. In addition -to sending messages to a particular branch of a group, one can broadcast -messages to all branches of a group. There can be many instances corresponding -to a group type. Each instance has a different group handle, and its own set -of branches. + public: + Foo(\uw{parameters1}); + void someEntryMethod(\uw{parameters2}); +\}; +\end{alltt} -\subsubsection{Group Creation} +\subsection{Group Creation} +\label{sec:groups/creation} -Given a \texttt{.ci} file as follows: +Groups are created in a manner similar to chares and chare arrays, i.e. +through \kw{ckNew}. 
Given the declarations and definitions of group {\tt Foo} +from above, we can create a group in the following manner: \begin{alltt} -group G \{ - entry G(\uw{parameters1}); - entry void someEntry(\uw{parameters2}); -\}; +CkGroupID fooGroupID = CProxy_Foo::ckNew(\uw{parameters1}); \end{alltt} -and the following \texttt{.h} file: +In the above, \kw{ckNew} returns an object of type \kw{CkGroupID}, +which is the globally unique identifier of the corresponding instance +of the group. This identifier is common to all of the group's +branches and can be obtained in the group's methods by accessing the variable +\kw{thisgroup}, which is a public data member of the \kw{Group} +superclass. + +A group can also be identified through its proxy, which can be obtained in one of three ways: +(a) as the inherited {\tt thisProxy} data member of the class; (b) when creating a group, +you can obtain a proxy to it from a call to \kw{ckNew} +as shown below: \begin{alltt} -class G : public CBase\_G \{ - public: - G(\uw{parameters1}); - void someEntry(\uw{parameters2}); -\}; +CProxy_Foo fooProxy = CProxy_Foo::ckNew(\uw{parameters1}); \end{alltt} -we can create a \index{group}group in a manner similar to a regular -\index{chare}chare. +or (c) by using a group identifier to create a proxy, as shown below: \begin{alltt} -CProxy_G groupProxy = CProxy_G::ckNew(\uw{parameters1}); -or -CkGroupID groupId = CProxy_G::ckNew(\uw{parameters1}); -CProxy_G groupProxy(groupId); +// Assume that we have obtained `fooGroupID' as the CkGroupID for the group + +// Obtain a proxy to the group from its group ID +CProxy_Foo anotherFooProxy = CProxy_Foo(fooGroupID); \end{alltt} It is possible to specify the dependence of group creations using -\uw{CkEntryOptions}, for example, creation of group B on each processor depends -on group A being created on that processor. +\uw{CkEntryOptions}.
For example, in the following code, the creation of group +{\tt GroupB} on each PE depends on the creation of {\tt GroupA} on that PE. \begin{alltt} -// create group A -CkGroupID groupAId = CProxy_GroupA::ckNew(\uw{parameters1}); +// Create GroupA +CkGroupID groupAID = CProxy_GroupA::ckNew(\uw{parameters1}); -// create group B which depends on group A being created +// Create GroupB. However, for each PE, do this only +// after GroupA has been created on it + +// Specify the dependency through a `CkEntryOptions' object CkEntryOptions opts; opts.setGroupDepID(groupAID); -CkGroupID groupBId = CProxy_GroupB::ckNew(\uw{parameters2}); + +// The last argument to `ckNew' is the `CkEntryOptions' object from above +CkGroupID groupBID = CProxy_GroupB::ckNew(\uw{parameters2}, opts); \end{alltt} -\subsubsection{Method Invocation on Groups} +%For groups, \kw{thishandle} is the +%handle of the particular branch in which the function is executing: it is a +%normal chare handle. + +%Groups can be used to implement data-parallel operations easily. In addition +%to sending messages to a particular branch of a group, one can broadcast +%messages to all branches of a group. +Note that there can be several instances of each group type. +In such a case, each instance has a unique group identifier, and its own set +of branches. -Before sending a message to a \index{group}group via an entry -method, we need to get a proxy of that group. +\subsection{Method Invocation on Groups} -A message may be sent to a particular \index{branch}branch of group using the -notation: +An asynchronous entry method can be invoked on a particular branch of a +group through a proxy of that group.
If we have a group with a proxy +{\tt fooProxy} and we wish to invoke entry method {\tt someEntryMethod} on +that branch of the group which resides on PE {\tt somePE}, we would accomplish +this with the following syntax: \begin{alltt} - groupProxy[Processor].EntryMethod(\uw{parameters}); +fooProxy[somePE].someEntryMethod(\uw{parameters}); \end{alltt} -This sends the given parameters to the \index{branch}branch of -the group referred to by \uw{groupProxy} which is on processor number -\uw{Processor} at the entry method \uw{EntryMethod}, which must be a valid -entry method of that group type. This call is asynchronous and non-blocking; it -returns immediately after sending the message. - +%This sends the given parameters to the \index{branch}branch of +%the group referred to by \uw{groupProxy} which is on processor number +%\uw{Processor} at the entry method \uw{EntryMethod}, which must be a valid +%entry method of that group type. +This call is asynchronous and non-blocking; it returns immediately after sending the message. A message may be broadcast \index{broadcast} to all branches of a group -(i.e., to all processors) using the notation : +(i.e., to all PEs) using the notation: \begin{alltt} - groupProxy.EntryMethod(\uw{parameters}); +fooProxy.anotherEntryMethod(\uw{parameters}); \end{alltt} -This sends the given parameters to all branches of the group at -the entry method \uw{EntryMethod}, which must be a valid entry method of that -group type. This call is asynchronous and non-blocking; it returns immediately +This invokes entry method \uw{anotherEntryMethod} with the given \uw{parameters} on +all branches of the group. This call is also asynchronous and non-blocking, and it, too, returns immediately after sending the message. - -Sequential objects, chares and other groups can gain access to the local -(i.e., on their processor) group object using: +Recall that each PE hosts a branch of every instantiated group.
+Sequential objects, chares and other groups can gain access to this {\em PE-local} +branch using \kw{ckLocalBranch()}: \begin{alltt} GroupType *g=groupProxy.ckLocalBranch(); @@ -129,23 +154,26 @@ referred to by the proxy \uw{groupProxy}. Once a proxy to the local branch of a group is obtained, that branch can be accessed as a regular \CC\ object. Its public methods can return values, and its public data is readily accessible. - -Thus a dynamically created \index{chare}chare can call a public method of a -group without needing to know which processor it actually resides: the method -executes in the local \index{branch}branch of the group. - -One very nice use of Groups is to reduce the number of messages sent between -processors by collecting the data from all the chares on a single processor -before sending that data to the mainchare. To do this, create basic chares to -break up the work of a problem. Also, create a group. When a particular chare +%Thus a dynamically created \index{chare}chare can invoke a public method of a +%group without knowing the PE on which it actually resides. +%the method +%executes in the local \index{branch}branch of the group. + +Let us end with an example use-case for groups. +%One very nice use of Groups is to reduce the number of messages sent between +%processors by collecting the data from all the chares on a single processor +Suppose that we have a task-parallel program in which we dynamically spawn +new chares. Furthermore, assume that each one of these chares has some data +to send to the mainchare. Instead of creating a separate message for each +chare's data, we create a group. When a particular chare finishes its work, it reports its findings to the local branch of the group. -When all the chares on one processor are complete, the local branch of the -group can then report to the main chare. This reduces the number of messages -sent to main from the number of chares created to the number of processors. 
- - - - - - +When all the chares on a PE have finished their work, the local branch +can send a single message to the main chare. This reduces the number of messages +sent to the mainchare from the number of chares created to the number of processors. + +For a more concrete example on how to use groups, please refer to +\examplerefdir{histogram\_group}. It presents a parallel +histogramming operation in which chare array elements funnel their bin counts +through a group, instead of contributing directly to a reduction across all +chares. diff --git a/doc/charm++/helloworld.tex b/doc/charm++/helloworld.tex new file mode 100644 (file) index 0000000..c53cf20 --- /dev/null @@ -0,0 +1,108 @@ +\zap{ +A simple \charmpp\ program is given below: + +\begin{alltt} +/////////////////////////////////////// +// File: pgm.ci + +mainmodule Hello \{ + readonly CProxy_HelloMain mainProxy; + mainchare HelloMain \{ + entry HelloMain(); // implicit CkArgMsg * as argument + entry void PrintDone(void); + \}; + group HelloGroup \{ + entry HelloGroup(void); + \}; +\}; + +//////////////////////////////////////// +// File: pgm.h +#include "Hello.decl.h" // Note: not pgm.decl.h + +class HelloMain : public CBase_HelloMain \{ + public: + HelloMain(CkArgMsg *); + void PrintDone(void); + private: + int count; +\}; + +class HelloGroup: public Group \{ + public: + HelloGroup(void); +\}; + +///////////////////////////////////////// +// File: pgm.C +#include "pgm.h" + +CProxy_HelloMain mainProxy; + +HelloMain::HelloMain(CkArgMsg *msg) \{ + delete msg; + count = 0; + mainProxy = thisProxy; + CProxy_HelloGroup::ckNew(); // Create a new "HelloGroup" +\} + +void HelloMain::PrintDone(void) \{ + count++; + if (count == CkNumPes()) \{ // Wait for all group members to finish the printf + CkExit(); + \} +\} + +HelloGroup::HelloGroup(void) \{ + ckout << "Hello World from processor " << CkMyPe() << endl; + mainProxy.PrintDone(); +\} + +#include "Hello.def.h" // Include the Charm++ object implementations + 
+///////////////////////////////////////// +// File: Makefile + +pgm: pgm.ci pgm.h pgm.C + charmc -c pgm.ci + charmc -c pgm.C + charmc -o pgm pgm.o -language charm++ + +\end{alltt} + +\uw{HelloMain} is designated a \kw{mainchare}. Thus the Charm RTS starts +execution of this program by creating an instance of \uw{HelloMain} on +processor 0. The HelloMain constructor creates a chare group +\uw{HelloGroup}, and stores a handle to itself and returns. The call to +create the group returns immediately after directing Charm RTS to perform +the actual creation and invocation. Shortly after, the Charm RTS will +create an object of type \uw{HelloGroup} on each processor, and call its +constructor. The constructor will then print ``Hello World...'' and then +call the \uw{PrintDone} method of \uw{HelloMain}. The \uw{PrintDone} method +calls \kw{CkExit} after all group members have called it (i.e., they have +finished printing ``Hello World...''), and the \charmpp program exits. + +\subsection{Functions in the ``decl.h'' and ``def.h'' files} + +The \texttt{decl.h} file provides declarations for the proxy classes of the +concurrent objects declared in the ``.ci'' file (from which the \texttt{decl.h} +file is generated). So the \uw{Hello.decl.h} file will have the declaration of +the class CProxy\_HelloMain. Similarly it will also have the declaration for +the HelloGroup class. + +This class will have functions to create new instances of the chares and +groups, like the function \kw{ckNew}. For \uw{HelloGroup} this function creates +an instance of the class \uw{HelloGroup} on all the processors. + +The proxy class also has functions corresponding to the entry methods defined +in the ``.ci'' file. In the above program the method \uw{PrintDone} is declared in +\uw{CProxy\_HelloMain} (proxy class for \uw{HelloMain}). + +The proxy class also provides static registration functions used by the +\charmpp{} runtime.
The \texttt{def.h} file has a registration function +(\uw{\_\_registerHello} in the above program) which calls all the registration +functions corresponding to the readonly variables and entry methods declared in +the module. +} % end zap + + diff --git a/doc/charm++/hetero.tex b/doc/charm++/hetero.tex new file mode 100644 (file) index 0000000..f16f5ab --- /dev/null @@ -0,0 +1,9 @@ +\section{Annotating Entry Methods for Acceleration} +\subsection{\kw{accel}} +\subsection{\kw{triggered}} +\subsection{\kw{splittable}} + +\section{AccelManager} + +\section{Balancing Load Across Devices} + diff --git a/doc/charm++/history.tex b/doc/charm++/history.tex new file mode 100644 (file) index 0000000..8e1158d --- /dev/null @@ -0,0 +1,39 @@ +The {\sc Charm} software was developed as a group effort of the Parallel +Programming Laboratory at the University of Illinois at Urbana-Champaign. +Researchers at the Parallel Programming Laboratory keep \charmpp\ updated for +the new machines, new programming paradigms, and for supporting and simplifying +development of emerging applications for parallel processing. The earliest +prototype, Chare Kernel(1.0), was developed in the late eighties. It consisted +only of basic remote method invocation constructs available as a library. The +second prototype, Chare Kernel(2.0), was a complete re-write with major design +changes. This included C language extensions to denote Chares, messages and +asynchronous remote method invocation. {\sc Charm}(3.0) improved on this +syntax, and contained important features such as information sharing +abstractions, and chare groups (called Branch Office Chares). {\sc Charm}(4.0) +included \charmpp\ and was released in fall 1993. \charmpp\ in its initial +version consisted of syntactic changes to \CC\ and employed a special +translator that parsed the entire \CC\ code while translating the syntactic +extensions.
{\sc Charm}(4.5) had a major change that resulted from a +significant shift in the research agenda of the Parallel Programming +Laboratory. The message-driven runtime system code of \charmpp\ was +separated from the actual language implementation, resulting in an +interoperable parallel runtime system called {\sc +Converse}. The \charmpp\ runtime system was +retargeted on top of {\sc Converse}, and popular programming paradigms such as +MPI and PVM were also implemented on {\sc Converse}. This allowed +interoperability between these paradigms and \charmpp. This release also +eliminated the full-fledged \charmpp\ translator by replacing syntactic +extensions to \CC\ with \CC\ macros, and instead contained a small language and +a translator for describing the interfaces of \charmpp\ entities to the runtime +system. This version of \charmpp, which in earlier releases was known as {\em +Interface Translator \charmpp}, is the default version of \charmpp\ now, and +hence is referred to simply as {\bf \charmpp}. In early 1999, the runtime system of +\charmpp\ +%was formally named the Charm Kernel, and +was rewritten in \CC. +Several new features were added. The interface language underwent significant +changes, and the macros that replaced the syntactic extensions in the original +\charmpp\ were replaced by natural \CC\ constructs. Late 1999 and early +2000 saw several additions to \charmpp{}, including a load balancing +framework and migratable objects. + index d74cd1d27582178e7d56d80f3f1262eb6bf04f60..385b7df7344da0220a6b36976f8f3b5ea0af8f72 100644 (file) @@ -1,13 +1,9 @@ -\section{Inheritance and Templates in Charm++} - -\label{inheritance and templates} - \charmpp\ supports inheritance among \charmpp\ objects such as chares, groups, and messages. This, along with facilities for generic programming using \CC\ style templates for \charmpp\ objects, is a major enhancement over the previous versions of \charmpp.
-\subsection{Chare Inheritance} +\section{Chare Inheritance} \index{inheritance} @@ -90,7 +86,7 @@ Pure virtual entry methods also require no special description in the interface file. -\subsection{Inheritance for Messages} +\section{Inheritance for Messages} \index{message inheritance} @@ -128,7 +124,7 @@ methods, and virtual base classes via the PUP::able framework. %method expecting a base class message. -\subsection{Generic Programming Using Templates} +\section{Generic Programming Using Templates} \index{templates} diff --git a/doc/charm++/install.tex b/doc/charm++/install.tex new file mode 100644 (file) index 0000000..65de697 --- /dev/null @@ -0,0 +1,197 @@ +\charmpp{} can be installed either from the source code or using a precompiled +binary package. Building from the source code provides more flexibility, since one +can choose the options as desired. However, a precompiled binary may be slightly +easier to get running. +\section{Downloading \charmpp{}} + +\charmpp{} can be downloaded using one of the following methods: + +\begin{itemize} +\item From the \charmpp{} website -- The current stable version (source code and +binaries) can be downloaded from our website at {\em http://charm.cs.illinois.edu/software}. +\item From the source archive -- The latest development version of \charmpp{} can be downloaded +from our source archive using {\em git clone git://charm.cs.illinois.edu/charm.git}. +\end{itemize} + +If you download the source code from the website, you will have to unpack it +using a tool capable of extracting gzip'd tar files, such as tar (on Unix) +or WinZIP (under Windows). \charmpp{} will be extracted to a directory +called ``charm''. + +\section{Installation} + +A typical prototype command for building \charmpp{} from the source code is: +\vspace{5pt}\\ +{\bf ./build $<$TARGET$>$ $<$TARGET ARCHITECTURE$>$ [OPTIONS]}, where + +\begin{description} +\item [TARGET] is the framework one wants to build, such as {\em charm++} or {\em +AMPI}.
+\item [TARGET ARCHITECTURE] is the machine architecture one wants to build for, +such as {\em net-linux-x86\_64}, {\em bluegenep} etc. +\item [OPTIONS] are additional options to the build process, e.g. {\em smp} is +used to build a shared memory version, {\em -j8} is given to build in parallel +etc. +\end{description} + +In Table~\ref{tab:buildlist}, a list of build commands is provided for some of the commonly +used systems. Note that, in general, options such as {\em smp}, {\em +--with-production}, compiler specifiers, etc.\ can be used with all targets. It is +advisable to build with {\em --with-production} to obtain the best performance. +If one desires to perform trace collection (for Projections), {\em +--enable-tracing --enable-tracing-commthread} should also be passed to the build command. + +Details on all the available alternatives for each of the above-mentioned +parameters can be found by invoking {\em ./build --help}. One can also go through the +build process in an interactive manner. Run {\em ./build}, and it will present +a few queries to select appropriate choices for the desired build.
+ + +\begin{table}[ht] +\begin{tabular}{|p{6cm}|p{9cm}|} +\hline +Net with 32 bit Linux & ./build charm++ net-linux --with-production -j8 +\\\hline +Multicore 64 bit Linux & ./build charm++ multicore-linux64 --with-production -j8 +\\\hline +Net with 64 bit Linux & ./build charm++ net-linux-x86\_64 --with-production -j8 +\\\hline +Net with 64 bit Linux (intel compilers) & ./build charm++ net-linux-x86\_64 icc --with-production -j8 +\\\hline +Net with 64 bit Linux (shared memory) & ./build charm++ net-linux-x86\_64 smp --with-production -j8 +\\\hline +Net with 64 bit Linux (checkpoint restart based fault tolerance) & ./build charm++ net-linux-x86\_64 syncft --with-production -j8 +\\\hline +MPI with 64 bit Linux & ./build charm++ mpi-linux-x86\_64 --with-production -j8 +\\\hline +MPI with 64 bit Linux (shared memory) & ./build charm++ mpi-linux-x86\_64 smp --with-production -j8 +\\\hline +MPI with 64 bit Linux (mpicxx wrappers) & ./build charm++ mpi-linux-x86\_64 mpicxx --with-production -j8 +\\\hline +IBVERBS with 64 bit Linux & ./build charm++ net-linux-x86\_64 ibverbs --with-production -j8 +\\\hline +Net with 32 bit Windows & ./build charm++ net-win32 --with-production -j8 +\\\hline +Net with 64 bit Windows & ./build charm++ net-win64 --with-production -j8 +\\\hline +MPI with 64 bit Windows & ./build charm++ mpi-win64 --with-production -j8 +\\\hline +Net with 64 bit Mac & ./build charm++ net-darwin-x86\_64 --with-production -j8 +\\\hline +Blue Gene/L & ./build charm++ bluegenel xlc --with-production -j8 +\\\hline +Blue Gene/P & ./build charm++ bluegenep xlc --with-production -j8 +\\\hline +Blue Gene/Q & ./build charm++ pami-bluegeneq xlc --with-production -j8 +\\\hline +Cray XT3 & ./build charm++ mpi-crayxt3 --with-production -j8 +\\\hline +Cray XT5 & ./build charm++ mpi-crayxt --with-production -j8 +\\\hline +Cray XE6 & ./build charm++ gemini\_gni-crayxe --with-production -j8 +\\\hline +\end{tabular} +\caption{Build command for some common cases} 
+\label{tab:buildlist} +\end{table} + +As mentioned earlier, one can also install \charmpp{} from a precompiled binary, +in a manner similar to installing any common software package. + + +The main directories in a \charmpp{} installation are: + +\begin{description} +\item[\kw{charm/bin}] +Executables, such as charmc and charmrun, +used by \charmpp{}. + +\item[\kw{charm/doc}] +Documentation for \charmpp{}, such as this +document. Distributed as LaTeX source code; HTML and PDF versions +can be built or downloaded from our web site. + +\item[\kw{charm/include}] +The \charmpp{} C++ and Fortran user include files (.h). + +\item[\kw{charm/lib}] +The libraries (.a) that comprise \charmpp{}. + +\item[\kw{charm/pgms}] +Example \charmpp{} programs. + +\item[\kw{charm/src}] +Source code for \charmpp{} itself. + +\item[\kw{charm/tmp}] +Directory where \charmpp{} is built. + +\item[\kw{charm/tools}] +Visualization tools for \charmpp{} programs. + +\item[\kw{charm/tests}] +Test \charmpp{} programs used by autobuild. + +\end{description} + +\section{Security Issues} + +On most computers, \charmpp{} programs are simple binaries, and they pose +no more security issues than any other program would. The only exception +is the network version {\tt net-*}, which has the following issues. + +The network versions utilize many Unix processes communicating with +each other via UDP. Only a simple attempt is currently made to filter out +unauthorized packets. Therefore, it is theoretically possible to +mount a security attack by sending UDP packets to an executing +\converse{} or \charmpp{} program's sockets. + +The second security issue with networked programs arises from the fact that we, the \charmpp{} developers, need evidence +that our tools are being used. (Such evidence is useful in convincing +funding agencies to continue to support our work.)
To this end, we +have inserted code in the network {\tt charmrun} program (described +later) to notify us that our software is being used. +This notification is a single {\tt UDP} packet sent by {\tt charmrun} +to {\tt charm.cs.illinois.edu}. This data is put +to one use only: it is gathered into tables recording the internet +domains in which our software is being used, the number of individuals +at each internet domain, and the frequency with which it is used. + +We recognize that some users may have objections to our notification +code. Therefore, we have provided a second copy of the {\tt +charmrun} program with the notification code removed. If you look +within the {\tt bin} directory, you will find these programs: + +\begin{alltt} + \$ cd charm/bin
+    \$ ls charmrun* + charmrun + charmrun-notify + charmrun-silent +\end{alltt} + +The program {\tt charmrun-silent} has the notification code removed. To +permanently deactivate notification, you may use the version without the +notification code: + +\begin{alltt} + \$ cd charm/bin
+    \$ cp charmrun-silent charmrun +\end{alltt} + +The only versions of \charmpp{} that ever notify us are +the network versions. + + +\section{Reducing disk usage} + +The charm directory contains a collection of example programs and +test programs. These may be deleted without any ill effects. You may +also {\tt strip} all the binaries in {\tt charm/bin}. + + + + + index f6c5f343513ab86864aa9b02845ba7b22a8eef13..2653b711d4ca2a7c8eabd50c1afced3a88a17ac8 100644 (file) -\section{Introduction} - -\charmpp\ is an explicitly parallel language based on \CC\ with a runtime -library for supporting parallel computation called the Charm kernel. It -provides a clear separation between sequential and parallel objects. The -execution model of \charmpp\ is message driven, thus helping one write programs -that are latency-tolerant. \charmpp\ supports dynamic load balancing while -creating new work as well as periodically, based on object migration. Several -dynamic load balancing strategies are provided. \charmpp\ supports both -irregular as well as regular, data-parallel applications. It is built on top of the -{\sc Converse} interoperable runtime system for parallel programming. - -Currently the parallel platforms supported by \charmpp\ are the BlueGene/L,BlueGene/P, PSC -Lemieux, IBM SP, SGI Origin2000, Cray XT3/4, Cray X1, Cray T3E, a single workstation or a -network of workstations from Sun Microsystems (Solaris), IBM RS-6000 (AIX) SGI -(IRIX 5.3 or 6.4), HP (HP-UX), Intel x86 (Linux, Windows 98/2000/XP), Intel -IA64, Intel x86\_64, multicore x86 and x86\_64, and Apple Mac. The communication protocols and infrastructures supported -by \charmpp\ are UDP, TCP, Myrinet, Infiniband, Quadrics Elan, Shmem, MPI and -NCSA VMI. \charmpp\ programs can run without changing the source on all these -platforms.
Please see the \charmpp{}/\converse{} Installation and -Usage \htmladdnormallink{Manual}{http://charm.cs.uiuc.edu/manuals/html/install/manual.html} -for details about installing, compiling and running \charmpp\ programs. - -\subsection{Overview} - -\charmpp\ is an object oriented parallel language. What sets \charmpp\ apart -from traditional programming models such as message passing and shared variable -programming is that the execution model of \charmpp\ is message-driven. -Therefore, computations in \charmpp\ are triggered based on arrival of -associated messages. These computations in turn can fire off more messages to -other (possibly remote) processors that trigger more computations on those -processors. - -At the heart of any \charmpp\ program is a scheduler that repetitively chooses -a message from the available pool of messages, and executes the computations -associated with that message. - -The programmer-visible entities in a \charmpp\ program are: - -\begin{itemize} -\item Concurrent Objects : called {\em chares}\footnote{ - Chare (pronounced {\bf ch\"ar}, \"a as in c{\bf a}rt) is Old - English for chore. - } -\item Communication Objects : Messages -\item Readonly data -\end{itemize} - -\charmpp\ starts a program by creating a single \index{chare} instance of each -{\em mainchare} on processor 0, and invokes constructor methods of these -chares. Typically, these chares then creates a number of other \index{chare} -chares, possibly on other processors, which can simultaneously work to solve -the problem at hand. - -Each \index{chare}chare contains a number of \index{entry method}{\em entry -methods}, which are methods that can be invoked from remote processors. The -\charmpp\ runtime system needs to be explicitly told about these methods, via -an {\em interface} in a separate file. The syntax of this interface -specification file is described in the later sections. 
- -\charmpp\ provides system calls to asynchronously create remote \index{chare} -chares and to asynchronously invoke entry methods on remote chares by sending -\index{message} messages to those chares. This asynchronous -\index{message}message passing is the basic interprocess communication -mechanism in \charmpp. However, \charmpp\ also permits wide variations on this -mechanism to make it easy for the programmer to write programs that adapt to -the dynamic runtime environment. These possible variations include -prioritization (associating priorities with method invocations), conditional -\index{message packing}message packing and unpacking (for reducing messaging -overhead), \index{quiescence}quiescence detection (for detecting completion of -some phase of the program), and dynamic load balancing (during remote object -creation). In addition, several libraries are built on top of \charmpp\ that -can simplify otherwise arduous parallel programming tasks. + +\charmpp\ is a C++-based parallel programming system, founded on the +migratable-objects programming model, and supported by a novel and +powerful adaptive runtime system. It supports both irregular and +regular applications, and can be used to express task parallelism +as well as data parallelism in a single application. It automates +dynamic load balancing for both task-parallel and data-parallel +applications, via separate suites of load-balancing strategies. Through +its message-driven execution model, it supports automatic latency +tolerance, modularity, and parallel composition. Charm++ also supports +automatic checkpoint/restart, as well as fault tolerance based on +distributed checkpoints. +% {\sc Converse} interoperable runtime system for parallel +% programming. + +Charm++ is a production-quality parallel programming system used by +multiple applications in science and engineering on supercomputers as +well as smaller clusters around the world.
Currently the parallel +platforms supported by \charmpp\ are the BlueGene/L, BlueGene/P, +BlueGene/Q, Cray XT, XE and XK series (including XK6 and XE6), +% XT3/4, Cray X1, Cray T3E, +a single workstation or a network of workstations (including x86 +machines running Linux, Windows, or MacOS), etc. The communication protocols +and infrastructures supported by +\charmpp\ are UDP, TCP, Myrinet, Infiniband, MPI, uGNI, and PAMI. +\charmpp\ programs can run without changing the source +on all these platforms. If built on MPI, \charmpp{} programs can +also interoperate with pure MPI programs (\S\ref{sec:interop}). + Please see the \charmpp{}/\converse{} +Installation and Usage +\htmladdnormallink{Manual}{http://charm.cs.illinois.edu/manuals/html/install/manual.html} +for details about installing, compiling and running +\charmpp\ programs. + + +\section{Programming Model} +The key feature of the migratable-objects programming model is {\em +over-decomposition}: The programmer decomposes the program into a +large number of work units and data units, and specifies the +computation in terms of creation of and interactions between these +units, without any direct reference to the processor on which any unit +resides. This empowers the runtime system to assign units to +processors, and to change the assignment at runtime as +necessary. Charm++ is the main (and early) exemplar of this +programming model. AMPI is another example of the same model within +the Charm++ family. + + +\section{Execution Model} + +% A \charmpp\ program consists of a number of \charmpp\ objects +% distributed across the available number of processors. Thus, +A basic +unit of parallel computation in \charmpp\ programs is a {\em +chare}\index{chare}. +% a \charmpp\ object that can be created on any +% available processor and can be accessed from remote processors. +A \index{chare}chare is similar to a process, an actor, an Ada task, +etc. At its most basic level, it is just a C++ object.
+% with some of its methods +% that can be invoked from remote objects. +A \charmpp computation consists of a large number of chares +distributed on the available processors of the system, and interacting +with each other via asynchronous method invocations. +Asynchronously invoking a method on a remote object can also be +thought of as ``sending a message'' to it. So, these method invocations are +sometimes referred to as messages. (Besides, in the implementation, +the method invocations are packaged as messages anyway.) +\index{chare}Chares can be +created dynamically. +% , and many chares may be active simultaneously. +% Chares send \index{message}{\em messages} to one another to invoke +% methods asynchronously. + +Conceptually, the system maintains a +``work-pool'' consisting of seeds for new \index{chare}chares, and +\index{message}messages for existing chares. The Charm++ runtime system ({\em +Charm RTS}) may pick multiple items, non-deterministically, from this +pool and execute them, with the proviso that two different methods +cannot be simultaneously executing on the same chare object (say, on +different processors). Although one can define a reasonable +theoretical operational semantics of Charm++ in this fashion, a more +practical description of execution is useful to understand Charm++: On +each PE (``PE'' stands for ``Processing Element''; PEs are akin to +processor cores; see section \ref{sec:machine} for a precise +description), there is a scheduler operating with its own private pool +of messages. Each instantiated chare has one PE, which is where it +currently resides. The pool on each PE includes messages meant for +chares residing on that PE, and seeds for new chares that are +tentatively meant to be instantiated on that PE. The scheduler picks a +message, creates a new chare if the message is a seed (i.e. a +constructor invocation) for a new chare, and invokes the method +specified by the message.
When the method returns control back to the +scheduler, it repeats the cycle. That is, there is no preemptive +scheduling of other invocations. + +When a chare method executes, it may create method invocations for other +chares. The Charm Runtime System (RTS, sometimes referred to as the +Chare Kernel in this manual) locates the PE where the targeted chare +resides, and delivers the invocation to the scheduler on that PE. + +Methods of a \index{chare}chare that can be remotely invoked are called +\index{entry method}{\em entry} methods. Entry methods may take serializable +parameters, or a pointer to a message object. Since \index{chare} +chares can be created on remote processors, obviously some constructor +of a chare needs to be an entry method. Ordinary entry +methods\footnote{``Threaded'' or ``synchronous'' methods are +different. But even they do not lead to preemption; only to +cooperative multi-threading.} are completely non-preemptive -- +\charmpp\ will not interrupt an executing method to start any other work, +and all calls made are asynchronous. + +\charmpp\ provides dynamic seed-based load balancing. Thus the location (processor +number) need not be specified while creating a +remote \index{chare}chare. The Charm RTS will then place the remote +chare on a suitable processor. Thus one can imagine chare creation +as generating only a seed for the new chare, which may {\em take root} +on some specific processor at a later time. + +% Charm RTS identifies a \index{chare}chare by a {\em ChareID}. + +% Since user code does not +% need to name a chares' processor, chares can potentially migrate from +% one processor to another. (This behavior is used by the dynamic +% load-balancing framework for chare containers, such as arrays.) + +Chares can be grouped into collections.
The types of collections of +chares supported in Charm++ are: {\em chare-arrays}, \index{group}{\em +chare-groups}, and \index{nodegroup}{\em chare-nodegroups}, referred +to as {\em arrays}, {\em groups}, and {\em nodegroups} throughout this +manual for brevity. A chare-array is a collection of an arbitrary number +of migratable chares, indexed by some index type, and mapped to +processors according to a user-defined map group. A group (nodegroup) +is a collection of chares, with exactly one member element on each PE +(``node''). + +Charm++ does not allow global variables, except readonly variables +(see \ref{readonly}). A chare can normally only access its own data directly. +However, each chare is accessible by a globally valid name. So, one +can think of Charm++ as supporting a {\em global object space}. + + + +Every \charmpp\ program must have at least one \kw{mainchare}. Each +\kw{mainchare} is created by the system on processor 0 when the \charmpp\ +program starts up. Execution of a \charmpp\ program begins with the +Charm Kernel constructing all the designated \kw{mainchare}s. For +a \kw{mainchare} named X, execution starts at constructor X() or +X(CkArgMsg *), which are equivalent. Typically, the +\kw{mainchare} constructor starts the computation by creating arrays, other +chares, and groups. It can also be used to initialize shared \kw{readonly} +objects. + +\charmpp\ program execution is terminated by the \kw{CkExit} call. Like the +\kw{exit} system call, \kw{CkExit} never returns. The Charm RTS ensures +that no more messages are processed and no entry methods are called after a +\kw{CkExit}. \kw{CkExit} need not be called on all processors; it is enough +to call it from just one processor at the end of the computation. + +\zap{ +The only method of communication between processors in \charmpp\ is +asynchronous \index{entry method} entry method invocation on remote chares.
+For this purpose, Charm RTS needs to know the types of +\index{chare}chares in the user program, the methods that can be invoked on +these chares from remote processors, the arguments these methods take as +input etc. Therefore, when the program starts up, these user-defined +entities need to be registered with Charm RTS, which assigns a unique +identifier to each of them. While invoking a method on a remote object, +these identifiers need to be specified to Charm RTS. Registration of +user-defined entities, and maintaining these identifiers can be cumbersome. +Fortunately, it is done automatically by the \charmpp\ interface translator. +The \charmpp\ interface translator generates definitions for {\em proxy} +objects. A proxy object acts as a {\em handle} to a remote chare. One +invokes methods on a proxy object, which in turn carries out remote method +invocation on the chare.} + +As described so far, the execution of individual chares is +``reactive'': When method A is invoked, the chare executes this code, +and so on. But very often, chares have specific life-cycles, and the +sequence of entry methods they execute can be specified in a +structured manner, while allowing for some localized non-determinism +(e.g. a pair of methods may execute in any order, but when they both +finish, the execution continues in a pre-determined manner, say +executing a third entry method). To simplify expression of such control +structures, Charm++ provides two mechanisms. The structured dagger +notation (Sec \ref{sec:sdag}) is the main notation we recommend +you use. Alternatively, you may use threaded entry methods, in +combination with {\em futures} and {\em sync} methods +(see \ref{threaded}). The threaded methods run in lightweight +user-level threads, and can block waiting for data in a variety of +ways. Again, only the particular thread of a particular chare is +blocked, while the PE continues executing other chares.
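To make the preceding discussion concrete, here is a minimal sketch of a complete \charmpp{} program (hypothetical module and class names throughout; building it requires the \charmpp{} toolchain, i.e. {\tt charmc}, so it is a sketch rather than a standalone example). The interface file declares the chares and their entry methods, and the C++ code defines their behavior:

```cpp
// --- hello.ci (interface file; hypothetical names throughout) ---
// mainmodule hello {
//   mainchare Main {
//     entry Main(CkArgMsg *m);
//   };
//   chare Greeter {
//     entry Greeter(void);
//     entry void greet(int id);
//   };
// };

// --- hello.C (implementation) ---
#include "hello.decl.h"

class Main : public CBase_Main {
public:
  Main(CkArgMsg *m) {
    delete m;
    // Asynchronous creation: this generates only a "seed" for the
    // new chare, which the RTS may instantiate on any PE.
    CProxy_Greeter g = CProxy_Greeter::ckNew();
    g.greet(17);  // asynchronous entry method invocation (a "message")
  }
};

class Greeter : public CBase_Greeter {
public:
  Greeter() {}
  void greet(int id) {
    CkPrintf("Hello from a chare, id=%d\n", id);
    CkExit();  // terminate the whole parallel program
  }
};

#include "hello.def.h"
```

When such a program is compiled with {\tt charmc} and launched via {\tt charmrun}, the mainchare constructor runs on processor 0, while the {\tt Greeter} chare may take root on any PE.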
+ +The normal entry methods, being asynchronous, are not allowed to return +any value, and are declared with a void return type. However, the {\em +sync} methods are an exception to this. They must be called from a +threaded method, and so are allowed to return (certain types of) +values. + +\section{Proxies and the charm interface file} +\label{proxies} + +To support asynchronous method invocation and a global object space, the +RTS needs to be able to serialize (``marshall'') the parameters, and +be able to generate global ``names'' for chares. For this purpose, +programmers have to declare the chare classes and the signatures of +their entry methods in a special ``\verb#.ci#'' file, called an +interface file. Other than the interface file, the rest of a Charm++ +program consists of just normal C++ code. The system generates several +classes based on the declarations in the interface file, including +``Proxy'' classes for each chare class. +Those familiar with various component models (such as CORBA) in the +distributed computing world will recognize a ``proxy'' to be a dummy, stand-in +entity that refers to an actual entity. For each chare type, a ``proxy'' +class exists. +% \footnote{The proxy class is generated by the ``interface +% translator'' based on a description of the entry methods} +The methods of +this ``proxy'' class correspond to the remote methods of the actual class, and +act as ``forwarders''. That is, when one invokes a method on a proxy to a +remote object, the proxy marshalls the parameters into a message, puts +adequate information about the target chare on the envelope of the +message, and forwards it to the +remote object. +Individual chares, chare arrays, groups, node-groups, as well as the +individual elements of these collections, have such a +proxy. Multiple methods for obtaining such proxies are described in +the manual.
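For instance, a proxy to a chare array supports both element-wise and broadcast invocation, as in this sketch (hypothetical array and method names; chare arrays and their creation are described in a later chapter):

```cpp
// Hypothetical sketch: invoking entry methods through array proxies.
CProxy_MyArray arr = CProxy_MyArray::ckNew(100);  // a 100-element 1D chare array

arr[3].compute(42);  // marshall the argument and deliver the invocation
                     // to wherever element 3 currently resides
arr.compute(42);     // broadcast: the same invocation on every element
```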
+Proxies for each type of entity in \charmpp\ +have some differences among the features they support, but the basic +syntax and semantics remain the same -- that of invoking methods on +the remote object by invoking methods on proxies. + +% You can have several proxies that all refer to the same object. + +\zap{ +Historically, handles (which are basically globally unique +identifiers) were used to uniquely identify \charmpp\ objects. Unlike +pointers, they are valid on all processors and so could be sent as +parameters in messages. They are still available, but now proxies +also have the same feature. + +Handles (like CkChareID, CkArrayID, etc.) are still used internally, but should only be considered relevant to expert level usage.} +\zap{ +Proxies (like +CProxy\_foo) are just bytes and can be sent in messages, pup'd, and +parameter marshalled. This is now true of almost all objects in +Charm++: the only exceptions being entire Chares (Array Elements, +etc.) and, paradoxically, messages themselves. +} The following sections provide detailed information about various features of the -\charmpp\ programming system.\footnote{For a description of the underlying design +\charmpp\ programming system. Part I, ``Basic Usage'', is sufficient +for writing full-fledged applications. Note that only the last two +chapters of this part involve the notion of physical processors +(cores, nodes, ...), with the exception of simple query-type utilities +(Sec \ref{basic utility fns}). We strongly suggest that all +application developers, beginners and experts alike, try to stick to +the basic language to the extent possible, and use features from the +advanced sections only when you are convinced they are +essential. (They are useful in specific situations, but a common +mistake we see when we examine programs written by beginners is the +inclusion of complex features that are not necessary for their +purpose; hence the caution.)
The advanced concepts in Part II of +the manual support optimizations, convenience features, and more +complex or sophisticated features. + + +\footnote{For a description of the underlying design philosophy please refer to the following papers:\\ L. V. Kale and Sanjeev Krishnan, {\em ``\charmpp: Parallel Programming with Message-Driven Objects''}, @@ -85,41 +279,4 @@ philosophy please refer to the following papers :\\ Proceedings of the Conference on Object Oriented Programming, Systems, Languages and Applications (OOPSLA), September 1993. }. -\subsection{History} - -The {\sc Charm} software was developed as a group effort of the Parallel -Programming Laboratory at the University of Illinois at Urbana-Champaign. -Researchers at the Parallel Programming Laboratory keep \charmpp\ updated for -the new machines, new programming paradigms, and for supporting and simplifying -development of emerging applications for parallel processing. The earliest -prototype, Chare Kernel(1.0), was developed in the late eighties. It consisted -only of basic remote method invocation constructs available as a library. The -second prototype, Chare Kernel(2.0), a complete re-write with major design -changes. This included C language extensions to denote Chares, messages and -asynchronous remote method invocation. {\sc Charm}(3.0) improved on this -syntax, and contained important features such as information sharing -abstractions, and chare groups (called Branch Office Chares). {\sc Charm}(4.0) -included \charmpp\ and was released in fall 1993. \charmpp\ in its initial -version consisted of syntactic changes to \CC\ and employed a special -translator that parsed the entire \CC\ code while translating the syntactic -extensions. {\sc Charm}(4.5) had a major change that resulted from a -significant shift in the research agenda of the Parallel Programming -Laboratory.
The message-driven runtime system code of the \charmpp\ was -separated from the actual language implementation, resulting in an -interoperable parallel runtime system called {\sc -Converse}. The \charmpp\ runtime system was -retargetted on top of {\sc Converse}, and popular programming paradigms such as -MPI and PVM were also implemented on {\sc Converse}. This allowed -interoperability between these paradigms and \charmpp. This release also -eliminated the full-fledged \charmpp\ translator by replacing syntactic -extensions to \CC\ with \CC\ macros, and instead contained a small language and -a translator for describing the interfaces of \charmpp\ entities to the runtime -system. This version of \charmpp, which, in earlier releases was known as {\em -Interface Translator \charmpp}, is the default version of \charmpp\ now, and -hence referred simply as {\bf \charmpp}. In early 1999, the runtime system of -\charmpp\ was formally named the Charm Kernel, and was rewritten in \CC. -Several new features were added. The interface language underwent significant -changes, and the macros that replaced the syntactic extensions in original -\charmpp, were replaced by natural \CC\ constructs. Late 1999, and early -2000 reflected several additions to \charmpp{}, when a load balancing -framework and migratable objects were added to \charmpp{}. + diff --git a/doc/charm++/io.tex b/doc/charm++/io.tex deleted file mode 100644 (file) index 727d9c8..0000000 +++ /dev/null @@ -1,50 +0,0 @@ -\subsection{Terminal I/O} - -\index{input/output} -\charmpp\ provides both C and \CC\ style methods of doing terminal I/O. - -In place of C-style printf and scanf, \charmpp\ provides -\kw{CkPrintf} and \kw{CkScanf}. These functions have -interfaces that are identical to their C counterparts, but there are some -differences in their behavior that should be mentioned. - -A recent change to \charmpp\ is to also support all forms of printf, -cout, etc. in addition to the special forms shown below. 
The special -forms below are still useful, however, since they obey well-defined -(but still lax) ordering requirements. - -\function{int CkPrintf(format [, arg]*)} \index{CkPrintf} \index{input/output} -\desc{This call is used for atomic terminal output. Its usage is similar to -\texttt{printf} in C. However, \kw{CkPrintf} has some special properties -that make it more suited for parallel programming on networks of -workstations. \kw{CkPrintf} routes all terminal output to the \kw{charmrun}, -which is running on the host computer. So, if a -\index{chare}chare on processor 3 makes a call to \kw{CkPrintf}, that call -puts the output in a TCP message and sends it to host -computer where it will be displayed. This message passing is an asynchronous -send, meaning that the call to \kw{CkPrintf} returns immediately after the -message has been sent, and most likely before the message has actually -been received, processed, and displayed. \footnote{Because of -communication latencies, the following scenario is actually possible: -Chare 1 does a \kw{CkPrintf} from processor 1, then creates chare 2 on -processor 2. After chare 2's creation, it calls \kw{CkPrintf}, and the -message from chare 2 is displayed before the one from chare 1.} -} - -\function{void CkError(format [, arg]*))} \index{CkError} \index{input/output} -\desc{Like \kw{CkPrintf}, but used to print error messages on \texttt{stderr}.} - -\function{int CkScanf(format [, arg]*)} \index{CkScanf} \index{input/output} -\desc{This call is used for atomic terminal input. Its usage is similar to -{\tt scanf} in C. A call to \kw{CkScanf}, unlike \kw{CkPrintf}, -blocks all execution on the processor it is called from, and returns -only after all input has been retrieved. -} - -For \CC\ style stream-based I/O, \charmpp\ offers -\kw{ckout} and \kw{ckerr} in the place of cout, and cerr. 
The -\CC\ streams and their \charmpp\ equivalents are related in the same -manner as printf and scanf are to \kw{CkPrintf} and \kw{CkScanf}. The -\charmpp\ streams are all used through the same interface as the \CC\ -streams, and all behave in a slightly different way, just like C-style -I/O. index 01722746586c96c2076b771f39c51f4115400ef1..5f0e8b98894179654ef99fb58e4b329ea14f9145 100644 (file) @@ -1,50 +1,49 @@ - -\subsection{Load Balancing} - -\label{loadbalancing} - -%(This introduction added on 11/12/2003) - -Charm++ supports load balancing, enabled by the fact there are a large -number of chares or chare-array-elements typically available to map to -existing processors, and that they can be migrated at runtime. - -Many parallel applications, especially physical simulations, are -iterative in nature. They may contain a series of time-steps, and/or -iterative solvers that run to convergence. For such computations, -typically, the heuristic principle that we call "principle of -persistence" holds: the computational loads and communication patterns -between objects (chares) tend to persist over time, even in dynamic -applications. In such cases, recent past is a good predictor of near -future. Measurement-based chare migration strategies are useful in -this context. Currently these apply to chare-array elements, but they -may be extended to chares in the future. - -For applications without such iterative structure, or with iterative structure -but without the predictability (i.e. where the principle of persistence does -not apply), Charm++ supports "seed balancers" that move seeds for new chares -among processors (possibly repeatedly) to achieve load balance. These -strategies are currently available for both chares and chare-arrays. Seed -balancers were the original load balancers provided in Charm since the late -'80s. 
They are extremely useful for state-space search applications, and are -also useful in other computations, as well as in conjunction with migration +Load balancing in \charmpp{} is enabled by its ability to place, or +migrate, chares or chare array elements. A typical application +exploiting this feature will construct many more chares than processors, and +enable their runtime migration. + +Iterative applications, which are commonplace in physical simulations, +are the most suitable target for \charmpp{}'s measurement-based load +balancing techniques. Such applications may contain a series of +time-steps, and/or iterative solvers that run to convergence. For such +computations, typically, the heuristic principle that we call the +``principle of persistence'' holds: the computational loads and +communication patterns between objects (chares) tend to persist over +multiple iterations, even in dynamic applications. In such cases, +the recent past is a good predictor of the near future. Measurement-based +chare migration strategies are useful in this context. Currently these +apply to chare-array elements, but they may be extended to chares in +the future. + +For applications without such iterative structure, or with iterative +structure, but without predictability (i.e. where the principle of +persistence does not apply), Charm++ supports ``seed balancers'' that +move ``seeds'' for new chares among processors (possibly repeatedly) +to achieve load balance. These strategies are currently available for +both chares and chare-arrays. Seed balancers were the original load +balancers provided in Charm since the late '80s. They are extremely +useful for state-space search applications, and are also useful in +other computations, as well as in conjunction with migration
-For iterative computations when there is a correlation between iterations/steps -but either it is not strong or the machine environment is not predictable -(noise due to OS interrupts on small time steps, or time-shared desk-top +For iterative computations where there is a correlation between iterations/steps, +but it is either not strong, or the machine environment is not predictable +(due to noise from OS interrupts on small time steps, or time-shared desktop machines), one can use a combination of the two kinds of strategies. The -base-line load balancing is provided by migration strategies; But in each +baseline load balancing is provided by migration strategies, but in each iteration one also spawns off work in the form of chares that can run on any processor. The seed balancer will handle such work as it arises. -\subsubsection{Measurement-based Object Migration Strategies} +Examples are in \examplerefdir{load\_balancing} and +\testrefdir{load\_balancing}. +\section{Measurement-based Object Migration Strategies} \label{lbFramework} \label{migrationlb} In \charmpp{}, objects (except groups, nodegroups) can migrate from -processor to processor at run-time. Object migration can potentially +processor to processor at runtime. Object migration can potentially improve the performance of the parallel program by migrating objects from overloaded processors to underloaded ones. @@ -58,20 +57,20 @@ overloaded processors to underloaded ones. which automatically instruments all \charmpp{} objects, collects computation load and communication structure during execution and stores them into a \kw{load balancing database}. \charmpp{} then provides a collection of \kw{load -balancing strategies} whose job is to decide on a new mapping of objects to +balancing strategies} whose job it is to decide on a new mapping of objects to processors based on the information from the database.
Such measurement based -strategies are efficient when we can reasonably assume that objects in +strategies are efficient when we can reasonably assume that objects in a \charmpp{} application tend to exhibit temporal correlation in their computation and communication patterns, i.e. the future can, to some extent, be predicted from the historical measurement data, allowing effective measurement-based load balancing without application-specific knowledge. -Here are the two terms often used in \charmpp{} load balancing framework: +Two key terms in the \charmpp{} load balancing framework are: \begin{itemize} % \item \kw{Load balancing database} provides the interface of almost all load balancing calls. On each processor, it stores the load balancing instrumented -data and coordinates the load balancing manager and balancer. it is implemented +data and coordinates the load balancing manager and balancer. It is implemented as a Chare Group called \kw{LBDatabase}. % \item \kw{Load balancer or strategy} takes the load balancing database and @@ -82,30 +81,29 @@ hierarchical load balancers. % \end{itemize} -\subsubsection{Available Load Balancing Strategies} - +\section{Available Load Balancing Strategies} \label{lbStrategy} -Load balancing can be performed in either a centralized, fully distributed -or hierarchical fashion. +Load balancing can be performed in either a centralized, a fully distributed, +or a hierarchical fashion. In centralized approaches, the entire machine's load and communication structure are accumulated to a single point, typically processor 0, followed by a decision-making process to determine the new distribution of \charmpp objects. Centralized load balancing requires synchronization, which may incur overhead and delay. However, due to the fact that the decision process has a -high degree of the knowledge about the entire machine, it tends to be more +high degree of knowledge about the entire platform, it tends to be more accurate.
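As an illustration of how an application typically participates in measurement-based load balancing, consider the following sketch of a migratable chare-array element (hypothetical class and method names; the {\tt AtSync()}/{\tt ResumeFromSync()} protocol is part of the \charmpp{} array API, and building this requires the \charmpp{} toolchain):

```cpp
// Sketch (hypothetical names): an array element that periodically
// hands control to the load balancer.
class Worker : public CBase_Worker {
public:
  Worker() {
    usesAtSync = true;            // opt in to AtSync load balancing
  }
  Worker(CkMigrateMessage *m) {}  // required migration constructor

  void doStep() {                 // an entry method driving iterations
    // ... compute one time-step ...
    AtSync();                     // this element is ready for balancing
  }

  void ResumeFromSync() {         // called by the RTS after balancing
    thisProxy[thisIndex].doStep();  // continue with the next step
  }
};
```

The strategy itself is typically selected when launching the job, for example with a runtime flag such as {\tt +balancer RefineLB} on the {\tt charmrun} command line.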
-In distributed approaches, machine states are only exchanged among +In distributed approaches, load data is only exchanged among neighboring processors. There is no global synchronization. However, they will not, in general, provide an immediate restoration for load balance - the process is iterated until the load balance can be achieved. In hierarchical approaches, processors are divided into independent autonomous sets of processor groups and these groups are organized in hierarchies, -therefore decentralizing the load balancing task. Different strategies can be -used to load balancing load on processors inside each processor group, and +thereby decentralizing the load balancing task. Different strategies can be +used to balance the load on processors inside each processor group, and processors across groups in a hierarchical fashion. Listed below are some of the available non-trivial centralized load balancers @@ -113,20 +111,20 @@ and their brief descriptions: \begin{itemize} \item {\bf RandCentLB}: Randomly assigns objects to processors; %\item {\bf RecBisectBfLB}: Recursively partition with Breadth first enumeration; -\item {\bf MetisLB}: Use Metis(tm) to partitioning object communication graph; -\item {\bf GreedyLB}: Use greedy algorithm, always pick the heaviest object to the least loaded processor. 
-\item {\bf GreedyCommLB}: Greedy algorithm which also takes communication graph into account; -\item {\bf TopoCentLB}: Greedy algorithm which also takes processor topology into account; -\item {\bf RefineLB}: Move objects away from the most overloaded processors to reach average, limits the number of objects migrated; -\item {\bf RefineCommLB}: Same idea as in RefineLB, but take communication into account; -\item {\bf RefineTopoLB}: Same idea as in RefineLB, but take processor topology into account; -\item {\bf ComboCentLB}: A special load balancer that can be used to combine any number of above centralized load balancers; +\item {\bf MetisLB}: Uses METIS\texttrademark\hspace{0mm} to partition the object communication graph. +\item {\bf GreedyLB}: Uses a greedy algorithm that always assigns the heaviest object to the least loaded processor. +\item {\bf GreedyCommLB}: Extends the greedy algorithm to take the communication graph into account. +\item {\bf TopoCentLB}: Extends the greedy algorithm to take processor topology into account. +\item {\bf RefineLB}: Moves objects away from the most overloaded processors to reach the average load, limiting the number of objects migrated. +\item {\bf RefineCommLB}: Same idea as in RefineLB, but takes communication into account. +\item {\bf RefineTopoLB}: Same idea as in RefineLB, but takes processor topology into account. +\item {\bf ComboCentLB}: A special load balancer that can be used to combine any number of centralized load balancers mentioned above. \end{itemize} Listed below are the distributed load balancers: \begin{itemize} \item {\bf NeighborLB}: A neighborhood load balancer in which each processor tries to average out its load only among its neighbors. -\item {\bf WSLB}: A load balancer for workstation clusters, which can detect load changes on desktops (and other timeshared processors) and adjust load without interferes with other's use of the desktop.
+\item {\bf WSLB}: A load balancer for workstation clusters, which can detect load changes on desktops (and other timeshared processors) and adjust load without interfering with others' use of the desktop. \end{itemize} An example of a hierarchical strategy can be found in: @@ -135,8 +133,8 @@ An example of a hierarchical strategy can be found in: the root. \end{itemize} -Users can choose any load balancing strategy they think is good for their -application. The compiler and run-time options are described in +Users can choose any load balancing strategy they think is appropriate for their +application. The compiler and runtime options are described in section~\ref{lbOption}. %In some cases, one may need to create and invoke multiple load balancing @@ -145,13 +143,13 @@ section~\ref{lbOption}. %an aggressive load balancer such as GreedyRefLB in the first load balancing %step, and use RefineLB for the later load balancing steps. -\subsubsection{Load Balancing Chare Arrays} +\section{Load Balancing Chare Arrays} \label{lbarray} The load balancing framework is well integrated with chare array implementation -- when a chare array is created, it automatically registers its elements with -the load balancing framework. The instrumentation of compute time (wall/cpu -time) and communication pattern are done automatically and APIs are provided +the load balancing framework. The instrumentation of compute time (wall/CPU +time) and communication pattern is done automatically and APIs are provided for users to trigger the load balancing. To use the load balancer, you must make your array elements migratable (see migration section above) and choose a \kw{load balancing strategy} (see the section \ref{lbStrategy} for a @@ -162,22 +160,24 @@ different needs of the applications. These methods are different in how and when a load balancing phase starts. The three methods are: {\bf periodic load balancing mode}, {\bf at sync mode} and {\bf manual mode}.
-In {\em periodic load balancing mode}, a user just needs to specify how often -he wants the load balancing to occur, using +LBPeriod runtime option to specify -a time interval. - -In {\em sync mode}, users can tell the load balancer explicitly when is a good -time to trigger load balancing by inserting a function call (AtSync) in the -user code. - -In the above two load balancing modes, users do not need to worry about how to -start load balancing. However, in one scenario, the above automatic load -balancers will fail to work - when array elements are created by dynamic insertion. -This is because the above two load balancing modes require an application to -have fixed number of objects at the time of load balancing. The array manager -needs to maintain a head count of local array elements for the local barrier. -In this case, users have to use the {\em manual mode} to trigger load balancer -themselves. +In {\em periodic load balancing mode}, a user specifies only how often +load balancing is to occur, using the +LBPeriod runtime option to specify +the time interval. + +In {\em at sync mode}, the application invokes the load balancer +explicitly at appropriate points (generally at a pre-existing synchronization +boundary) to trigger load balancing by inserting a function call +(AtSync) in the application source code. + +In the prior two load balancing modes, users do not need to worry +about how to start load balancing. However, in one scenario, those +automatic load balancers will fail to work - when array elements are +created by dynamic insertion. This is because the above two load +balancing modes require an application to have a fixed number of +objects at the time of load balancing. The array manager needs to +maintain a head count of local array elements for the local barrier. +In this case, the application must use the {\em manual mode} to +trigger the load balancer.
The detailed APIs of these three methods are described as follows: % @@ -186,14 +186,14 @@ The detailed APIs of these three methods are described as follows: \item {\bf Periodical load balancing mode}: In the default setting, load balancing happens whenever the array elements are ready, with an interval of 1 second. It is desirable for the application to set a larger interval using -+LBPeriod runtime option. For example "+LBPeriod 5.0" can be used to start load ++LBPeriod runtime option. For example ``+LBPeriod 5.0'' can be used to start load balancing roughly every 5 seconds. By default, array elements may be asked to -migrate at any time provided that they are not in the middle of executing an +migrate at any time, provided that they are not in the middle of executing an entry method. The array element's variable \kw{usesAtSync} being CmiFalse attributes to this default behavior. % -\item {\bf At sync mode}: Using this method, elements can only be migrated at -certain points in the execution when user calls \kw{AtSync()}. For using the at +\item {\bf At sync mode}: Using this method, elements can be migrated only at +certain points in the execution when the application invokes \kw{AtSync()}. In order to use the at sync mode, one should set \kw{usesAtSync} to CmiTrue in the array element constructor. When an element is ready to migrate, call \kw{AtSync()}~\footnote{AtSync() is a member function of class ArrayElement}. @@ -213,7 +213,8 @@ balancing that it is time for load balancing. During the time between {\em AtSync} and {\em ResumeFromSync}, the object may be migrated. One can choose to let objects continue working with incoming messages, however keep in mind the object may suddenly show up in another processor and make sure no -operations that could possibly prevent migration be performed.
+operations that could possibly prevent migration be performed. This is +the automatic way of doing load balancing where the application does not need to define ResumeFromSync(). The more commonly used approach is to force the object to be idle until load balancing finishes. The user places an AtSync call at the end of some iteration @@ -224,11 +225,11 @@ application. This manual way of using the at sync mode results in a barrier at load balancing (see example here~\ref{lbexample}). % \item {\bf Manual mode}: The load balancer can be programmed to be started -manually. To switch to the manual mode, you should call {\em TurnManualLBOn()} -on every processor to prevent load balancer from starting automatically. {\em +manually. To switch to the manual mode, the application calls {\em TurnManualLBOn()} +on every processor to prevent the load balancer from starting automatically. {\em TurnManualLBOn()} should be called as early as possible in the program. It could be called at the initialization part of the program, for example from a -global variable constructor, or in an initcall~\ref{initcall}. It can also be +global variable constructor, or in an initcall (Section~\ref{initcall}). It can also be called in the constructor of a static array and definitely before the {\em doneInserting} call for a dynamic array. It can be called multiple times on one processor, but only the last one takes effect. @@ -236,28 +237,54 @@ one processor, but only the last one takes effect. The function call {\em StartLB()} starts load balancing immediately. This call should be made at only one place on only one processor. This function is also not blocking, the object will continue to process messages and the load -balancing when triggered happens at the background. +balancing when triggered happens in the background. {\em TurnManualLBOff()} turns off manual load balancing and switches back to the automatic Load balancing mode.
% \end{enumerate} -\subsubsection{Migrating objects} - +\section{Migrating objects} \label{lbmigobj} Load balancers migrate objects automatically. -For an array element to migrate, user can refer to section~\ref{arraymigratable} +For an array element to migrate, the user can refer to Section~\ref{arraymigratable} for how to write a pup'' for an array element. In general one needs to pack the whole snapshot of the member data in an array element in the pup subroutine. This is because the migration of -the object may happen at any time. In certain load balancing scheme where -user explicitly control when the load balancing happens, user may choose -to pack only a part of the data and may skip those temporary data. +the object may happen at any time. In certain load balancing schemes where + the user explicitly controls when load balancing occurs, the user may choose +to pack only a part of the data and may skip temporary data. + +An array element can migrate by calling the \kw{migrateMe}(\uw{destination +processor}) member function -- this call must be the last action +in an element entry method. The system can also migrate array elements +for load balancing (see Section~\ref{lbarray}). + +To migrate your array element to another processor, the \charmpp{} +runtime will: + +\begin{itemize} +\item Call your \kw{ckAboutToMigrate} method +\item Call your \uw{pup} method with a sizing \kw{PUP::er} to determine how +big a message it needs to hold your element. +\item Call your \uw{pup} method again with a packing \kw{PUP::er} to pack +your element into a message. +\item Call your element's destructor (deleting the old copy). +\item Send the message (containing your element) across the network. +\item Call your element's migration constructor on the new processor. +\item Call your \uw{pup} method with an unpacking \kw{PUP::er} to unpack +the element.
+\item Call your \kw{ckJustMigrated} method +\end{itemize} + +Migration constructors, then, are normally empty-- all the unpacking +and allocation of the data items is done in the element's \uw{pup} routine. +Deallocation is done in the element destructor as usual. -\subsubsection{Other utility functions} + +\section{Other utility functions} There are several utility functions that can be called in applications to configure the load balancer, etc. These functions are: @@ -270,7 +297,7 @@ configure the load balancer, etc. These functions are: Fortran interface: {\bf FLBTURNINSTRUMENTON()} and {\bf FLBTURNINSTRUMENTOFF()}. \item {\bf setMigratable(CmiBool migratable)}: is a member function of array element. This function can be called - in an array element constructor to tell load balancer whether this object + in an array element constructor to tell the load balancer whether this object is migratable or not\footnote{Currently not all load balancers recognize this setting though.}. \item {\bf LBSetPeriod(double s)}: this function can be called @@ -291,26 +318,25 @@ LBSetPeriod(5.0); Alternatively, one can specify +LBPeriod \{seconds\} at command line. \end{itemize} -\subsubsection{Compiler and run-time options to use load balancing module} - +\section{Compiler and runtime options to use load balancing module} \label{lbOption} Load balancing strategies are implemented as libraries in \charmpp{}. This allows programmers to easily experiment with different existing strategies by simply linking a pool of strategy modules and choosing -one to use at run-time via a command line option. +one to use at runtime via a command line option. -Please note that linking a load balancing module is different from activating it: +{\bf Note:} linking a load balancing module is different from activating it: \begin{itemize} -\item link a LB module: is to link a Load Balancer module(library) at - compile time; You can link against multiple LB libraries as candidates. 
-\item activate a LB: is to actually ask at run-time to create a LB strategy and +\item link an LB module: is to link a Load Balancer module (library) at + compile time. You can link against multiple LB libraries as candidates. +\item activate an LB: is to actually ask the runtime to create an LB strategy and start it. You can only activate load balancers that have been linked at compile time. \end{itemize} -Below are the descriptions about the compiler and run-time options: +Below are the descriptions of the compiler and runtime options: \begin{enumerate} \item {\bf compile time options:} @@ -318,44 +344,52 @@ Below are the descriptions about the compiler and run-time options: \begin{itemize} \item {\em -module NeighborLB -module GreedyCommLB ...} \\ links the modules NeighborLB, GreedyCommLB etc into an application, but these -load balancers will remain inactive at execution time unless overriden by other +load balancers will remain inactive at execution time unless overridden by other runtime options. \item {\em -module CommonLBs} \\ - links a special module CommonLBs which includes some commonly used charmpp{} -built-in load balancers. + links a special module CommonLBs which includes some commonly used \charmpp{} +built-in load balancers. The commonly used load balancers include {\tt +BlockLB, CommLB, DummyLB, GreedyAgentLB, GreedyCommLB, GreedyLB, +NeighborCommLB, NeighborLB, OrbLB, PhasebyArrayLB, RandCentLB, +RecBipartLB, RefineLB, RefineCommLB, RotateLB, TreeMatchLB, RefineSwapLB, CommAwareRefineLB}. \item {\em -balancer GreedyCommLB} \\ - links the load balancer GreedyCommLB and invokes this load balancer at -runtime. + links the load balancer GreedyCommLB and invokes it at runtime. \item {\em -balancer GreedyCommLB -balancer RefineLB} \\ invokes GreedyCommLB at the first load balancing step and RefineLB in all subsequent load balancing steps.
\item {\em -balancer ComboCentLB:GreedyLB,RefineLB} \\ - One can choose to create a new combination load balancer made of multiple + You can create a new combination load balancer made of multiple load balancers. In the above example, GreedyLB and RefineLB strategies are applied one after the other in each load balancing step. \end{itemize} -The list of existing load balancers are in section \ref{lbStrategy}. Note: you -can have multiple -module *LB options. LB modules are linked into a program, -but they are not activated automatically at runtime. Using -balancer at -compile time in order to activate load balancers automatically at run time. -Having -balancer A implies -module A, so you don't have to write -module A -again, although it does not hurt. Using CommonLBs is a convenient way to link -against the commonly used existing load balancers. One of the load balancers -called MetisLB requires the METIS library which is located at: -charm/src/libs/ck-libs/parmetis/METISLib/. You need to compile METIS library -by "make METIS" under charm/tmp after you compile Charm++. +The list of existing load balancers is given in Section +\ref{lbStrategy}. Note: you can have multiple -module *LB options. LB +modules are linked into a program, but they are not activated +automatically at runtime. Using -balancer A at compile time will +activate load balancer A automatically at runtime. Having -balancer A +implies -module A, so you don't have to write -module A again, +although doing so does no harm. Using CommonLBs is a convenient way to +link against the commonly used existing load balancers. One such load +balancer, called MetisLB, requires the METIS library which is located +at: + +{\tt charm/src/libs/ck-libs/parmetis/METISLib/.} -\item {\bf run-time options:} +A prerequisite for use of this library is to compile the METIS +library by ``make METIS'' under charm/tmp after compiling \charmpp{}.
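As a hedged illustration of how the compile time options above fit together (the application name pgm, the object file, and the balancer choices are placeholders; the METIS build step and flags are those described in the text):

```shell
# Build the METIS library once, only if MetisLB will be linked:
cd charm/tmp && make METIS

# Link a pool of candidate balancers; -balancer GreedyLB also
# implies -module GreedyLB and activates GreedyLB by default:
charmc -o pgm pgm.o -module CommonLBs -balancer GreedyLB
```

Any balancer linked this way can later be overridden with the +balancer runtime option described next.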
-Run-time options are similar to the compile time options as described above, -but they can override compile time options. +\item {\bf runtime options:} + +Runtime balancer selection options are similar to the compile time +options as described above, but they can be used to override those +compile time options. \begin{itemize} \item {\em +balancer help} \\ displays all available balancers that have been linked in. \item {\em +balancer GreedyCommLB} \\ - invoked GreedyCommLB + invokes GreedyCommLB \item {\em +balancer GreedyCommLB +balancer RefineLB} \\ invokes GreedyCommLB at the first load balancing step and RefineLB in all subsequent load balancing steps. @@ -363,227 +397,103 @@ subsequent load balancing steps. same as the example in the -balancer compile time option. \end{itemize} -Note: +balancer option works only if you have already linked the load balancers module at compile time. +Note: the +balancer option works only if you have already linked the corresponding +load balancer module at compile time. Giving +balancer with a wrong LB name will result in a runtime error. -When you have used -balancer A as compile time option, you don't need to use +When you have used -balancer A as a compile time option, you do not need to use +balancer A again to activate it at runtime. However, you can use +balancer B to override the compile time option and choose to activate B instead of A. -\item {\bf When there is no load balancer activated} +\item {\bf Handling the case where no load balancer is activated} -When you don't activate any of the load balancers at compile time or run time -and your program counts on a load balancer because you use {\em AtSync()} +When no balancer is linked by users +but the program counts on a load balancer because it used {\em AtSync()} -and expect {\em ResumeFromSync()} to be called to continue, +and expects {\em ResumeFromSync()} to be called to continue, -be assured that your program can still run.
-A special load balancer called {\em NullLB} is -automatically created in this case which just -calls {\em ResumeFromSync()} after {\em AtSync()}. -This default load balancer keeps a program from hanging after calling {\em AtSync()}. -The {\em NullLB} is smart enough to keep silent if another -load balancer is created. +a special load balancer called {\em NullLB} will be +automatically created to run the program. +This default load balancer calls {\em ResumeFromSync()} after {\em AtSync()}. +It keeps a program from hanging after calling {\em AtSync()}. +{\em NullLB} will be suppressed if another load balancer is created. -\item {\bf Other useful run-time options} +\item {\bf Other useful runtime options} -There are a few other run-time options for load balancing that may be useful: +There are a few other runtime options for load balancing that may be useful: \begin{itemize} \item {\em +LBDebug \{verbose level\}} \\ - \{verbose level\} can be any positive integer number. 0 to turn off - This option asks load balancer to output more information to stdout -about load balancing. The bigger the verbose level, the more verbose the output is. + \{verbose level\} can be any positive integer number; 0 turns verbose output off. + This option asks the load balancer to output load balancing information to stdout. + The bigger the verbose level is, the more verbose the output is. \item {\em +LBPeriod \{seconds\}} \\ - \{seconds\} can be any float number. This sets the minimum period time in + \{seconds\} can be any float number. This option sets the minimum period time in seconds between two consecutive load balancing steps. The default value is -1 second. That is to say a second load balancing step won't happen until -after 1 second since the last load balancing step. +1 second. That is to say that a load balancing step will not happen until +1 second after the last load balancing step.
\item {\em +LBSameCpus} \\ - this option simply tells load balancer that all processors are of same speed. The load balancer will then skip the measurement of CPU speed at run-time. + This option simply tells the load balancer that all processors are of the same speed. + The load balancer will then skip the measurement of CPU speed at runtime. \item {\em +LBObjOnly} \\ - this tells load balancer to ignore processor background load when making migration decisions. + This tells the load balancer to ignore processor background load when making migration decisions. \item {\em +LBSyncResume} \\ - after load balancing step, normally a processor can resume computation + After a load balancing step, normally a processor can resume computation once all objects are received on that processor, even when other processors are still working on migrations. If this turns out to be a problem, that is when some processors start working on computation while the other -processors are still busy migrating objects, then use this option to force -a global barrier on all processors to make sure processors can only resume -computation after migrations finish on all processors. +processors are still busy migrating objects, then this option can be used to force +a global barrier on all processors to make sure that processors can only resume +computation after migrations are completed on all processors. \item {\em +LBOff} \\ - Turns off load balancing instrumentation at startup time. This call turns -off the instrument of both CPU and communication usage. + This option turns off load balancing instrumentation + of both CPU and communication usage at startup time. \item {\em +LBCommOff} \\ - Turns off load balancing instrumentation of communication at startup time. -The instrument of CPU usage is left on.
-\end{itemize} - -\end{enumerate} - -\subsubsection{Load Balancing Simulation} - -The simulation feature of load balancing framework allows the users to collect information -about the compute wall/cpu time and communication of the chares during a particular run of -the program and use this information to later test different load balancing strategies to -see which one is suitable for the programs behaviour. Currently, this feature is supported only for -the centralized load balancing strategies. For this, the load balancing framework -accepts the following command line options: -\begin{enumerate} -\item {\em +LBDump StepStart}\\ - This will dump the instrument/communication data collected by the load balancing framework - starting from the load balancing step {\em StepStart} into a file on the disk. The name of the file - is given by the {\em +LBDumpFile} option. The first step in the program is number 0. Negative - numbers will be converted to 0. -\item {\em +LBDumpSteps StepsNo}\\ - This option specifies the number of steps for which data will be dumped to disk. If omitted, default value is 1. - The program will exit after StepsNo files are dumped. -\item {\em +LBDumpFile FileName}\\ - This option specifies the base name of the file into which the load balancing data is dumped. If this - option is not specified, the framework uses the default file {\tt lbdata.dat}. Since multiple steps are allowed, - a number is appended to the filename in the form {\tt Filename.\#}; this applies to both dump and - simulation. -\item {\em +LBSim StepStart}\\ - This option instructs the framework to do the simulation during the first load balancing step. - When this option is specified, the load balancing data from the file specified in the {\em +LBDumpFile} - option, with the addition of the step number, will be read and this data - will be used for the load balancing. 
The program will print the results - of the balancing for a number of steps given by the {\em +LBSimSteps} option, and then will exit. -\item {\em +LBSimSteps StepsNo}\\ - This option has the same meaning of {\em +LBDumpSteps}, except that apply for the simulation mode. - Default value is 1. -\item {\em +LBSimProcs}\\ - This option may change the number of processors target of the load balancer strategy. It may be used to test - the load balancer in conditions where some processor crashes or someone becomes available. If this number is not - changed since the original run, starting from the second step file the program will print other additional - information about how the simulated load differs from the real load during the run (considering all - strategies that were applied while running). This may be used to test the validity of a load balancer - prediction over the reality. If the strategies used during run and simulation differ, the additional data - printed may not be useful. -\end{enumerate} -As an example, we can collect the data for a 1000 processor run of a program using: -\begin{alltt} -./charmrun pgm +p 1000 +balancer RandCentLB +LBDump 2 +LBDumpSteps 4 +LBDumpFile dump.dat -\end{alltt} -This will collect data on files data.dat.{2,3,4,5}. Then, we can use this data to observe various centralized strategies using: -\begin{alltt} -./charmrun pgm +balancer <Strategy to test> +LBSim 2 +LBSimSteps 4 +LBDumpFile dump.dat [+LBSimProcs 900] -\end{alltt} - -\subsubsection{Future load predictor} - -When objects do not follow the assumption that the future workload will be the -same as the past, the load balancer might not have the correct information to do -a correct rebalancing job. To prevent this the user can provide a transition -function to the load balancer to predict what will be the future workload, given -the past, instrumented one. 
As said, the user might provide a specific class -which inherits from {\tt LBPredictorFunction} and implement the appropriate functions. -Here is the abstract class: -\begin{alltt} -class LBPredictorFunction { -public: - int num_params; - virtual void initialize_params(double *x); - - virtual double predict(double x, double *params) =0; - virtual void print(double *params) {PredictorPrintf("LB: unknown model");}; - virtual void function(double x, double *param, double &y, double *dyda) =0; -}; -\end{alltt} -\begin{itemize} -\item {\tt initialize\_params} by default initializes the parameters randomly. If the user -knows how they should be, this function can be reimplemented. -\item {\tt predict} is the function the model implements. For example, if the function is -$y=ax+b$, the method in the implemented class should be like: -\begin{verbatim} -double predict(double x, double *param) {return (param[0]*x + param[1]);} -\end{verbatim} -\item {\tt print} is a debugging function and it can be reimplemented to have a meaningful -print of the learnt model -\item {\tt function} is a function internally needed to learn the parameters, {\tt x} and -{\tt param} are input, {\tt y} and {\tt dyda} are output (the computed function and -all its derivatives with respect to the parameters, respectively). -For the function in the example should look like: -\begin{verbatim} -void function(double x, double *param, double &y, double *dyda) { - y = predict(x, param); - dyda[0] = x; - dyda[1] = 1; -} -\end{verbatim} + This option turns off load balancing instrumentation of communication at startup time. + The instrument of CPU usage is left on. \end{itemize} -Other than these function, the user should provide a constructor which must initialize -{\tt num\_params} to the number of parameters the model has to learn. This number is -the dimension of {\tt param} and {\tt dyda} in the previous functions. For the given -example, the constructor is {\tt \{num\_params = 2;\}}. 
- -If the model behind the computation is not known, the user can leave the system to -use a predefined default function. -As seen, the function can have several parameters which will be learned during -the execution of the program. For this, two parameters can be setup at command -line to specify the learning behaviour: -\begin{enumerate} -\item {\em +LBPredictorWindow size}\\ -This parameter will specify how many statistics the load balancer will keep. -The greater this number is, the better the -approximation of the workload will be, but more memory is required to store -the intermediate information. The default is 20. -\item {\em +LBPredictorDelay steps}\\ -This will tell how many load balancer steps to wait before considering the -function parameters learnt and start using the mode. The load balancer will -collect statistics for a {\em +LBPredictorWindow} steps, but it will start using -the model as soon as {\em +LBPredictorDelay} information are collected. The -default is 10. \end{enumerate} -Moreover another flag can be set to enable the predictor from command line: {\em -+LBPredictor}.\\ -Other than the command line options, there are some methods -callable from user program to modify the predictor. These methods are: -\begin{itemize} -\item {\tt void PredictorOn(LBPredictorFunction *model);} -\item {\tt void PredictorOn(LBPredictorFunction *model,int wind);} -\item {\tt void PredictorOff();} -\item {\tt void ChangePredictor(LBPredictorFunction *model);} -\end{itemize} - - -\subsubsection{Seed load balancers - load balancing Chares at creation time} +\section{Seed load balancers - load balancing Chares at creation time} \label{seedlb} Seed load balancing involves the movement of object creation messages, or -"seeds", to create a balance of work across a set of processors. This load -balancing scheme is used for load balancing chares only at creation time. 
When -the chare is created on a processor, there is no movement of the chare due to -the seed load balancer. The measurement based load balancer described in -previous subsection perform the task of moving chares during work to achieve -load balance. - -Several variations of strategies have been designed and analyzed. +"seeds", to create a balance of work across a set of processors. +This seed load balancing scheme is used to balance chares at creation time. +After the chare constructor is executed on a processor, the seed balancer does not +migrate it. +%the seed load balancer. The measurement based load balancer described in +%previous subsection perform the task of moving chares during work to achieve +%load balance. +Several seed load balancers, differing in their movement strategies, are available. +Examples can be found in \examplerefdir{NQueen}. \begin{enumerate} \item {\em random}\\ A strategy that places seeds randomly when they are created and does no movement of seeds thereafter. This is used as the default seed load balancer. \item {\em neighbor}\\ - a strategy which imposes a virtual topology on the processors, - load exchange happens to neighbors only. The overloaded processors - initiate the load balancing, where a processor sends work to its neighbors + A strategy which imposes a virtual topology on the processors; + load exchange happens among neighbors only. The overloaded processors + initiate the load balancing and send work to their neighbors -when it becomes overloaded. +when they become overloaded. The default topology is mesh2D, one can use command line option to choose other topology such as ring, mesh3D and dense graph. \item {\em spray}\\ - a strategy which imposes a spanning tree organization on the processors, + A strategy which imposes a spanning tree organization on the processors, results in communication via global reduction among all processors to compute global average load via periodic reduction. It uses averaging of loads to determine how seeds should be distributed.
+\item {\em workstealing} \\
+   A strategy in which an idle processor requests work from a randomly
+   selected processor and steals chares from it.
 \end{enumerate}
-Other strategies can also be explored follow the simple API of the
+Other strategies can also be explored by following the simple API of the
 seed load balancer.
 \linebreak
+\zap{
 {\bf Seed load balancers for Chares:}
 Seed load balancers can be directly used for load balancing Chares.
@@ -622,8 +532,12 @@ For initially populating the array with chares at time of creation the API is as
 The details about array creation are explained in
 section~\ref{advanced arrays} of the manual.
+} % end zap
+
+
 {\bf Compile and run time options for seed load balancers}
+
 To choose a seed load balancer other than the default {\em rand} strategy, use
 link time command line option {\bf -balance foo}.
@@ -637,7 +551,7 @@ under charm/lib, named as {\em libcldb-foo.a}, where {\em foo} is the strategy
 name used above. Now one can use {\bf -balance foo} as compile time option
 to {\bf charmc} to link with the {\em foo} seed load balancer.
-\subsubsection{Simple Load Balancer Usage Example - Automatic with Sync LB}
+\section{Simple Load Balancer Usage Example - Automatic with Sync LB}
 \label{lbexample}
 A simple example of how to use a load balancer in sync mode in one's
@@ -645,20 +559,20 @@ application is presented below.
\begin{alltt} /*** lbexample.ci ***/ -mainmodule lbexample { +mainmodule lbexample \{ readonly CProxy_Main mainProxy; readonly int nElements; - mainchare Main { + mainchare Main \{ entry Main(CkArgMsg *m); entry void done(void); - }; + \}; - array [1D] LBExample { + array [1D] LBExample \{ entry LBExample(void); entry void doWork(); - }; -}; + \}; +\}; \end{alltt} -------------------------------------------------------------------------------- @@ -676,53 +590,53 @@ mainmodule lbexample { /*mainchare*/ class Main : public CBase_Main -{ +\{ private: int count; public: Main(CkArgMsg* m) - { + \{ /*....Initialization....*/ mainProxy = thisProxy; CProxy_LBExample arr = CProxy_LBExample::ckNew(nElements); arr.doWork(); - }; + \}; void done(void) - { + \{ count++; if(count==nElements){ CkPrintf("All done"); CkExit(); - } - }; -}; + \} + \}; +\}; /*array [1D]*/ class LBExample : public CBase_LBExample -{ +\{ private: int workcnt; public: LBExample() - { + \{ workcnt=0; /* May initialize some variables to be used in doWork */ //Must be set to CmiTrue to make AtSync work usesAtSync=CmiTrue; - } + \} - LBExample(CkMigrateMessage *m) { /* Migration constructor -- invoked when chare migrates */ } + LBExample(CkMigrateMessage *m) \{ /* Migration constructor -- invoked when chare migrates */ \} /* Must be written for migration to succeed */ - void pup(PUP::er &p){ + void pup(PUP::er &p)\{ CBase_LBExample::pup(p); p|workcnt; /* There may be some more variables used in doWork */ } void doWork() - { + \{ /* Do work proportional to the chare index to see the effects of LB */ workcnt++; @@ -733,12 +647,12 @@ public: AtSync(); else doWork(); - } + \} - void ResumeFromSync(){ + void ResumeFromSync()\{ doWork(); - } -}; + \} +\}; #include "lbexample.def.h" \end{alltt} diff --git a/doc/charm++/machineModel.tex b/doc/charm++/machineModel.tex new file mode 100644 (file) index 0000000..5ae4261 --- /dev/null @@ -0,0 +1,68 @@ +\section{Machine Model} +\label{machineModel} +\label{sec:machine} 
+At its basic level, the \charmpp{} machine model is very simple: think of
+each chare as a separate processor by itself. The methods of each
+chare can access its own instance variables (which are all private, at
+this level), and any global variables declared as {\em readonly}. It
+also has access to the names of all other chares (the ``global object
+space''), but all that it can do with those names is to send asynchronous
+remote method invocations to other chare objects. (Of course, the
+instance variables can include as many other regular C++ objects as
+it ``has'', but no chare objects: it can only have references to other
+chare objects.)
+
+In accordance with this vision, the first part of the manual (up to
+and including the chapter on load balancing) has almost no mention of
+entities with physical meanings (cores, nodes, etc.). The runtime
+system is responsible for the magic of keeping closely communicating
+objects on nearby physical locations, and optimizing communication
+among chares on the same node or core by exploiting the physically
+available shared memory. The programmer does not have to deal with
+this at all. The only exceptions to this pure model in the basic part
+are the functions used for finding out which ``processor'' an object
+is running on, and for finding out how many total processors there are.
+
+However, for implementing lower level libraries and certain optimizations,
+programmers need to be aware of processors. In any case, it is useful
+to understand how the \charmpp{} implementation works under the hood, so
+we describe the machine model and some associated terminology here.
+
+In terms of physical resources, we assume the parallel machine
+consists of one or more {\em nodes}, where a node is the largest unit
+over which cache coherent shared memory is feasible (and is therefore
+the maximal set of cores on which a single process {\em can} run).
+Each node may include one or more processor chips, with shared or
+private caches between them. Each chip may contain multiple cores, and
+each core may support multiple hardware threads (SMT, for example).
+
+\charmpp{} recognizes two logical entities: a PE (processing element) and
+a logical ``node''. In a \charmpp{} program, a PE is a
+unit of mapping and scheduling: each PE has a scheduler with an
+associated pool of messages. Each chare is assumed to reside on one PE
+at a time. Depending on the runtime command-line parameters, a PE may
+be associated with a subset of cores or hardware threads. One or more PEs
+make up a logical ``node'', the unit into which one may partition a
+physical node. In the implementation, a separate
+process exists for each logical node and all PEs within the logical node share
+the same memory address space. The \charmpp{} runtime system optimizes
+communication within a logical node by using shared memory.
+%one may partition a
+%physical node into one or more logical nodes. What \charmpp{} calls a
+%``node'' is this logical node.
+
+For example, on a machine with 16-core nodes, where each core has two
+hardware threads, one may launch a \charmpp{} program with one or multiple
+(logical) nodes per physical node. One may choose 32 PEs per (logical) node,
+and one logical node per physical node. Alternatively, one can launch
+it with 12 PEs per logical node, and 1 logical node per physical
+node. One can also choose to partition the physical node, and launch
+it with 4 logical nodes per physical node (for example), and 4 PEs per
+node. It is not general practice in \charmpp{} to oversubscribe the underlying
+physical cores or hardware threads on each node. In other words, a
+\charmpp{} program is usually not launched with more PEs than there
+are physical cores or hardware threads allocated to it. More information about
+these launch time options is provided in Appendix~\ref{sec:run}.
+And utility functions to retrieve the information about those +\charmpp{} logical machine entities in user programs can be refered +in section~\ref{basic utility fns}. index c1146a3f2ecadf904cfe855f9d73eaa4031bb16d..b9033b0461f1a03fbbc0e10099c2642456317d72 100644 (file) @@ -1,4 +1,4 @@ -\documentclass[10pt]{article} +\documentclass[10pt]{report} \usepackage{../pplmanual} \input{../pplmanual} \usepackage{html} \usepackage{listings} \providecommand{\lstset}[2][]{} \end{htmlonly} - -\title{The\\ \charmpp\\ Programming Language\\ Manual} \version{6.0 - (Release 1)} \credits{ {\small The Charm software was developed as a - group effort. The earliest prototype, Chare Kernel(1.0), was - developed by Wennie Shu and Kevin Nomura working with Laxmikant - Kale. The second prototype, Chare Kernel(2.0), a complete - re-write with major design changes, was developed by a team - consisting of Wayne Fenton, Balkrishna Ramkumar, Vikram Saletore, - Amitabh B. Sinha and Laxmikant Kale. The translator for Chare - Kernel(2.0) was written by Manish Gupta. Charm(3.0), with - significant design changes, was developed by a team consisting of - Attila Gursoy, Balkrishna Ramkumar, Amitabh B. Sinha and - Laxmikant Kale, with a new translator written by Nimish Shah. The - \charmpp\ implementation was done by Sanjeev Krishnan. Charm(4.0) - included \charmpp\ and was released in fall 1993. Charm(4.5) was - developed by Attila Gursoy, Sanjeev Krishnan, Milind Bhandarkar, - Joshua Yelon, Narain Jagathesan and Laxmikant Kale. Charm(4.8), - developed by the same team included Converse, a parallel runtime - system that allows interoperability among modules written using - different paradigms within a single application. \charmpp\ runtime - system was re-targetted at Converse. Syntactic extensions in - \charmpp\ were dropped, and a simple interface translator was - developed (by Sanjeev Krishnan and Jay DeSouza) that, along with - the \charmpp\ runtime, became the \charmpp\ language. 
Charm - (5.4R1) included the following: a complete rewrite of the - \charmpp\ runtime system (using \CC) and the interface translator - (done by Milind Bhandarkar), several new features such as Chare - Arrays (developed by Robert Brunner and Orion Lawlor), various - libraries (written by Terry Wilmarth, Gengbin Zheng, Laxmikant - Kale, Zehra Sura, Milind Bhandarkar, Robert Brunner, and Krishnan - Varadarajan.) A coordination language Structured Dagger'' was - been implemented on top of \charmpp\ (Milind Bhandarkar), dynamic - seed-based load balancing (Terry Wilmarth and Joshua Yelon), a - client-server interface for Converse programs, and debugging - support by Parthasarathy Ramachandran, Jeff Wright, and Milind - Bhandarkar, Projections, the performance visualization and - analysis tool, was redesigned and rewritten using Java by Michael - Denardo. The test suite for \charmpp\ was developed by Michael - Lang, Jackie Wang, and Fang Hu. Converse was been ported to ASCI - Red (Joshua Yelon), Cray T3E (Robert Brunner), and SGI Origin2000 - (Milind Bhandarkar). For the current version Charm 6.0 (R1), - Converse has been ported to new platforms including BlueGene/[LP] - (Kumar, Huang, Bhatele), Cray XT3/4 (Zheng), Apple G5, Myrinet - (Zheng), and Infiniband (Chakravorty). Charm 6.0 introduces a - dedicated no network SMP multicore Converse layer for stand-alone - workstation experimenters (Zheng, Chakravorty, Kale, Jetley). - Charm 6.0 also includes cross platform network topology aware - chare placement for 3D tori and mesh networks (Kumar, Huang, - Bhatele, Bohm). The test suite was extended for automated testing - on all supported platforms by Gengbin Zheng. The Projection tool - was substantially improved by Chee Wai Lee and Isaac Dooley. The - Control Point performance tuning framework was created by Isaac - Dooley. Debugging support was enhanced with memory inspection - features by Filippo Gioachin. 
The Charisma orchestration language - was implemented on top of Charm++ by Chao Huang and Sanjay Kale. - Sanjay Kale, Orion Lawlor, Gengbin Zheng, Terry Wilmarth, Filippo - Gioachin, Sayantan Chakravorty, Chao Huang, David Kunzman, Isaac - Dooley, Eric Bohm, Sameer Kumar, Chao Mei, Pritish Jetley, and - Abhinav Bhatele, have been responsible for the changes to the - system since the last release. } } - \begin{document} +\title{The\\ \charm\\ Parallel Programming System\\ Manual} +\version{6.4.0} +\credits{\hspace{0 in}} \maketitle -\input{intro} +\chapter{Basic Concepts} +\input{intro} \input{overview} +\input{machineModel} + +\part{Basic \charm Programming} -\section{The \charmpp\ Language} +\chapter{Program Structure, Compilation and Utilities} \input{modules} - \input{entry} + \input{utilities} + \input{helloworld} + +\chapter{Basic Syntax} \input{marshalling} - \input{messages} - \input{order.tex} \input{chares} - \input{readonly} + \input{readonly} + +\chapter{Chare Arrays} \input{arrays} + +\chapter{Structured Control Flow: Structured Dagger} +\label{sec:sdag} + \input{sdag} + +\chapter{Serialization Using the PUP Framework} + \input{pup} + +\chapter{Load Balancing} +\label{loadbalancing} + \input{loadb} + +\chapter{Processor-Aware Chare Collections} \input{groups} \input{nodegroups} - \input{loadb} - \input{advancedlb} + +\chapter{Initializations at Program Startup} + \input{startuporder} + +\part{Advanced Programming Techniques} + +\chapter{Optimizing Entry Method Invocation} + \input{messages} + \input{entry} + \input{order} + +\chapter{Callbacks} + \input{callbacks} + +\chapter{Waiting for Completion} + %\section{Asynchronous Barriers} + \input{threaded} + \input{sync} \input{futures} \input{quiesce} + +\chapter{More Chare Array Features} +\label{advanced arrays} + \input{advancedarrays} + +\chapter{Sections: Subsets of a Chare Array} +\label{array section} + \input{sections} + +\chapter{Chare Inheritance and Templates} +\label{inheritance and templates} 
+ \input{inhertmplt} + +\chapter{Collectives} \input{reductions} - \input{callbacks} - \input{pup} - \input{io} - \input{othercalls} - \input{delegation} - \input{commlib} - \input{alltoall} +% \input{alltoall} TO BE UPDATED WITH NDMESHSTREAMER INTERFACE + +\chapter{Serializing Complex Types} + \input{advancedpup} + +\chapter{Querying Network Topology} +\index{topomanager} +\index{cputopology} +\label{topo} + \input{topology} + +\chapter{Checkpoint/Restart-Based Fault Tolerance} +\index{Checkpoint/Restart} +\label{sec:checkpoint} + \input{checkpoint} + +%chapter{Managing Hardware Heterogeneity} +%\index{accel} +%\label{sec:hetero} +% input{hetero} + + +\part{Expert-Level Functionality} + +\chapter{Tuning and Developing Load Balancers} +\label{advancedlb} + \input{advancedlb} + +\chapter{Dynamic Code Injection} +\label{python} \input{python} -\input{inhertmplt} -\input{msa.tex} +\chapter{Intercepting Messages via Delegation} +\index{Delegation} +\label{delegation} + \input{delegation} + +% TO BE MOVED TO ITS OWN MANUAL +%\input{msa.tex} + -\input{checkpoint} +\part{Experimental Features} -\input{controlpoints} +\chapter{Control Point Automatic Tuning} +\index{Control Point Automatic Tuning} +\label{sec:controlpoint} + \input{controlpoints} +\chapter{Support for Loop-level Parallelism} +\index{ckloop} +\label{sec:ckloop} +\input{ckloop} +\chapter{Charm-MPI Interoperation} +\index{MPI Interoperation} +\label{sec:interop} + \input{mpi-interop} + +\part{Appendix} \appendix -\input{sdag} +% TO BE MOVED TO ITS OWN MANUAL +%\input{quickbigsim} + +\chapter{Installing \charm} +\label{sec:install} + \index{build} + \input{install} + +\chapter{Compiling \charm Programs} +\label{sec:compile} + \index{charmc} + \input{compile} + +\chapter{Running \charm Programs} +\label{sec:run} + \index{charmrun} + \input{run} + +\chapter{Performance Tracing for Analysis} +\index{projections} +\label{sec:trace-projections} + \input{../projections/tracing} + -\input{quickbigsim} 
+\chapter{History}
+  \input{history}
-\input{further}
+\chapter{Acknowledgements}
+  \input{credits}
 \input{index}
index d567f950649fa24f70c9b70f6552bc1e0d64be44..41cb79a525fb610a9aea9e58b6e8dc1bd2450f6d 100644 (file)
@@ -1,16 +1,27 @@
-\subsection{Parameter Marshalling}
+\section{Entry Methods}
+\label{entry}
+
+Member functions in the user program which function as entry methods have to be
+defined in public scope within the class definition.
+Entry methods typically do not return data and have a ``void'' return type.
+An entry method with the same name as its enclosing class is a constructor entry method
+and is used to create or spawn chare objects during execution.
+Class member functions are annotated as entry methods by declaring them in
+the interface file as:
+\begin{alltt}
+entry void \uw{Entry1}(\uw{parameters});
+\end{alltt}
+
+\uw{Parameters} is either a list of serializable parameters (e.g., ``int i,
+double x''), or a message type (e.g., ``MyMessage *msg'').
+Since parameters get marshalled into a message before being sent across the
+network, in this manual we use ``message'' to mean either a message type or a
+set of marshalled parameters.
-\label{marshalling}
+%Constructors in \CC have no return type.
+%Finally, sync methods, described below, may return a message.
-\experimental{}
-In \charmpp, \index{chare}chares, \index{group}groups and \index{nodegroup}
-nodegroups communicate by invoking each others methods.
-The methods may either take several parameters, described here;
-or take a special message object as described in the next section.
-Since parameters get marshalled into a message before being
-sent across the network, in this manual we use ``message''
-to mean either a literal message object or a set of marshalled
-parameters.
+Messages are lower level, more efficient, and more flexible to use than parameter marshalling.
For example, a chare could have this entry method declaration in the interface ({\tt .ci}) file: @@ -18,7 +29,7 @@ the interface ({\tt .ci}) file: entry void foo(int i,int k); \end{alltt} Then invoking foo(2,3) on the chare proxy will eventually -invoke foo(2,3) on the remote chare. +invoke foo(2,3) on the chare object. Since \charmpp\ runs on distributed memory machines, we cannot pass an array via a pointer in the usual \CC\ way. Instead, @@ -67,67 +78,8 @@ pup routine, a PUPbytes'' declaration, or a working operator|. See the PUP description in Section~\ref{sec:pup} for more details on these routines. -\begin{alltt} -//Declarations: -class point3d \{ -public: - double x,y,z; - void pup(PUP::er &p) \{ - p|x; p|y; p|z; - \} -\}; - -typedef struct \{ - int refCount; - char data[17]; -\} refChars; -PUPbytes(date); - -class date \{ -public: - char month,day; - int year; - //...non-virtual manipulation routines... -\}; -inline void operator|(PUP::er &p,date &d) \{ - p|d.month; p|d.day; - p|d.year; -\} - -//In the .ci file: - entry void pointRefOnDate(point3d &p,refChars r[d.year],date &d); -\end{alltt} - Any user-defined types in the argument list must be declared before including the .decl.h'' file. As usual in \CC, it is often dramatically more efficient to pass -a large structure by reference (as shown) than by value. - -For efficiency, arrays (like \uw{refChars} above) are always copied -as blocks of bytes and passed via pointers. This means classes -that need their pup routines to be called, such as those with dynamically -allocated data or virtual methods cannot be passed as arrays--use CkVec -or STL vectors to pass lists of complicated user-defined classes. -For historical reasons, pointer-accessible structures -cannot appear alone in the parameter list (because they are confused -with messages). - -The order of marshalling operations on the send side is: -\begin{itemize} -\item Call p\verb.|.a'' on each marshalled parameter with a sizing PUP::er. 
-\item Compute the lengths of each array. -\item Call p\verb.|.a'' on each marshalled parameter with a packing PUP::er. -\item \kw{memcpy} each arrays' data. -\end{itemize} - -The order of marshalling operations on the receive side is: -\begin{itemize} -\item Create an instance of each marshalled parameter using its default constructor. -\item Call p\verb.|.a'' on each marshalled parameter using an unpacking PUP::er. -\item Compute pointers into the message for each array. -\end{itemize} +a large structure by reference than by value. -Finally, very large structures are most efficiently passed via messages, -because messages are an efficient, low-level construct that minimizes copying -and overhead; but very complicated structures are easiest to pass via -marshalling, because marshalling uses the high-level pup framework. index 95f945ff51099f69a608b12e577dca8901837aeb..5202e9724e5d00fdfa7fa5945f1322d32928c25d 100644 (file) -\subsection{Messages} +\section{Messages} \label{messages} +Although \charmpp{} supports automated parameter marshalling for entry methods, +you can also manually handle the process of packing and unpacking parameters by +using messages. +%By using messages, you can potentially improve performance by +%avoiding unnecessary copying. A message encapsulates all the parameters sent to an entry method. Since the parameters are already encapsulated, -sending messages is often more efficient than parameter marshalling. -In addition, messages are easier to queue and store on the -receive side. - -The largest difference between parameter marshalling and messages -is that entry methods {\em keep} the messages passed to them. -Thus each entry method must be passed a {\em new} message. -On the receiving side, the entry method must either store the -passed message or explicitly {\em delete} it, or else the message -will never be destroyed, wasting memory. 
+sending messages is often more efficient than parameter marshalling, and
+can help to avoid unnecessary copying.
+Moreover, messages are useful when the receiver is unable to process the contents of a
+message at the time that it receives it. For example, consider a
+tiled matrix multiplication program, wherein each chare receives an $A$-tile
+and a $B$-tile before computing a partial result for $C = A \times B$. If we
+were using parameter marshalled entry methods, a chare would have to copy the first
+tile it received, in order to save it for when it has both of the tiles it needs.
+Then, upon receiving the second
+tile, the chare would use the second tile and the first (saved) tile to
+compute a partial result. However, using messages, we would just save a {\em pointer}
+to the message encapsulating the tile received first, instead of copying the tile data itself.
+
+\vspace{0.1in}
+\noindent
+{\bf Managing the memory buffer associated with a message.}
+As suggested in the example above, the biggest difference between marshalled parameters and messages
+is that an entry method invocation is assumed to {\em keep} the message that it
+is passed. That is, the \charmpp{} runtime system assumes that code in the body of the invoked
+entry method will explicitly manage the memory associated with the message that it is passed. Therefore,
+in order to avoid leaking memory, the body of an entry method must either \kw{delete} the message that
+it receives, or save a pointer to it and \kw{delete} it at a later point in the execution of the code.
+%is code written for the body of an
+%either store the passed message or explicitly {\em delete} it, or else the message
+%will never be destroyed, wasting memory.
+
+Moreover, in the \charm{} execution model, once you pass a message buffer to the runtime system (via
+an asynchronous entry method invocation), you should {\em not} reuse the buffer.
That is, after you have
+passed a message buffer into an asynchronous entry method invocation, you shouldn't
+access its fields, or pass that same buffer into a second entry method invocation. Note that this rule
+doesn't preclude the {\em single reuse} of an input message: consider an entry method invocation
+$i_1$, which receives as input the message buffer $m_1$. Then, $m_1$ may be passed to an
+asynchronous entry method invocation $i_2$. However, once $i_2$ has been issued with $m_1$ as its input
+parameter, $m_1$ cannot be used in any further entry method invocations.
+%message buffer, that message buffer may in turn be passed to an entry method invocation that accepts a
+%message of the same type. However,
+%Thus each entry method must be passed a {\em new} message.

Several kinds of message are available.
Regular \charmpp{} messages are objects of
\textit{fixed size}. One can have messages that contain pointers or variable
length arrays (arrays with sizes specified at runtime) and still have these
-pointers to be valid when messages are sent across processors, with some
+pointers remain valid when messages are sent across processors, with some
additional coding.  Also available is a mechanism for assigning
-\textit{priorities} to messages that applies all kinds of messages.
+\textit{priorities} to a message regardless of its type.
A detailed discussion of priorities appears later in this section.

-Like all other entities involved in asynchronous method invocation, messages
-need to be declared in the {\tt .ci} file. In the {\tt .ci} file (the
-interface file), a message is declared as:
+\subsection{Message Types}
+
+\smallskip
+\noindent {\bf Fixed-Size Messages.}
+The simplest type of message is a {\em fixed-size} message. The size of each data member
+of such a message should be known at compile time. Therefore, such a message may encapsulate
+primitive data types, user-defined data types that {\em don't} maintain pointers to memory
+locations, and {\em static} arrays of the aforementioned types.
+
+\smallskip
+\noindent {\bf Variable-Size Messages.}
+%An ordinary message in \charmpp\ is a fixed size message that is allocated
+%internally with an envelope which encodes the size of the message.
+Very often,
+the size of the data contained in a message is not known until runtime.
+%One can
+%use packed\index{packed messages} messages to alleviate this problem.  However,
+%it requires multiple memory allocations (one for the message, and another for
+%the buffer.)
+For such scenarios, you can use variable-size (\emph{varsize}) messages.
+A {\em varsize} message can encapsulate several arrays,
+each of whose size is determined at run time.
+%In \emph{varsize} messages,
+The space required for these encapsulated, variable length arrays
+is allocated such that the entire message comprises a
+contiguous buffer of memory.
+%message such that it is contiguous to the message.
+
+\smallskip
+\noindent {\bf Packed Messages.} A {\em packed} message is used to communicate non-linear
+data structures via messages. However, we defer a more detailed description of its use
+to \S~\ref{sec:messages/packed_msgs}.
+
+\subsection{Using Messages In Your Program}
+
+There are five steps to incorporating a (fixed or varsize) message type in your \charmpp{} program:
+(1) Declare message type in \kw{.ci} file; (2) Define message type in \kw{.h} file;
+(3) Allocate message; (4) Pass message to an asynchronous entry method invocation; and (5) Deallocate
+message to free associated memory resources.
+
+\medskip
+\noindent {\bf Declaring Your Message Type.}
+Like all other entities involved in asynchronous entry method invocation, messages
+must be declared in the {\tt .ci} file.
+This allows the \charmpp{} translator
+to generate support code for messages.
+Message declaration is straightforward for fixed-size messages. Given a
+message of type {\tt MyFixedSizeMsg}, simply include the following in the \kw{.ci} file:

\begin{alltt}
- message MessageType;
+ message MyFixedSizeMsg;
\end{alltt}

-%A message that contains variable length arrays is declared as:
-%
-%\begin{alltt}
-% message MessageType \{
-%   type1 var_name1[];
-%   type2 var_name2[];
-%   type3 var_name3[];
-%\};
-%\end{alltt}
-%
-If the name of the message class is \uw{MessageType}, the class must inherit
-publicly from a class whose name is \uw{CMessage\_MessageType}. This class
-is generated by the charm translator. Then message definition has the form:
+For varsize messages, the \kw{.ci} declaration must also include the names and
+types of the variable-length arrays that the message will encapsulate. The
+following example illustrates this requirement. In it, a message of type {\tt
+MyVarsizeMsg}, which encapsulates three variable-length arrays of different
+types, is declared:

\begin{alltt}
- class MessageType : public CMessage_MessageType \{
-    // List of data and function members as in \CC
+ message MyVarsizeMsg \{
+   int arr1[];
+   double arr2[];
+   MyPointerlessStruct arr3[];
\};
\end{alltt}

+\medskip
+\noindent {\bf Defining Your Message Type.}
+Once a message type has been declared to the \charmpp{} translator, its type definition must be provided.
+Your message type must inherit from a specific generated base class. If the type of
+your message is {\tt T}, then {\tt class T} must inherit from {\tt CMessage\_T}.
+This is true for both fixed and varsize messages.
+As an example, for our fixed size message
+type {\tt MyFixedSizeMsg} above, we might write the following in the \kw{.h} file:

-\subsubsection{Message Creation and Deletion}
+\begin{alltt}
+class MyFixedSizeMsg : public CMessage_MyFixedSizeMsg \{
+  int var1;
+  MyPointerlessStruct var2;
+  double arr3[10];

-\label{memory allocation}
+  // Normal C++ methods, constructors, etc. go here
+\};
+\end{alltt}

-\index{message}Messages are allocated using the \CC\ \kw{new} operator:
+In particular, note the inclusion of the static array of {\tt double}s, {\tt arr3}, whose size
+is known at compile time to be that of ten {\tt double}s.
+Similarly, for our example varsize message of type {\tt MyVarsizeMsg}, we would write something
+like:

\begin{alltt}
- MessageType *msgptr =
-  new [(int sz1, int sz2, ... , int priobits=0)] MessageType[(constructor arguments)];
+class MyVarsizeMsg : public CMessage_MyVarsizeMsg \{
+  // variable-length arrays
+  int *arr1;
+  double *arr2;
+  MyPointerlessStruct *arr3;
+
+  // members that are not variable-length arrays
+  int x,y;
+  double z;
+
+  // Normal C++ methods, constructors, etc. go here
+\};
\end{alltt}

-The optional arguments to the new operator are used when allocating messages
-with variable length arrays or \kw{prioritized} messages. \uw{sz1, sz2, ...}
-denote the size (in appropriate units) of the memory blocks that need to be
-allocated and assigned to the pointers that the message contains. The
-\uw{priobits} argument denotes the size of a bitfield (number of bits) that
-will be used to store the message priority.
+Note that the \kw{.h} definition of the class type must contain data members
+whose names and types match those specified in the \kw{.ci} declaration.  In
+addition, if any of the data members are \kw{private} or \kw{protected}, the class should
+declare class \uw{CMessage\_MyVarsizeMsg} to be a \kw{friend} class.  Finally,
+there are no limitations on the member methods of message classes, except that
+the message class may not redefine operators \texttt{new} or \texttt{delete}.

-For example, to allocate a message whose class declaration is:
+
+
+\medskip
+\noindent {\bf Creating a Message.}
+With the \kw{.ci} declaration and \kw{.h} definition in place, messages can be allocated and
+used in the program.
+\index{message}Messages are allocated using the \CC\ \kw{new} operator:

\begin{alltt}
-class Message : public CMessage_Message \{
-  // .. fixed size message
-  // .. data and method members
-\};
+ MessageType *msgptr =
+  new [(int sz1, int sz2, ... , int priobits=0)] MessageType[(constructor arguments)];
\end{alltt}

-do the following:
+The arguments enclosed within the square brackets are optional, and
+are used only when allocating messages
+with variable length arrays or prioritized messages.
+These arguments are not specified for fixed size messages.
+For instance, to allocate a message of our example message
+{\tt MyFixedSizeMsg}, we write:

\begin{alltt}
-Message *msg = new Message;
+MyFixedSizeMsg *msg = new MyFixedSizeMsg(<constructor args>);
\end{alltt}

-To allocate a message whose class declaration is:
+In order to allocate a varsize message, we must pass appropriate
+values to the arguments of the overloaded \kw{new} operator presented previously.
+Arguments \uw{sz1, sz2, ...}
+denote the size (in number of elements) of the memory blocks that need to be
+allocated and assigned to the pointers (variable-length arrays) that the message contains. The
+\uw{priobits} argument denotes the size of a bitvector (number of bits) that
+will be used to store the message priority.
+So, if we wanted to create {\tt MyVarsizeMsg} whose
+{\tt arr1},  {\tt arr2} and {\tt arr3} arrays contain
+10, 20 and 7 elements of their respective types, we would write:

\begin{alltt}
-class VarsizeMessage : public CMessage_VarsizeMessage \{
- public:
-  int *firstArray;
-  double *secondArray;
-\};
+MyVarsizeMsg *msg = new (10, 20, 7) MyVarsizeMsg(<constructor args>);
\end{alltt}

-do the following:
+%This allocates a \uw{VarsizeMessage}, in which \uw{firstArray} points to an
+%array of 10 ints and \uw{secondArray} points to an array of 20 doubles.  This
+%is explained in detail in later sections.
+
+Further, to add a 32-bit \index{priority}priority bitvector to this message, we would write:

\begin{alltt}
-VarsizeMessage *msg = new (10, 20) VarsizeMessage;
+MyVarsizeMsg *msg = new (10, 20, 7, sizeof(uint32_t)*8) MyVarsizeMsg(<constructor args>);
\end{alltt}

-This allocates a \uw{VarsizeMessage}, in which \uw{firstArray} points to an
-array of 10 ints and \uw{secondArray} points to an array of 20 doubles.  This
-is explained in detail in later sections.
+Notice the last argument to the overloaded \kw{new} operator, which specifies
+the number of bits used to store message priority.
+The section on prioritized execution (\S~\ref{prioritized message passing}) describes how
+priorities can be employed in your program.

-To add a \index{priority}priority bitfield to this message,
+Another version of the overloaded \kw{new} operator allows you to pass in
+an array containing the size of each variable-length array, rather than specifying
+individual sizes as separate arguments.
+For example, we could create a message of type {\tt MyVarsizeMsg} in the following manner:

\begin{alltt}
-VarsizeMessage *msg = new (10, 20, sizeof(int)*8) VarsizeMessage;
+int sizes[3];
+sizes[0] = 10;               // arr1 will have 10 elements
+sizes[1] = 20;               // arr2 will have 20 elements
+sizes[2] = 7;                // arr3 will have 7 elements
+
+MyVarsizeMsg *msg = new(sizes, 0) MyVarsizeMsg(<constructor args>); // 0 priority bits
\end{alltt}
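To make the contiguity guarantee concrete, the following plain \CC{} sketch
(not the \charmpp{} API or runtime implementation; the names {\tt Msg} and
{\tt allocVarsize} are hypothetical) shows how a message object and its
variable-length arrays can live in a single contiguous allocation, which is
essentially what varsize allocation arranges:

```cpp
#include <cassert>
#include <cstddef>
#include <cstdlib>
#include <new>

// Illustrative sketch only -- NOT the Charm++ runtime code.
// A "varsize" allocation places the message object and its variable-length
// arrays in one contiguous block, so the whole message occupies a single
// buffer. Names (Msg, allocVarsize) are hypothetical.
struct Msg {
    int    *arr1;
    double *arr2;
};

Msg *allocVarsize(std::size_t n1, std::size_t n2) {
    // One allocation laid out as [Msg | n1 ints | n2 doubles]. This sketch
    // ignores general alignment handling; for the sizes used below the
    // offsets happen to be suitably aligned.
    std::size_t bytes = sizeof(Msg) + n1 * sizeof(int) + n2 * sizeof(double);
    char *raw = static_cast<char *>(std::malloc(bytes));
    Msg *m = new (raw) Msg;                       // construct header in place
    m->arr1 = reinterpret_cast<int *>(raw + sizeof(Msg));
    m->arr2 = reinterpret_cast<double *>(raw + sizeof(Msg) + n1 * sizeof(int));
    return m;
}
```

Freeing the single block releases the header and both arrays at once, which is
why no special handling is needed when deleting varsize messages.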

-Note, you must provide number of bits which is used to store the priority as
-the \uw{priobits} parameter. The section on prioritized execution describes how
-this bitfield is used.
+%In Section~\ref{message packing} we explain how messages can contain arbitrary
+%pointers, and how the validity of such pointers can be maintained across
+%processors in a distributed memory machine.

-In Section~\ref{message packing} we explain how messages can contain arbitrary
-pointers, and how the validity of such pointers can be maintained across
-processors in a distributed memory machine.

-When a message \index{message} is sent to a \index{chare}chare, the programmer
-relinquishes control of it; the space allocated to the message is freed by the
-system.  When a message is received at an entry point it is not freed by the
-runtime system.  It may be reused or deleted by the programmer.  Messages can
-be deleted using the standard \CC{} \kw{delete} operator.
+\medskip
+\noindent {\bf Sending a Message.}
+Once we have a properly allocated message,
+we can set the various elements of the encapsulated arrays in the following manner:

-There are no limitations of the methods of message classes except that the
-message class may not redefine operators \texttt{new} or \texttt{delete}.
+\begin{alltt}
+  msg->arr1[13] = 1;
+  msg->arr2[5] = 32.82;
+  msg->arr3[2] = MyPointerlessStruct();
+  // etc.
+\end{alltt}

+The message can then be passed to an asynchronous entry method invocation, thereby
+sending it to the corresponding chare:

+\begin{alltt}
+myChareArray[someIndex].foo(msg);
+\end{alltt}
+
+When a message \index{message} is {\em sent}, i.e., passed to an asynchronous
+entry method invocation, the programmer relinquishes control of it; the space
+allocated for the message is freed by the runtime system.  However, when a
+message is {\em received} at an entry point, it is {\em not} freed by the
+runtime system.  As mentioned at the start of this section, received
+messages may be reused or deleted by the programmer.  Finally, messages are
+deleted using the standard \CC{} \kw{delete} operator.
+
+\zap{
\subsubsection{Messages with Variable Length Arrays}

\label{varsize messages}
\index{variable size messages}
\index{varsize message}

-An ordinary message in \charmpp\ is a fixed size message that is allocated
-internally with an envelope which encodes the size of the message. Very often,
-the size of the data contained in a message is not known until runtime. One can
-use packed\index{packed messages} messages to alleviate this problem.  However,
-it requires multiple memory allocations (one for the message, and another for
-the buffer.) This can be avoided by making use of a \emph{varsize} message.
-In \emph{varsize} messages, the space required for these variable length arrays
-is allocated with the message such that it is contiguous to the message.
-
Such a message is declared as

\begin{alltt}
@@ -154,12 +288,7 @@ Such a message is declared as
\};
\end{alltt}

-in \charmpp\ interface file. The class \uw{mtype} has to inherit from
-\uw{CMessage\_mtype}. In addition, it has to contain variables of corresponding
-names pointing to appropriate types. If any of these variables (data members)
-are private or protected, it should declare class \uw{CMessage\_mtype} to be a
-friend'' class. Thus the \uw{mtype} class declaration should be similar to:
-
+in \charmpp\ interface file.
\begin{alltt}
class mtype : public CMessage_mtype \{
private:
@@ -231,24 +360,12 @@ p->firstArray[2] = 13;     // the arrays have already been allocated
p->secondArray[4] = 6.7;
\end{alltt}

-Another way of allocating a varsize message is to pass a \uw{sizes} in an array
-instead of the parameter list. For example,
-
-\begin{alltt}
-int sizes[2];
-sizes[0] = 4;               // firstArray will have 4 elements
-sizes[1] = 5;               // secondArray will have 5 elements
-VarsizeMessage* p = new(sizes, 0) VarsizeMessage;
-p->firstArray[2] = 13;     // the arrays have already been allocated
-p->secondArray[4] = 6.7;
-\end{alltt}
-
\hrule
\normalsize
-
No special handling is needed for deleting varsize messages.
+} % end zap

-\subsubsection{Message Packing}
+\subsection{Message Packing}

\label{message packing}
\index{message packing}
@@ -335,6 +452,7 @@ offsets from the address of the pointer variable to the start of the pointed-to
data.  Unpacking restores them to pointers.

\subsubsection{Custom Packed Messages}
+\label{sec:messages/packed_msgs}

\index{packed messages}

@@ -453,7 +571,7 @@ PackedMessage::unpack(void* inbuf)
int num_nodes;
memcpy(&num_nodes, buf, sizeof(int));
buf = buf + sizeof(int);
-  // allocate the message through charm kernel
+  // allocate the message through Charm RTS
PackedMessage* pmsg =
(PackedMessage*)CkAllocBuffer(inbuf, sizeof(PackedMessage));
// call "inplace" constructor of PackedMessage that calls constructor
index 9c715ff49e65fb619af7c13e9b3d4b24b86e6242..036a683ac027cee42af6bb76bda293b4e7797189 100644 (file)
-\subsection{Modules}
-
-\subsubsection{Structure of a \charmpp\ Program}
-
-A \charmpp\ program is structurally similar to a \CC{} program.  Most of a
-\charmpp\ program {\em is} \CC{} code.\footnote{\bf Constraint: The \CC{} code
-cannot, however, contain global or static variables.} The main syntactic units
-in a \charmpp\ program are class definitions. A \charmpp\ program can be
-distributed across several source code files.
-
-There are five disjoint categories of objects (classes) in \charmpp:
-
-\begin{itemize}
-\item Sequential objects: as in \CC{}
-\item Chares (concurrent objects) \index{chare}
-\item Chare Groups \index{chare groups} (a form of replicated objects)
-\index{group}
-\item Chare Arrays \index{chare arrays} (an indexed collection of chares)
-\index{array}
-\item Messages (communication objects)\index{message}
-\end{itemize}
-
-The user's code is written in \CC{} and interfaces with the \charmpp\ system as
-if it were a library containing base classes, functions, etc.  A translator is
-used to generate the special code needed to handle \charmpp\ constructs.  This
-translator generates \CC{} code that needs to be compiled with the user's code.
-
-Interfaces to the \charmpp\ objects (such as messages, chares, readonly
-variables etc.) \index{message}\index{chare}\index{readonly} have to be
-declared in \charmpp\ interface files. Typically, such entities are grouped
-\index{module} into {\em modules}. A \charmpp\ program may consist of multiple
-modules.  One of these modules is declared to be a \kw{mainmodule}. All the
-modules that are reachable'' from the \kw{mainmodule} via the \kw{extern}
-construct are included in a \charmpp\ program.
-
-The \charmpp\ interface file has the suffix .ci''.  The \charmpp\ interface
-translator parses this file and produces two files (with suffixes .decl.h''
-and .def.h'', {\em for each module declared in the .ci'' file}), that
-contain declarations (interface) and definitions (implementation)of various
-translator-generated entities. If the name of a module is \uw{MOD}, then the
-files produced by the \charmpp\ interface translator are named \uw{MOD.decl.h}
-and \uw{MOD.def.h}.\footnote{Note that the interface file for module \uw{MOD}
-need not be named \uw{MOD.ci}. Indeed one .ci'' file may contain interface
-declarations for multiple modules, and the translator will produce one pair of
-declaration and definition files for each module.}  We recommend that the
-declarations header file be included at the top of the header file (\uw{MOD.h})
-for module \uw{MOD}, and the definitions file be included at the bottom of the
-code for module (\uw{MOD.C}).\footnote{In the earlier version of interface
-translator, these files used to be suffixed with .top.h'' and .bot.h'' for
-this reason.}
-
-A simple \charmpp\ program is given below:
+A \charm program is essentially a \CC program in which some components describe
+its parallel structure. Sequential code can be written in any programming
+language that cooperates with the \CC toolchain, including C and
+Fortran. Parallel entities in the user's code are written in \CC{}. These
+entities interact with the \charm framework via inherited classes and function
+calls.
+
+
+\section{.ci Files}
+\index{ci}
+All user program components that comprise its parallel interface (such as
+messages, chares, entry methods, etc.) are granted this elevated status by
+declaring or describing them in separate \emph{charm interface} description
+files. These files have a \emph{.ci} suffix and adopt a \CC-like declaration
+syntax with several additional keywords. In some declaration contexts, they
+may also contain some sequential \CC source code.
+%that is embedded unmodified into the generated code.
+\charm parses these interface descriptions and generates \CC code (base
+classes, utility classes, wrapper functions etc.) that facilitates the
+interaction of the user program's entities with the framework.  A program may
+have several interface description files.
+
+
+\section{Modules}
+\index{module}
+The top-level construct in a \ci file is a named container for interface
+declarations called a \kw{module}. Modules allow related declarations to be
+grouped together, and cause generated code for these declarations to be grouped
+into files named after the module. Modules cannot be nested, but each \ci file
+can have several modules. Modules are specified using the keyword \kw{module}.

\begin{alltt}
-///////////////////////////////////////
-// File: pgm.ci
-
-mainmodule Hello \{
-  mainchare HelloMain \{
-    entry HelloMain(); // implicit CkArgMsg * as argument
-    entry void PrintDone(void);
-  \};
-  group HelloGroup \{
-    entry HelloGroup(void);
-  \};
+module myFirstModule \{
+    // Parallel interface declarations go here
+    ...
\};
+\end{alltt}
+
+
+\section{Generated Files}
+\index{decl}\index{def}
+
+Each module present in a \ci file is parsed to generate two files. The basename
+of these files is the same as the name of the module and their suffixes are
+\emph{.decl.h} and \emph{.def.h}. For example, the module defined earlier will
+produce the files ``myFirstModule.decl.h'' and ``myFirstModule.def.h''. As the
+suffixes indicate, they contain, respectively, the declarations and definitions
+of all the classes and functions that are generated based on the parallel
+interface description.
+
+We recommend that the header file containing the declarations (decl.h) be
+included at the top of the files that contain the declarations or definitions
+of the user program entities mentioned in the corresponding module. The def.h
+is not actually a header file because it contains definitions for the generated
+entities. To avoid multiple definition errors, it should be compiled into just
+one object file. A convention we find useful is to place the def.h file at the
+bottom of the source file (.C, .cpp, .cc etc.) which includes the definitions
+of the corresponding user program entities.
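Schematically, a source file for a module named \uw{MOD} would follow the
layout below (the file and module names are placeholders, not a prescribed
convention):

```
// MOD.C -- schematic layout; "MOD" is a placeholder module name.

#include "MOD.decl.h"   // generated declarations, included at the top

// ... definitions of the user program entities declared in module MOD ...

#include "MOD.def.h"    // generated definitions; since this is compiled into
                        // exactly one object file, include it at the bottom
                        // of exactly one source file
```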
+
+\experimental
+It should be noted that the generated files have no dependence on the name of the \ci
+file, but only on the names of the modules. This can make automated dependency-based
+build systems slightly more complicated. We adopt some conventions to ease this process.
+This is described in~\ref{AppendixSectionDescribingPhilRamsWorkOnCi.stampAndCharmc-M}.
+
+
+\section{Module Dependencies}
+\index{extern}

-////////////////////////////////////////
-// File: pgm.h
-#include "Hello.decl.h" // Note: not pgm.decl.h
+A module may depend on the parallel entities declared in another module. It can
+express this dependency using the \kw{extern} keyword. \kw{extern}ed modules
+do not have to be present in the same \ci file.

-class HelloMain : public CBase_HelloMain \{
-  public:
-    HelloMain(CkArgMsg *);
-    void PrintDone(void);
-  private:
-    int count;
+\begin{alltt}
+module mySecondModule \{
+
+    // Entities in this module depend on those declared in another module
+    extern module myFirstModule;
+
+    // More parallel interface declarations
+    ...
\};
+\end{alltt}

-class HelloGroup: public Group \{
-  public:
-    HelloGroup(void);
+The \kw{extern} keyword places an include statement for the decl.h file of the
+\kw{extern}ed module in the generated code of the current module. Hence,
+decl.h files generated from \kw{extern}ed modules are required during the
+compilation of the source code for the current module. This is usually required
+anyway because of the dependencies between user program entities across the two
+modules.
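Schematically, the effect of the \kw{extern} above on the generated header is
an include of the depended-upon module's decl.h (the actual generated contents
are more elaborate; this excerpt shows only the dependency):

```
// mySecondModule.decl.h (generated; schematic excerpt)
#include "myFirstModule.decl.h"  // pulled in by "extern module myFirstModule;"
```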
+
+\section{The Main Module and Reachable Modules}
+\index{mainmodule}
+
+\charm software can contain several module definitions from several
+independently developed libraries / components. However, the user program must
+specify exactly one module as containing the starting point of the program's
+execution. This module is called the \kw{mainmodule}. Every \charm program
+has to contain precisely one \kw{mainmodule}.
+
+All modules that are ``reachable'' from the \kw{mainmodule} via a chain of
+\kw{extern}ed module dependencies are included in a \charm program. More
+precisely, during program execution, the \charm runtime system will recognize
+only the user program entities that are declared in reachable modules. The
+decl.h and def.h files may be generated for other modules, but the runtime
+system is not aware of entities declared in such unreachable modules.
+
+\begin{alltt}
+module A \{
+    ...
\};

-/////////////////////////////////////////
-// File: pgm.C
-#include "pgm.h"
+module B \{
+    extern module A;
+    ...
+\};

-CProxy_HelloMain mainProxy;
+module C \{
+    extern module A;
+    ...
+\};

-HelloMain::HelloMain(CkArgMsg *msg) \{
-  delete msg;
-  count = 0;
-  mainProxy = thisProxy;
-  CProxy_HelloGroup::ckNew(); // Create a new "HelloGroup"
-\}
+module D \{
+    extern module B;
+    ...
+\};
+
+module E \{
+    ...
+\};

-void HelloMain::PrintDone(void) \{
-  count++;
-  if (count == CkNumPes()) \{ // Wait for all group members to finish the printf
-    CkExit();
-  \}
-\}
+mainmodule M \{
+    extern module C;
+    extern module D;
+    // Only modules A, B, C and D are reachable and known to the runtime system
+    // Module E is unreachable via any chain of externed modules
+    ...
+\};
+\end{alltt}

-HelloGroup::HelloGroup(void) \{
-  ckout << "Hello World from processor " << CkMyPe() << endl;
-  mainProxy.PrintDone();
-\}

-#include "Hello.def.h" // Include the Charm++ object implementations
+\index{include}

-/////////////////////////////////////////
-// File: Makefile
+There can be occasions where code generated from the module definitions
+requires other declarations / definitions in the user program's sequential
+code. Usually, this can be achieved by placing such user code before the point
+of inclusion of the decl.h file. However, this can become laborious if the
+decl.h file has to be included in several places. \charm supports the keyword
+\kw{include} in \ci files to permit the inclusion of any header directly into
+the generated decl.h files.

-pgm: pgm.ci pgm.h pgm.C
-      charmc -c pgm.ci
-      charmc -c pgm.C
-      charmc -o pgm pgm.o -language charm++
+\begin{alltt}
+module A \{
+    include "myUtilityClass.h"; //< Note the semicolon
+    // Interface declarations that depend on myUtilityClass
+    ...
+\};

+module B \{
+    include "someUserTypedefs.h";
+    // Interface declarations that require user typedefs
+    ...
+\};
+
+module C \{
+    extern module A;
+    extern module B;
+    // The user includes will be indirectly visible here too
+    ...
+\};
\end{alltt}

-\uw{HelloMain} is designated a \kw{mainchare}. Thus the Charm Kernel starts
-execution of this program by creating an instance of \uw{HelloMain} on
-processor 0. The HelloMain constructor creates a chare group
-\uw{HelloGroup}, and stores a handle to itself and returns. The call to
-create the group returns immediately after directing Charm Kernel to perform
-the actual creation and invocation.  Shortly after, the Charm Kernel will
-create an object of type \uw{HelloGroup} on each processor, and call its
-constructor. The constructor will then print Hello World...'' and then
-call the \uw{PrintDone} method of \uw{HelloMain}. The \uw{PrintDone} method
-calls \kw{CkExit} after all group members have called it (i.e., they have
-finished printing Hello World...''), and the \charmpp program exits.
-
-\subsubsection{Functions in the decl.h'' and def.h'' files}
-
-The \texttt{decl.h} file provides declarations for the proxy classes of the
-concurrent objects declared in the .ci'' file (from which the \texttt{decl.h}
-file is generated). So the \uw{Hello.decl.h} file will have the declaration of
-the class CProxy\_HelloMain. Similarly it will also have the declaration for
-the HelloGroup class.
-
-This class will have functions to create new instances of the chares and
-groups, like the function \kw{ckNew}. For \uw{HelloGroup} this function creates
-an instance of the class \uw{HelloGroup} on all the processors.
-
-The proxy class also has functions corresponding to the entry methods defined
-in the .ci'' file. In the above program the method wait is declared in
-\uw{CProxy\_HelloMain} (proxy class for \uw{HelloMain}).
-
-The proxy class also provides static registration functions used by the
-\charmpp{} runtime.  The \texttt{def.h} file has a registration function
-(\uw{\_\_registerHello} in the above program) which calls all the registration
-functions corresponding to the readonly variables and entry methods declared in
-the module.
+
+\section{The main() function}
+
+The \charmpp framework implements its own main function and retains control
+until the parallel execution environment is initialized and ready for executing
+user code. Hence, the user program must not define a \emph{main()} function.
+Control enters the user code via the \kw{mainchare} of the \kw{mainmodule}.
+This will be discussed in further detail in~\ref{mainchare}.
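A minimal sketch of such an entry point (the module and chare names are
illustrative) looks like:

```
mainmodule Hello {
    mainchare HelloMain {
        entry HelloMain();  // receives an implicit CkArgMsg*
    };
};
```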
+
+Using the facilities described thus far, the parallel interface declarations
+for a \charm program can be spread across multiple ci files and multiple
+modules, permitting fine control over the grouping and export of the parallel API.
+This aids the encapsulation of parallel software.
+
+\section{Compiling \charm Programs}
+\index{charmc}
+
+\charm provides a compiler-wrapper called \kw{charmc} that handles all \ci, C,
+\CC and Fortran source files that are part of a user program. Users can invoke
+charmc to parse their interface descriptions, compile source code, and link
+objects into binaries. It also links against the appropriate set of \charm
+framework objects and libraries while producing a binary. \kw{charmc} and its
+functionality are described in~\ref{sec:compile}.
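A typical three-step build, following the Makefile used by the hello-world
example in earlier versions of this manual (file names are illustrative):

```shell
# Generate pgm.decl.h / pgm.def.h from the interface file
charmc -c pgm.ci
# Compile the C++ source, which includes the generated headers
charmc -c pgm.C
# Link, pulling in the Charm++ runtime objects and libraries
charmc -o pgm pgm.o -language charm++
```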

diff --git a/doc/charm++/mpi-interop.tex b/doc/charm++/mpi-interop.tex
new file mode 100644 (file)
index 0000000..b34442b
--- /dev/null
@@ -0,0 +1,100 @@
+Libraries written in \charmpp{} can also be used with pure MPI programs. Currently this
+functionality is supported only if \charmpp{} is built using MPI as the network layer
+(e.g. the mpi-linux-x86\_64 build). An example program demonstrating this
+interoperation is available in examples/charm++/mpi-coexist; we will refer
+to it throughout this section.
+
+\section{Control Flow and Memory Structure}
+The control flow and memory structure of a \charmpp{}-MPI interoperable program is
+similar to that of a pure MPI program that uses external MPI libraries.
+Execution begins in the pure MPI code's {\em main}. At some point after
+MPI\_Init() has been invoked, the following function call should be made to initialize
+\charmpp{}: \\
+
+{\bf void CharmLibInit(MPI\_Comm newComm, int argc, char **argv)}\\
+
+\noindent Here, {\em newComm} is the MPI communicator that \charmpp{} will use for
+the setup and communication. All the MPI ranks that belong to {\em newComm} should
+make this call. A collection of MPI ranks that make the CharmLibInit call defines a
+new \charmpp{} instance. Different MPI ranks that belong to different communicators can
+make this call independently, and separate \charmpp{} instances (that are not aware of each other)
+will be created. As of now, a particular MPI rank can only be part of one unique \charmpp{}
+instance. The arguments {\em argc} and {\em argv} should contain any information required by
+\charmpp{}, such as the load balancing strategy.
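A schematic initialization sequence is shown below. Apart from
{\tt CharmLibInit}, whose signature is quoted above, the surrounding structure
(use of {\tt MPI\_COMM\_WORLD}, placement of the call) is illustrative only,
and error handling is omitted:

```
#include <mpi.h>

void CharmLibInit(MPI_Comm newComm, int argc, char **argv);

int main(int argc, char **argv) {
    MPI_Init(&argc, &argv);

    /* Every rank in the chosen communicator makes this call; together
       these ranks define one Charm++ instance. */
    CharmLibInit(MPI_COMM_WORLD, argc, argv);

    /* ... interleave MPI code with calls into the Charm++ library ... */

    MPI_Finalize();
    return 0;
}
```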
+
+During the initialization, control is transferred from the MPI program to the
+\charmpp{} RTS on the MPI ranks that made the call.