From 12/06/2018 to 01/04/2019
- 04:35 PM Bug #2026 (Implemented): charm-6.8.2 breaks linking process of NAMD if it's patched with shared v...
- 03:39 PM Bug #2026: charm-6.8.2 breaks linking process of NAMD if it's patched with shared version of PLUMED
- How's this? https://charm.cs.illinois.edu/gerrit/c/charm/+/4884
- 02:06 PM Bug #2026: charm-6.8.2 breaks linking process of NAMD if it's patched with shared version of PLUMED
- 05:14 PM CharmDebug Bug #1942: CkStartQD never triggered, even though all entry methods have returned.
- Is there a short .ci and .C example of this problem?
- 03:16 PM Support #2041: charmrun with mpirun instead of srun?
- For the mpi-... builds of Charm++ you can run the binary with mpirun/mpiexec/srun directly rather than via charmrun. ...
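- A hedged illustration of the two launch styles mentioned above (the binary name @pgm@ and process count are placeholders, not from the ticket):

```
# mpi-* builds of Charm++: launch the binary directly with the MPI launcher
mpirun -np 4 ./pgm

# or keep charmrun but have it delegate the launch to mpiexec
./charmrun +p4 ./pgm ++mpiexec
```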
- 12:46 PM Support #2041: charmrun with mpirun instead of srun?
- No, it's not. What MPI library are you using? And could you post the output of what happens when you try the "++mpiex...
- 11:08 AM Support #2041: charmrun with mpirun instead of srun?
- It's v6.8.0. We're working on adding support for v6.9.0 to our code base (https://github.com/sxs-collaboration/spectr...
- 12:38 PM Feature #2042: Add [whenidle] to simplify speculative execution
- Basic pattern (without templates or dependencies on more than C based trickery) is:...
- 12:21 PM Feature #2042: Add [whenidle] to simplify speculative execution
- If you have an example of the ugly Ccd code to do this, could you post it here? I anticipate needing to implement thi...
- 10:49 AM Feature #2042 (New): Add [whenidle] to simplify speculative execution
- There are various cases where some computation can be done in the absence of other work, but would be of a low priori...
- 04:37 PM Support #2041: charmrun with mpirun instead of srun?
- Is that v6.8.0 or v6.8.2? Can you try with v6.9.0? You shouldn't need to build Charm++ any differently.
- 07:07 AM Support #2041: charmrun with mpirun instead of srun?
- I’m using charm++ v6.8.
- 06:47 AM Support #2041: charmrun with mpirun instead of srun?
- I tried this, and it still called srun. Might I need to build charm++ differently? I used @./build charm++ mpi-lin...@
- 11:25 AM Support #2041: charmrun with mpirun instead of srun?
- Can you try with this: ...
- 08:34 AM Support #2041 (Feedback): charmrun with mpirun instead of srun?
I need to use charm++ on a cluster that runs parallel jobs using mpirun. The cluster uses slurm to submit jobs...
- 07:13 PM Documentation #1302 (Merged): pass arbitrary arguments to mpiexec via charmrun
- 06:30 PM Bug #1165 (Resolved): avoid -lm with Intel compiler
- First-match of libraries on the link line is my understanding.
I think the math library in use can be determined b...
- 04:55 PM Bug #1165: avoid -lm with Intel compiler
- It's really hard to tell. Does the -lm get inserted before or after the user-provided libraries? Assuming first-mat...
- 01:50 PM Bug #1165: avoid -lm with Intel compiler
- Does the presence of @-lm@ on the command line interfere with @-limf@? If not, I'll drop this issue and the patch I s...
- 01:05 PM Bug #1165: avoid -lm with Intel compiler
- This may have been old advice. Intel's meager docs say to use mathimf.h and even then I need -limf to get any Intel ...
- 02:29 PM Bug #2040 (New): pamilrts machine layer is less performant than pami machine layer
- It has been noted that the non-LRTS pami layer runs faster than pamilrts, which is why we are keeping it around inste...
- 02:16 PM Documentation #1302 (Implemented): pass arbitrary arguments to mpiexec via charmrun
- 01:17 PM Documentation #1302: pass arbitrary arguments to mpiexec via charmrun
- Yes, that does work.
Please add to the docs at http://charm.cs.illinois.edu/manuals/html/charm++/C.html#mpiexec
- 01:51 PM Bug #1332 (Closed): assumes remote shell is OpenSSH
- That works for me.
- 01:23 PM Bug #1332: assumes remote shell is OpenSSH
- I don't think this is worth trying to fix as it's easy enough to do a wrapper script or use ++mpiexec if the user ins...
- 01:41 PM Cleanup #617: Rename windows commands to not reference Windows NT
- While the operating system as a whole no longer contains NT in its name, I believe the kernel is still called the "NT...
- 04:48 PM Bug #2032 (Merged): Broken builds from C -> C++ conversion
- 03:54 PM Bug #1922 (Merged): Isomalloc fails with large memory footprints when using the GNI mempool
- 03:54 PM Feature #1921 (Merged): Make Isomalloc/the GNI mempool not use the pool for large allocations
- 02:13 PM Cleanup #1980 (Merged): Remove the old RDMA API (CkDirect/CmiDirect) from source code and libraries
- 04:36 PM Feature #2038 (New): Design a Many to Many API on the Zerocopy API
- The previous Many to Many API was unused and removed in the refactoring effort. (https://charm.cs.illinois.edu/gerrit...
- 04:29 PM Bug #2037 (Implemented): Avoid redundant declaration of callbacks as "inline" and have callbacks ...
- Fix: https://charm.cs.illinois.edu/gerrit/#/c/charm/+/4857/
- 04:15 PM Feature #1468 (Merged): Enable pre-pinning memory for the zero-copy message sends through the Ent...
- 04:15 PM Feature #1657 (Merged): CMA support for nocopy sends using the Entry Method API across processes ...
- 02:59 PM Bug #2037 (Merged): Avoid redundant declaration of callbacks as "inline" and have callbacks to in...
- Currently, it is required to declare callbacks as inline, even if the entry method is an inline entry method. ...
- 11:50 AM Feature #954 (Merged): Update AMPI's version of MPICH test suite
- 03:13 PM Documentation #1768: document CkIO
- Having real documentation of how to use this API is still highly desirable.
- 03:11 PM Bug #81 (Rejected): Manytomany on PAMI SMP hangs without Async
- 03:11 PM Bug #81: Manytomany on PAMI SMP hangs without Async
- Old CkDirect based version slated for removal. PAMI support for this appears to be dead. Basic idea needs reconstru...