+setcpuaffinity ignores cpuset
If I run, e.g., numactl --physcpubind=20-23 numactl --physcpubind=8-11 true
I get a warning from the second numactl that those CPUs aren't available:
libnuma: Warning: cpu argument 8-11 is out of range
The +pemap and +commap options ignore the existing cpuset and bind exactly as instructed, without any warning. That is probably OK, but a warning would be nice.
The +setcpuaffinity option, however, also ignores the current cpuset and picks cores starting at 0, which means the user has to parse numactl output to build an explicit +pemap in order to get CPU affinity.
#1 Updated by Eric Bohm over 3 years ago
What is the interoperability goal here for combining numactl, cpusets, and the various Charm runtime binding options?
We're in the process of designing a refactoring of the whole thing and it would be helpful to know why numactl is being used in addition to Charm's internal binding schemes. The general assumption in an HPC context is that the user would like the runtime system to make the best use of available hardware. However, there may be important interoperability cases where some resources are used by some external thing and the choices reflected in cpuset need to be respected.
How important is it for us to design and test for external binding as a constraint for Charm++ to operate within, rather than override?