At the moment PBS does not support MPI jobs, but we can add this feature if you can help us. We do not have access to a PBS resource that supports MPI jobs.
Will it work if we add the following line to the PBS start script:
#PBS -l nodes=xx
and start the executable with:
mpirun -np xx ./execute.bin
?
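Something like this complete script is what I have in mind (just a sketch; xx would be the requested node count, and the exact mpirun call depends on the cluster's MPI setup):

#!/bin/bash
#PBS -l nodes=4
cd $PBS_O_WORKDIR
# Torque-style PBS lists the allocated processors in $PBS_NODEFILE
mpirun -np 4 -machinefile $PBS_NODEFILE ./execute.bin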
Thanks
Last edit: imi 2013-09-10
Hi,
This is what I did to the source code:
LinuxWrapperForGrids w = null;
try {
    w = new LinuxWrapperForGrids(path);
    // in this case the configured resource is the job queue
    w.writeln("#PBS -q " + pJob.getConfiguredResource().getResource() + " \n");
} catch (Exception e) {
    e.printStackTrace();
}
// stage the packed inputs in and the results out
w.writeln("#PBS -W stagein=localinputs" + pJob.getId() + ".tgz@"
        + host + ":$HOME/" + SSHDIR + pJob.getId() + "/localinputs.tgz \n"
        + "#PBS -W stageout=" + pJob.getId() + "@" + host
        + ":$HOME/" + SSHDIR + " \n"
        //+ "stderr.log." + pJob.getId() + "@" + host + ":$HOME/" + SSHDIR + pJob.getId() + "/stderr.log,"
        //+ "stdout.log." + pJob.getId() + "@" + host + ":$HOME/" + SSHDIR + pJob.getId() + "/stdout.log \n"
        + "#PBS -e stderr.log." + pJob.getId() + " \n"
        + "#PBS -o stdout.log." + pJob.getId() + " \n");
// for MPI binaries, request the CPU count declared in the JSDL document
if (BinaryHandler.isMPI(jsdl)) {
    int nodenumber = 0;
    try {
        nodenumber = (int) jsdl.getJobDescription().getResources()
                .getTotalCPUCount().getUpperBoundedRange().getValue();
    } catch (Exception e) {
        e.printStackTrace();
    }
    w.writeln("#PBS -l ncpus=" + nodenumber + " \n");
}
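With these changes, the generated script header would look roughly like this (all concrete values below, i.e. the queue name, job id 12345, host, and the SSHDIR path, are made up for illustration):

#PBS -q workq
#PBS -W stagein=localinputs12345.tgz@cluster.example.org:$HOME/guse/12345/localinputs.tgz
#PBS -W stageout=12345@cluster.example.org:$HOME/guse/
#PBS -e stderr.log.12345
#PBS -o stdout.log.12345
#PBS -l ncpus=8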
I tested it; it works properly.
I do not call/add the function below:
w.runMPI(binname, params, stdOut, stdErr, "" + nodenumber);
The reason is to give the workflow developers the flexibility to code their
scripts for their cluster environment; see the sketch below.
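For example, the developer's own script can launch the binary itself, sized to whatever PBS actually allocated (a sketch assuming a Torque-style setup where $PBS_NODEFILE lists the granted processors; execute.bin is a placeholder):

# count the processors PBS granted to this job
NP=$(wc -l < $PBS_NODEFILE)
mpirun -np $NP -machinefile $PBS_NODEFILE ./execute.bin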
Regards
--
Muhammad Farhan Sjaugi, S.Kom. M.Sc
email: fhn@cbcommunity.or.id
Thank you!
The next version will contain "#PBS -l ncpus=", but not the mpirun call.
I agree. I think ncpus is more appropriate, since the job scheduler can
easily assign the CPUs across the cluster.
Moreover, since the introduction of multicore processors, one node is no
longer identical to one CPU, so the number of CPUs is more accurate.
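To illustrate the difference (example values only): on a cluster of dual-core nodes, both requests below ask for 8 processors, but only the first dictates the shape of the allocation:

#PBS -l nodes=4:ppn=2   # exactly 4 nodes, 2 processors per node
#PBS -l ncpus=8         # 8 CPUs, placed wherever the scheduler prefers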
Regards
--
Muhammad Farhan Sjaugi, S.Kom. M.Sc
email: fhn@cbcommunity.or.id
Released in 3.6.0