
MPI Support #38

Open
garcs2 wants to merge 7 commits into IdahoLabResearch:main from garcs2:MPI_Enhancement

Conversation

garcs2 commented Nov 20, 2025

Resolves #36. MPI support is enabled for each example, and an example shell script for job submission on the INL HPC is also provided.

garcs2 (Author) commented Nov 25, 2025

I should note that there is an issue stemming from what I mentioned in #41; this PR should be considered incomplete for now.

garcs2 (Author) commented Dec 18, 2025

This has been resolved: #41 turned out to be a non-issue. Toggling show_stdout = True simply caused some print statements to appear out of order.

garcs2 (Author) commented Jan 26, 2026

Ready for review @BotrosHanna-INL.

BotrosHanna-INL (Collaborator) commented

Hi Sam,

  • First, I want to apologize for the delayed review — this PR has been open since November and you deserved feedback much sooner than this. I appreciate your patience, and I genuinely appreciate the work you put into this.

  • I tested the PR locally by applying your changes to the current version of the MOUSE codebase and running the LTMR example. A few observations I'd like to share:

  • When I ran the LTMR example with SD Margin Calc = True and Isothermal Temperature Coefficients = True, the code crashed on the second OpenMC call with RuntimeError: OpenMC aborted unexpectedly. This appears to be because run_openmc launches multiple sequential mpirun subprocesses within the same job allocation, which seems to fail on our HPC environment (my guess is that you did not hit this on your side because you did not run a depletion analysis?).

  • I want to raise a question about the performance benefit for depletion cases. In run_depletion_analysis, openmc.run(mpi_args=mpi_args) only covers the single initial k-eigenvalue calculation. The 17 transport steps inside integrator.integrate() (called via CoupledOperator) run through OpenMC's internal Python API and bypass openmc.run() entirely, so those would remain serial regardless.

  • I think the main question for me is whether merging this PR would actually make the code run faster. When I ran the current LTMR example (https://github.com/IdahoLabResearch/MOUSE/blob/main/examples/watts_exec_LTMR.py) before merging your PR, it took around 30–33 minutes on the INL HPC (see the attached log file). I'm curious whether you observed a meaningful speedup when running the same LTMR example.
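The mpi_args point above can be sketched with a toy mirror of openmc.run()'s documented behavior (build_run_command is a hypothetical helper for illustration, not part of OpenMC): the mpi_args list is simply prepended to the solver invocation, so only transport runs launched through openmc.run() gain MPI ranks, while the integrator's in-memory transport solves never see them.

```python
# Illustrative sketch only: a hypothetical mirror of how openmc.run()
# assembles its subprocess command from mpi_args (per the OpenMC docs,
# mpi_args is a list such as ["mpiexec", "-n", "16"]).
def build_run_command(mpi_args=None, openmc_exec="openmc"):
    """Prepend MPI launcher arguments to the solver executable."""
    argv = list(mpi_args) if mpi_args else []
    argv.append(openmc_exec)
    return argv

# Only a direct openmc.run() call goes through a path like this:
print(build_run_command(mpi_args=["mpiexec", "-n", "16"]))
# -> ['mpiexec', '-n', '16', 'openmc']

# integrator.integrate() drives its transport steps through the in-memory
# Python API instead, so no equivalent command line is ever built for them.
```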

Thank you for your contribution.

Botros
run.log

garcs2 (Author) commented Feb 24, 2026

Hi Botros!

I was able to get the depletion to run without the code crashing, though I only tested with SD Margin Calc = False and ITC = False. I found that the environment setup needed for parallelization was finicky and stopped working consistently sometime after mid-January, though I assumed that was an issue on my end. I see now that this likely comes down to the different ways openmc.run() and integrator.integrate() operate.

Regarding your question about speed: passing MPI args helps only the initial k-eff calculation, after which the run becomes bottlenecked by the depletion, which matches what I observed as well. I'd like to build a better solution by breaking the problem into two steps: openmc.run(mpi_args=mpi_args) runs first, and the depletion runs as a separate script so that it can be launched with mpiexec python depletion_run.py or something similar.
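The two-step split could look something like the following job-script fragment (a sketch only; the script names, rank count, and launcher flags are assumptions, not part of this PR). Running the depletion driver itself under mpiexec should let OpenMC's in-memory API initialize with MPI, which openmc.run(mpi_args=...) cannot do for the integrator's transport steps:

```shell
# Hypothetical HPC job-script fragment (names are illustrative):
# Step 1: initial k-eigenvalue calculation; MPI handled inside the script
#         via openmc.run(mpi_args=["mpiexec", "-n", "16"]).
python initial_keff.py

# Step 2: depletion as its own MPI launch, so the integrator's transport
#         steps run in parallel through OpenMC's in-memory API.
mpiexec -n 16 python depletion_run.py
```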

Separately, I'm also wondering what the purpose of the initial openmc.run() call is, since the model is generated by the watts plugin. If it can be removed, an easy MPI parallelization might be to drop openmc.run() so that the example scripts can be run with srun or mpiexec directly (perhaps this is what you already do).

Thanks for the feedback!

Best,
Sam



Successfully merging this pull request may close these issues.

Add HPC cluster support of OpenMC runs with MPI execution
