Thank you for your comment.
I am using environments, and I suspect that has something to do with it, as deleting and recreating the environment resolves the issue (but that seems a bit drastic).
I am familiar with the spack behavior that
spack --env foo install
will attempt to install all of the root specs listed in the spack.yaml file for the foo environment.
But I do not believe this is the cause of the problem I am seeing, because in my past experience (admittedly, mainly with sp...@0.16):
i) while it is true that 'spack install' in an environment will attempt to install all of the environment's root specs if no spec is provided on the command line, it does *not* try to install all root specs if a spec is given on the command line.
ii) when a spec is given to spack install on the command line, spack will add that spec to the root specs for the environment (I _think_ this is done only after a successful install, but I am not certain). In my experience, though, the spec added is the spec given on the command line, not the fully concretized spec.
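To make the two cases concrete, this is roughly the behavior I remember (the commands are real spack usage, but 'foo' and 'zlib' are just placeholders):

```
# case i): no spec given -- installs every root spec
# listed in foo's spack.yaml
spack --env foo install

# case ii): spec given -- installs only zlib, and adds
# "zlib" (as written on the command line, not the fully
# concretized spec) to foo's root specs
spack --env foo install zlib
```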
I think I checked the spack.yaml for such specs before my initial posting, but I deleted and recreated the environment that was attempting to install arpack-ng with cray-mpich before reading your response, so I cannot confirm. (Again, deleting and recreating the environment "fixed" the issue, but that seems drastic to me.) In that case, I never explicitly requested cray-mpich; the concretizer chose it (because I erroneously set all other MPIs to not be buildable in the packages hash for the environment). But even after I fixed the packages hash, and actually successfully installed arpack-ng with openmpi, every time I do a spack install in the environment, after installing the requested package, spack tries again to install the failed package, using the erroneous settings which are no longer in the spack.yaml file. This leads to a lot of extraneous error messages in the spack output, which is annoying and can hide real errors if the desired build was unsuccessful.
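For reference, the erroneous packages settings were along these lines (a sketch from memory, not the exact file; the package names are illustrative):

```
# packages section of the environment's spack.yaml
# marking every MPI except cray-mpich as unbuildable
# left the concretizer no choice but cray-mpich
packages:
  openmpi:
    buildable: false
  mpich:
    buildable: false
  mvapich2:
    buildable: false
```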
And this is not a one-time fluke. Just now I had a package install fail because the package was attempting to use the system-installed rather than the spack-installed python (I believe this is an error in the spack recipe, but that is a separate matter). In an attempt to fix the issue, I modified the package.py for the problematic package, adding an explicit python dependency, which changed the hash (and seems to have resolved the installation issue). But even though I "fixed" the issue and successfully installed the package in question, whenever I do a spack install in that environment, after installing the packages I specify on the command line, spack then proceeds to try to install the problematic package using the system python, despite the fact that the package.py now explicitly depends on the spack-installed python. I.e., because I edited the package.py, I don't think I can even write a spec for the build that spack is now attempting, but it is still attempting it. This happens even if I give a spack install command for a package that is already installed (spack recognizes the given package as already installed, then proceeds to install the failed package).
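The edit I made to the package.py was essentially just an added dependency directive, something like the following (a sketch, not the actual recipe; the package name and version constraint here are hypothetical):

```
# excerpt from the problematic package's package.py;
# adding this directive changed the package hash and
# made the build pick up spack's python
depends_on("python@3:", type="build")
```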
I expect the stale state is something in the spack.lock file or in something under the .spack-env directory in the environment, but those aren't well documented and I suspect are not meant for end-user consumption.