Prior work on Krylov subspace methods?


#1

There are lots of nice methods for solving systems of linear equations using Krylov subspaces, for example BiCGSTAB or GMRES. I noticed this was briefly mentioned in @gnzlbg’s wishlist for HPC, so I thought it would be good to reach out to the community before jumping straight into an implementation. In my field, people seem to use PETSc, an MPI-based C library with all sorts of HPC algorithms, for this purpose.

Has anyone been working on this? I haven't found anything open-source so far, anyway. I will probably implement one of these algorithms in a generic fashion, so that it can be used with different matrix implementations: sparse matrices, matrices distributed across different nodes, etc.


#2

I’m currently playing around with some solver implementations too. I have implemented CG, preconditioned CG, BiCGStab, preconditioned BiCGStab, and a simple two-grid method as a preconditioner. Most of it should already work. However, I’m still experimenting with how to structure everything, and some things are still a bit unsound.
My aim is a small library that can combine solvers with different preconditioners for different matrix types, so that one can easily test things. I haven’t put much thought into parallelization yet and was hoping to add it afterwards.
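
To give a rough idea of the structure I’m experimenting with, here is a stripped-down sketch of preconditioned CG where the matrix and the preconditioner only enter through closures, so the solver itself doesn’t care about the matrix type (this is not my actual code, just an illustration with made-up names):

```rust
/// Sketch of preconditioned CG for an s.p.d. system A x = b.
/// `matvec` computes A * v, `precond` applies M⁻¹ * v; `x` holds the
/// initial guess on entry and the approximate solution on return.
fn pcg<F, P>(matvec: F, precond: P, b: &[f64], x: &mut [f64], tol: f64, max_iter: usize)
where
    F: Fn(&[f64]) -> Vec<f64>,
    P: Fn(&[f64]) -> Vec<f64>,
{
    fn dot(u: &[f64], v: &[f64]) -> f64 {
        u.iter().zip(v).map(|(a, b)| a * b).sum()
    }

    // r = b - A x,  z = M⁻¹ r,  p = z
    let ax = matvec(&x[..]);
    let mut r: Vec<f64> = b.iter().zip(&ax).map(|(bi, axi)| bi - axi).collect();
    let mut z = precond(&r[..]);
    let mut p = z.clone();
    let mut rz = dot(&r, &z);

    for _ in 0..max_iter {
        // Stop once the residual norm is small enough.
        if dot(&r, &r).sqrt() <= tol {
            break;
        }
        let ap = matvec(&p[..]);
        let alpha = rz / dot(&p, &ap);
        for i in 0..x.len() {
            x[i] += alpha * p[i];
            r[i] -= alpha * ap[i];
        }
        z = precond(&r[..]);
        let rz_new = dot(&r, &z);
        let beta = rz_new / rz;
        for i in 0..p.len() {
            p[i] = z[i] + beta * p[i];
        }
        rz = rz_new;
    }
}
```

Plugging in `|v: &[f64]| v.to_vec()` as `precond` gives plain CG, and any other preconditioner (Jacobi, the two-grid method, ...) fits the same slot.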

I haven’t uploaded it yet, but if you are interested I am happy to put it on GitHub later :slight_smile:


#3

I would be very interested. In the meantime I have started implementing BiCGStab as well. Your aim is well aligned with what I had in mind, so I would be happy to contribute once you upload it to GitHub.

What matrices are you using? I started with nalgebra for the sake of simplicity and wanted to abstract over it so that I can use sprs, too.

I think that parallelization could easily be deferred to the matrix/vector operations. That way, you could simply use a parallelized matrix implementation. That also opens up the option for different parallelization strategies, say multi-threading, distributed computing or even offloading operations to the GPU.
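
To make that a bit more concrete, here is a rough sketch of what I mean; the trait and all names are made up, not taken from any existing crate:

```rust
/// Hypothetical trait for the vector operations a Krylov solver needs.
/// A plain Vec<f64> implementation runs sequentially; a type backed by
/// rayon, MPI or a GPU buffer could implement the same trait, and the
/// solver code would not change at all.
trait SolverVector {
    /// Inner product <self, other>
    fn dot(&self, other: &Self) -> f64;
    /// self += alpha * x
    fn axpy(&mut self, alpha: f64, x: &Self);
    fn norm(&self) -> f64 {
        self.dot(self).sqrt()
    }
}

// Sequential reference implementation on plain vectors.
impl SolverVector for Vec<f64> {
    fn dot(&self, other: &Self) -> f64 {
        self.iter().zip(other).map(|(a, b)| a * b).sum()
    }
    fn axpy(&mut self, alpha: f64, x: &Self) {
        for (s, xi) in self.iter_mut().zip(x) {
            *s += alpha * xi;
        }
    }
}
```

A solver written only against such a trait (plus a matrix-vector product) never touches the raw storage, so the parallelization strategy becomes a property of the types you plug in.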

pinging @sebcrozet, @vbarrielle

What do you think about a common abstraction for your libraries? I am thinking of alga as a good candidate for capturing the algebraic structure. It would have to be implemented for sprs.


#4

Hello, quick reply since I’m on my phone right now.

I’ll have to look more closely, but I think alga’s traits for matrices are a bit too specific for now.

I’m not sure about all subspace methods, but at least for CG the only thing that’s required besides the shape of the matrix is the matrix-vector product (possibly transposed), so that could be some trait. I’d gladly implement such a trait.
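
Can’t write much from the phone, but the rough shape I’d expect is something like this (untested sketch, names made up):

```rust
/// Minimal operator interface for Krylov methods: only the shape and
/// the (transposed) matrix-vector product, nothing about the storage.
trait MatVecOp {
    fn rows(&self) -> usize;
    fn cols(&self) -> usize;
    /// y = A * x
    fn mat_vec(&self, x: &[f64]) -> Vec<f64>;
    /// y = Aᵀ * x, needed e.g. by BiCG-type methods
    fn mat_vec_t(&self, x: &[f64]) -> Vec<f64>;
}
```

Something like that should be straightforward to implement for a sparse matrix type such as sprs’s CsMat.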

As for a parallel matrix-vector product, I don’t have that in sprs yet, but I should look into it at some point in the future.


#5

So I just uploaded what I did so far, but be warned, it is still quite messy.


I tried to abstract a bit over matrix and vector types, but I’m not really happy with my solution yet. It does work with nalgebra (i.e. dense) matrices and 3x3 stencils, though. All solvers just use various matrix-vector methods from the Matrix/Vector traits.
If you want to get a feeling for what currently works, have a look at the #[test]s.

I’ll have a look at alga in the next few days.


#6

Just FYI, I also uploaded my code to GitHub. It has minimal functionality compared to @AlmostKoala’s version, but it uses alga for the abstraction, which I hope may be helpful as a guideline.