[SOLVED] Implement our own LinearSolver
Tagged: linearsolver, Linux_other, SOFA_other, vecId
This topic has 4 replies, 2 voices, and was last updated 3 years, 5 months ago by Hugo.
7 May 2021 at 01:37 #19390 — jjcasmar
I want to implement my own linear solver to use as a preconditioner with the ShewchukPCGLinearSolver. My solver is pretty simple: it is just a diagonal matrix, which I store as an Eigen::DiagonalMatrix.
The LinearSolver API provides three functions that I think I need to use:
```cpp
/// Set the linear system right-hand term vector, from the values contained in the (Mechanical/Physical)State objects
virtual void setSystemRHVector(core::MultiVecDerivId v) = 0;

/// Set the initial estimate of the linear system left-hand term vector, from the values contained in the (Mechanical/Physical)State objects
/// This vector will be replaced by the solution of the system once solveSystem is called
virtual void setSystemLHVector(core::MultiVecDerivId v) = 0;

/// Solve the system as constructed using the previous methods
virtual void solveSystem() = 0;
```
There are a couple more for creating the MBK matrix and resetting the solver, but I don't need them, since I am planning to load my matrix from disk in the init function.
However, I find it very difficult to implement my solver with this API. If I understand correctly, I need to provide the solution in the LH vector. This is a MultiVecDerivId, and I have absolutely no idea how to write to it. Of course, I also need to read from the RH vector, which is, again, a MultiVecDerivId.
I understand these are scattered vectors that specify how to read from and write to the state vectors, but I haven't found anywhere that explains how to do this.
How can I create an Eigen::VectorXd from the MultiVecDerivId so that I can multiply it by my Eigen::DiagonalMatrix?
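[Editor's note] Conceptually, a multi-vector is a set of per-state blocks, and converting it to a flat vector is just a gather over all blocks (in SOFA this is what `multiVector2BaseVector` does, as the code later in this thread shows; `baseVector2MultiVector` is the reverse scatter). A toy, SOFA-free sketch of the gather/apply/scatter round trip with a diagonal preconditioner — all names here are illustrative stand-ins, not SOFA API:

```cpp
#include <cassert>
#include <vector>

// Toy stand-in for a scattered multi-vector: one block of scalars per state.
using MultiVec = std::vector<std::vector<double>>;

// Gather the scattered blocks into one flat vector
// (the role multiVector2BaseVector plays in SOFA).
std::vector<double> gather(const MultiVec& mv)
{
    std::vector<double> flat;
    for (const auto& block : mv)
        flat.insert(flat.end(), block.begin(), block.end());
    return flat;
}

// Scatter a flat vector back into the blocks
// (the role baseVector2MultiVector plays in SOFA).
void scatter(const std::vector<double>& flat, MultiVec& mv)
{
    std::size_t offset = 0;
    for (auto& block : mv)
        for (auto& value : block)
            value = flat[offset++];
}

// Apply a diagonal preconditioner z = P * r on the flat representation:
// a component-wise scale, no matrix product needed.
std::vector<double> applyDiagonal(const std::vector<double>& diag,
                                  const std::vector<double>& r)
{
    assert(diag.size() == r.size());
    std::vector<double> z(r.size());
    for (std::size_t i = 0; i < r.size(); ++i)
        z[i] = diag[i] * r[i];
    return z;
}
```

Once gathered, the flat array can of course be wrapped in an `Eigen::Map`/`Eigen::VectorXd` and multiplied by an `Eigen::DiagonalMatrix` directly.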
7 May 2021 at 23:21 #19406 — Hugo
Hi @jjcasmar,
As said on Gitter, you can take a look at Jean-Nicolas' work in Caribou: https://github.com/jnbrunet/caribou/tree/master/src/SofaCaribou/Solver
I think it is pretty close to what you are looking for.
Could you keep us posted about your journey?

Best,
Hugo
8 May 2021 at 00:38 #19410 — jjcasmar
I have been looking at the code from Caribou, as well as the code from ShewchukPCGLinearSolver, and I have posted a possible bug on the GitHub tracker. If I understand the code correctly, there are some differences from the algorithm described in https://www.cs.ubc.ca/~ascher/papers/ab.pdf
Once I have that clear, I will take a closer look at how to implement a custom LinearSolver for my preconditioner.
One thing that is a bit unclear right now: the EulerImplicitSolver requires a LinearSolver, and it uses the first one it finds in the tree. If that one is a ShewchukPCGLinearSolver, there may be another LinearSolver in the same node, no? Therefore, it is important that the one used as the preconditioner is declared second in the node, so the EulerImplicitSolver doesn't find it, no?
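[Editor's note] I believe ShewchukPCGLinearSolver can reference its preconditioner explicitly by name through its `preconditioners` data field, which would make the declaration order matter only for which solver the ODE solver picks up. A hedged scene sketch (the `preconditioners` field and its exact semantics are an assumption worth checking against your SOFA version):

```xml
<Node name="object">
    <!-- The ODE solver picks up the first LinearSolver in the node: -->
    <EulerImplicitSolver />
    <ShewchukPCGLinearSolver iterations="100" tolerance="1e-9"
                             preconditioners="precond" />
    <!-- Declared second, so EulerImplicitSolver does not grab it;
         the PCG references it by name instead: -->
    <SSORPreconditioner name="precond" />
</Node>
```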
11 May 2021 at 06:12 #19426 — jjcasmar
I have more or less solved this. I have reimplemented the CG solver based on the one in Caribou. From that, I kept only the implementation that doesn't build the matrix, but added some calls to assemble the vector that I have to project after preconditioning, following the paper linked above.
I copy-paste the implementation here; when I have time, I'll try to push it upstream.
```cpp
void ConjugateGradientSolver::solve(sofa::core::behavior::MultiVecDeriv &b, sofa::core::behavior::MultiVecDeriv &x)
{
    sofa::simulation::common::VectorOperations vop(&m_mparams, this->getContext());
    sofa::simulation::common::MechanicalOperations mop(&m_mparams, this->getContext());

    // Create temporary vectors needed for the method
    sofa::core::behavior::MultiVecDeriv r(&vop);
    sofa::core::behavior::MultiVecDeriv p(&vop);
    sofa::core::behavior::MultiVecDeriv s(&vop);
    sofa::core::behavior::MultiVecDeriv h(&vop);

    // Get the method parameters
    const auto &maximum_number_of_iterations = m_iterations;
    const auto &residual_tolerance_threshold = m_toleranceThreshold;

    m_squaredResiduals.clear();
    m_squaredResiduals.reserve(m_iterations);

    // Get the matrix coefficients m, b and k : A = (mM + bB + kK)
    const auto m_coef = m_mparams.mFactor();
    const auto b_coef = m_mparams.bFactor();
    const auto k_coef = m_mparams.kFactor();

    // Declare the method variables
    VNCS::Real b_delta;
    VNCS::Real delta;            // RHS and residual squared norms
    VNCS::Real rho0, rho1 = 0.;  // Stores r*r as it is used two times per iteration
    VNCS::Real alpha, beta;      // Alpha and Beta coefficients
    VNCS::Real threshold;        // Residual threshold
    int currentIteration = 0;    // Current iteration number
    bool converged = false;

    sofa::component::linearsolver::DefaultMultiMatrixAccessor accessor;
    mop.getMatrixDimension(nullptr, nullptr, &accessor);
    const auto n = static_cast<Eigen::Index>(accessor.getGlobalDimension());

    if (m_preconditioner) {
        Eigen::VectorXd bEigen;
        bEigen.resize(n);
        sofa::component::linearsolver::EigenVectorWrapper<VNCS::Real> bEigenWrapped(bEigen);
        mop.multiVector2BaseVector(b.id(), &bEigenWrapped, &accessor);
        b_delta = bEigen.transpose() * m_preconditioner.value() * bEigen;
    } else {
        b_delta = b.dot(b);
    }

    threshold = residual_tolerance_threshold * residual_tolerance_threshold * b_delta;

    // INITIAL RESIDUAL
    // Do the A*x(0) with visitors since we did not construct the matrix A
    mop.propagateDxAndResetDf(x, s);                 // Set s = 0 and call applyJ(x) on every mechanical mapping
    mop.addMBKdx(s, m_coef, b_coef, k_coef, false);  // s = (m M + b B + k K) x

    // Project the result into the constrained space
    mop.projectResponse(s);  // BaseProjectiveConstraintSet::projectResponse(q)

    // Finally, compute the initial residual r = b - A*x
    r.eq(b, s, -1.0);  // r = b - A*x

    Eigen::VectorXd rEigen;
    sofa::component::linearsolver::EigenVectorWrapper<VNCS::Real> rEigenWrapped(rEigen);
    if (m_preconditioner) {
        rEigen.resize(n);
        mop.multiVector2BaseVector(r.id(), &rEigenWrapped, &accessor);
        rEigen = m_preconditioner.value() * rEigen;                     // It doesn't alias because m_preconditioner is diagonal
        mop.baseVector2MultiVector(&rEigenWrapped, p.id(), &accessor);  // Save it directly in p, (p = P * r)
        mop.projectResponse(p);                                         // p = S * P * r
    } else {
        p.eq(r);
    }

    delta = r.dot(p);

    // ITERATIONS
    while (delta > threshold && currentIteration < maximum_number_of_iterations) {
        // 1. Compute q(k+1) = A*p(k)
        mop.propagateDxAndResetDf(p, s);                 // Set s = 0 and call applyJ(p) on every mechanical mapping
        mop.addMBKdx(s, m_coef, b_coef, k_coef, false);  // s = (m M + b B + k K) p

        // We need to project the residual into the constrained space since the constraints haven't been
        // added to the matrix (the matrix is never constructed) and addMBKdx of the force fields does not
        // take constraints into account.
        mop.projectResponse(s);  // BaseProjectiveConstraintSet::projectResponse(q)

        // 2. Compute x(k+1) and r(k+1)
        alpha = delta / p.dot(s);
        x.peq(p, alpha);   // x = x + alpha*p
        r.peq(s, -alpha);  // r = r - alpha*q

        if (m_preconditioner) {
            mop.multiVector2BaseVector(r.id(), &rEigenWrapped, &accessor);
            rEigen = m_preconditioner.value() * rEigen;
            mop.baseVector2MultiVector(&rEigenWrapped, h.id(), &accessor);  // Save it directly in h, (h = P * r)
        } else {
            h.eq(r);
        }

        // 3. Compute the new residual norm
        VNCS::Real deltaOld = delta;
        delta = r.dot(h);

        // 4. Compute the next search direction
        p.eq(h, p, delta / deltaOld);  // p = h + beta*p
        mop.projectResponse(p);

        ++currentIteration;
    }

    auto log = spdlog::get("VNCS");
    if (log) {
        log->info("Conjugate gradient finished");
        log->info("Iterations = {}/{}", currentIteration, maximum_number_of_iterations);
        log->info("Error = {:.4e}/{:.4e}", delta, threshold);
    }
}
```
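[Editor's note] To see the algorithm stripped of the SOFA plumbing, here is a self-contained sketch of the same preconditioned CG loop in plain C++, solving a small dense system with a Jacobi preconditioner P = diag(A)^-1 standing in for the diagonal preconditioner (no SOFA or Eigen types; the visitor-based matrix-free product is replaced by an explicit mat-vec):

```cpp
#include <cassert>
#include <cmath>
#include <vector>

using Vec = std::vector<double>;
using Mat = std::vector<Vec>;

static double dot(const Vec& a, const Vec& b)
{
    double s = 0.0;
    for (std::size_t i = 0; i < a.size(); ++i) s += a[i] * b[i];
    return s;
}

static Vec matVec(const Mat& A, const Vec& x)
{
    Vec y(x.size(), 0.0);
    for (std::size_t i = 0; i < A.size(); ++i)
        for (std::size_t j = 0; j < A[i].size(); ++j)
            y[i] += A[i][j] * x[j];
    return y;
}

// Jacobi-preconditioned CG: solves A x = b for SPD A, with P = diag(A)^-1.
// Mirrors the structure of the solver above: h = P r and delta = r.h.
Vec pcgSolve(const Mat& A, const Vec& b, int maxIterations, double tolerance)
{
    const std::size_t n = b.size();
    Vec x(n, 0.0);
    Vec diagInv(n);
    for (std::size_t i = 0; i < n; ++i) diagInv[i] = 1.0 / A[i][i];

    Vec r = b;  // initial residual r = b - A*x, with x = 0
    Vec h(n), p(n);
    for (std::size_t i = 0; i < n; ++i) p[i] = diagInv[i] * r[i];  // p = P r
    double delta = dot(r, p);
    const double threshold = tolerance * tolerance * dot(b, b);

    for (int k = 0; k < maxIterations && delta > threshold; ++k) {
        const Vec s = matVec(A, p);              // s = A p
        const double alpha = delta / dot(p, s);
        for (std::size_t i = 0; i < n; ++i) {
            x[i] += alpha * p[i];                // x = x + alpha*p
            r[i] -= alpha * s[i];                // r = r - alpha*s
        }
        for (std::size_t i = 0; i < n; ++i) h[i] = diagInv[i] * r[i];  // h = P r
        const double deltaOld = delta;
        delta = dot(r, h);
        const double beta = delta / deltaOld;
        for (std::size_t i = 0; i < n; ++i) p[i] = h[i] + beta * p[i]; // next direction
    }
    return x;
}
```

The constraint projections (`projectResponse`) have no analogue here because a plain dense system has no constrained subspace to project into.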
4 July 2021 at 09:20 #19919 — Hugo
Thanks a lot for sharing, JuanJo!
Best
Hugo