Maximum product transversal + block pivoting instead of pivoting by maximum element.
Member INMOST::AbstractMatrixReadOnly< Var >::SVD (AbstractMatrix< Var > &U, AbstractMatrix< Var > &Sigma, AbstractMatrix< Var > &V, bool order_singular_values=true, bool nonnegative=true) const
Different types of operators: time-stepping, local point-wise (curl, grad on an element), global integrators (div, curl on the domain), interpolators, and inter-mesh interpolators. Each has its own functions. The implementation should be flexible enough to avoid limiting any of them.
Ultimately, operators should stack together; for the staggered incompressible Navier-Stokes equations: Time(nU) + Projection(Divergence(ConvectionDiffusion(nU,\mu,Reconstruction(nU)))) - Grad(P) = f, Divergence(nU) = 0.
One should thoroughly check the three scenarios of function execution in a shared-memory parallel environment for different types of cells (simple tet/hex cells as well as complex polyhedral cells) and draw a conclusion on the best scenario for each condition. One of the development versions contains all the algorithms; ask for the files.
Use of markers (current variant).
Put all elements into an array with duplicates, then run std::sort followed by std::unique.
Put all elements into an array and check for duplicates with a linear scan of the array.
The algorithm inside minimizes the size of the adjacency graph for each new cell. The correct behavior is to calculate the volume of the cell for each candidate adjacency graph and choose the graph with minimal volume. This requires computing the volume of non-convex cells. For a correct volume on a non-convex cell one should find one face whose normal orientation can be determined unambiguously, then orient all edges of the cell with respect to the edge orientation of this face, and from that establish normals for all faces. Once the algorithm is implemented here it should also be implemented in the geometrical services, or vice versa.
Probably the algorithm should minimize the volume and the adjacency graph size together: among the cells whose volume is smallest within some tolerance, select the one with the smallest adjacency graph.
If the other set and the current set are sorted the same way, one may perform a narrowing traversal by retrieving mutual lower_bound/upper_bound positions in O(log(n)) operations to detect common subsets of the sorted sets. This may work well when deleting handles in small chunks; ApplyModification may benefit greatly.
Expression templates for operations(?). How would multiplication work? Efficient multiplication requires all matrix elements to be precomputed; consider item 5 instead.
(ok) template matrix type for AD variables
(ok,test) template container type for data storage.
(ok,test) option for wrapper container around provided data storage. (to perform matrix operations with existing data)
Consider a multi-threaded stack to obtain space for matrices in local operations and return values.
Class SubMatrix for Fortran-style access to a matrix.
Uniform implementation of algorithms for Matrix and SubMatrix. To achieve this: make an abstract class with an abstract element-access operator and make Matrix and SubMatrix descendants of that class.
Maybe instead of forming a set of deleted elements and subtracting that set from other sets it is better to remove each modified element individually (done; check and compare).
Parent/child elements in a set will not be replaced or reconnected; this may lead to wrong behavior (done; check and compare).
Invoking the function before loading a mesh will not renew global identifiers after the load, but neither will it unset have_global_id. There are probably too many places where global ids may become invalid with no flag being set. It may be beneficial to maintain such flags along with the updates of geometrical data, which seems to be maintained fairly well during mesh modification.
When loading a mesh with the same tag name but a different type or size, the load will fail.
When loading tags in the internal format, one should remember the definition and sparsity masks for subsequent data loading. This will cure the case when tags were already defined on the mesh with different masks and data would otherwise be read incorrectly.
Introduce a "TEMPORARY_KEEP_GHOSTED" tag that stores the processors on which a copy of the element should be kept; internally, just merge it with the "TEMPORARY_NEW_PROCESSORS" tag. This will let the user control ghosting of certain elements without invoking ExchangeMarked every time after Redistribute. This is probably already achievable via Mesh::SendtoTag, because the function fills it without clearing and ExchangeMarked performs its initial action based on SendtoTag; it remains to check that SendtoTag is properly merged with "TEMPORARY_NEW_PROCESSORS" before the call to ExchangeMarked and that received elements are not deleted by accident.
Let the user provide any integer tag as input without involving RedistributeTag.
Exchanging DATA_REFERENCE and DATA_REMOTE_REFERENCE tags is not implemented, due to the absence of any conclusion on how it should behave.
Either: search only within elements owned by the other processor, establish references, and set InvalidHandle() for elements that are not found (fairly easy; will involve search operations against owned elements for a similar entry; an efficient implementation will require bounding search trees, see TODO 49);
or: send all the referenced elements through PackElementsData and establish all the links within elements reproduced by UnpackElementsData (UnpackElementsData calls UnpackTagData with the set of unpacked elements, which makes it very convenient to establish references on the remote processor). The drawback is that exchanging a Laplacian operator in such a manner would result in the whole grid being shared among all processors.
Currently a request for deletion of elements of a lower level than a cell will simply be ignored; ensure in the future that the algorithm properly propagates deletion data from lower to upper adjacencies, so that all upper adjacencies depending on deleted lower adjacencies are deleted as well.