Monday 28 July 2014

Week 10

This week, I removed the redundant store_trans parameter from the context object. The matrix capturing the unimodular transformations is now always updated unless the user passes NULL for it. Also, in the special case where the user supplies a d x d identity and the input basis has more columns than rows (i.e. the embedding dimension exceeds the lattice dimension), I avoid updating the basis vectors during the reduction itself and instead recover the reduced basis with a single matrix multiplication at the end.
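
For illustration, a minimal usage sketch, assuming the wrapper interface described in these posts (the default context initialiser shown here is also an assumption):

    #include "fmpz_lll.h"

    int main(void)
    {
        fmpz_mat_t B, U;
        fmpz_lll_t fl;

        fmpz_mat_init(B, 3, 4);            /* d = 3 rows, n = 4 columns */
        fmpz_mat_init(U, 3, 3);
        fmpz_mat_one(U);                   /* d x d identity, so U records
                                              the change of basis */
        fmpz_lll_context_init_default(fl); /* assumed initialiser */

        /* ... fill B with a lattice basis here ... */

        /* U is updated alongside B; pass NULL instead of U when the
           transformation is not wanted */
        fmpz_lll_wrapper(B, U, fl);

        fmpz_mat_clear(B);
        fmpz_mat_clear(U);
        return 0;
    }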

Also, as the mpf and wrapper versions of the LLL subroutines are supposed to guarantee that the output is indeed LLL-reduced, we need to check this as efficiently as possible. This means a floating-point test for reducedness should be tried before resorting to the exact arithmetic version (which is much slower). Thus, I've added is_reduced functions to the module which first test using doubles, escalate to mpfs if the double test cannot certify reducedness, and finally fall back to exact fmpq arithmetic.
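
In outline, the cascade looks something like this sketch (the three helper names are hypothetical stand-ins for the double, mpf and fmpq based tests):

    /* is_reduced_d, is_reduced_mpf and is_reduced_exact are hypothetical
       stand-ins for the three tests described above */
    int is_reduced_cascade(const fmpz_mat_t B, double delta, double eta)
    {
        /* cheap double-precision test: a positive answer certifies
           reducedness, a negative one is inconclusive */
        if (is_reduced_d(B, delta, eta))
            return 1;

        /* try again with multiprecision floats before paying for
           exact arithmetic */
        if (is_reduced_mpf(B, delta, eta))
            return 1;

        /* exact rational (fmpq) arithmetic settles the question */
        return is_reduced_exact(B, delta, eta);
    }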

An initial implementation of the ULLL function was also added. It differs from the one in flint-1.6 in that it does not adjoin anything to the input matrix; instead, it uses the option to store the unimodular transformations. Also, the original did not recurse on the truncated data, but the current code does. However, it still needs to be tested.
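
Roughly, one round of the truncation idea can be pictured as follows (a sketch only: the choice of shift and the recursion on the truncated data are elided, and the wrapper call assumes the interface described in these posts):

    void ulll_step(fmpz_mat_t B, const fmpz_lll_t fl, ulong shift)
    {
        slong d = fmpz_mat_nrows(B), n = fmpz_mat_ncols(B);
        fmpz_mat_t T, U, BT;

        fmpz_mat_init(T, d, n);
        fmpz_mat_init(U, d, d);
        fmpz_mat_init(BT, d, n);
        fmpz_mat_one(U);

        /* truncate: keep only the leading bits of each entry */
        fmpz_mat_scalar_tdiv_q_2exp(T, B, shift);

        /* reduce the truncated basis, recording the transformation */
        fmpz_lll_wrapper(T, U, fl);

        /* apply the same unimodular transformation to the full basis */
        fmpz_mat_mul(BT, U, B);
        fmpz_mat_swap(B, BT);

        fmpz_mat_clear(T);
        fmpz_mat_clear(U);
        fmpz_mat_clear(BT);
    }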

This week, I plan to add test code and documentation for the ULLL function and document the functions for checking LLL-reducedness.

Tuesday 22 July 2014

Week 9

This week, I updated the removals code to use a heuristic lower bound (mentioned by Curtis on flint-devel) on the final GS norm when removing the last vector during the execution of the reduction algorithm (after a failed Lovasz test involving kappa = d - 1). This was required because the proof of the relevant theorem in the L^2 paper assumes the basis is LLL-reduced while deriving the error bound on the accuracy of the norm, and this assumption does not yet hold for the final vector mid-execution. This doesn't matter in standard LLL, of course, but it matters here because we need to be sure about the accuracy, lest we remove something useful.
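
The guarded test amounts to something like the following (a sketch with made-up names: rr is the floating-point approximation of the squared GS norm of the last vector, and eps a heuristic error allowance):

    /* remove the last vector only if a heuristic lower bound on its
       true squared GS norm still exceeds the removal bound N */
    int ok_to_remove(double rr, double eps, double N)
    {
        return rr * (1.0 - eps) > N;
    }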

Besides this, I also unified the code for the cases where the exact Gram matrix is computed internally or supplied as input, as mentioned in the todo section of my previous post. This required factoring the row exponents out of the Gram matrix rather than the basis itself, because the latter is not always available (i.e. when fl->rt == GRAM).
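
The exponent factoring can be sketched as below (illustrative only; the names are hypothetical): each row of the exact Gram matrix G is scaled by a power of two so that its entries fit comfortably in doubles.

    #include <math.h>
    #include "fmpz_mat.h"

    void gram_rows_to_d_2exp(double ** appG, slong * rexp,
                             const fmpz_mat_t G)
    {
        slong i, j, d = fmpz_mat_nrows(G);

        for (i = 0; i < d; i++)
        {
            slong bits = 0, e;

            /* exponent of the largest entry in row i */
            for (j = 0; j < d; j++)
                bits = FLINT_MAX(bits,
                        (slong) fmpz_bits(fmpz_mat_entry(G, i, j)));
            rexp[i] = bits;

            /* double approximation of the row scaled by 2^(-bits) */
            for (j = 0; j < d; j++)
            {
                double m = fmpz_get_d_2exp(&e, fmpz_mat_entry(G, i, j));
                appG[i][j] = ldexp(m, e - bits);
            }
        }
    }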

Also, I changed the code dealing with the matrix capturing the unimodular transformations so that it makes no assumptions about the column dimension. Earlier, I assumed U to be a d x d matrix to be updated so as to satisfy the relation B* = UB, where B* is the basis obtained by LLL-reducing B, i.e. U was the change-of-basis matrix. Now, U can be any matrix with the same number of rows as B.
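
As a quick sanity check of the relation (assuming the same wrapper interface as in the sketches above), starting U from the identity should make U exactly the change-of-basis matrix:

    #include "fmpz_lll.h"

    int transform_is_consistent(const fmpz_mat_t B_orig, const fmpz_lll_t fl)
    {
        slong d = fmpz_mat_nrows(B_orig), n = fmpz_mat_ncols(B_orig);
        fmpz_mat_t B, U, UB;
        int ok;

        fmpz_mat_init_set(B, B_orig);
        fmpz_mat_init(U, d, d);
        fmpz_mat_init(UB, d, n);
        fmpz_mat_one(U);

        fmpz_lll_wrapper(B, U, fl);   /* B is reduced, U updated in step */

        fmpz_mat_mul(UB, U, B_orig);  /* B* = UB should hold exactly */
        ok = fmpz_mat_equal(UB, B);

        fmpz_mat_clear(B);
        fmpz_mat_clear(U);
        fmpz_mat_clear(UB);
        return ok;
    }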

I think the main implementation of LLL and LLL with removals is completed now, modulo a few (hopefully minor) changes. So, I plan to at least start working on the ULLL function this week.

Monday 14 July 2014

Week 8

The eighth week of GSoC involved a lot of debugging to find the reasons for the bad behaviour of the removals function. In theory, the last vector can be removed from the basis if its squared GS length exceeds the bound at any point during the execution of the algorithm. However, this is not so straightforward in practice when the GSO is known only approximately. I suspect this is why the version in flint-1.6 removed vectors only at the end: the algorithm behaves much better in that case, i.e. the computed norms are more accurate.

I added the wrapper function for LLL with removals optimised for knapsack lattices, along with test code for it. Knapsack LLL differs from the textbook version in that it occasionally performs early size reductions on the input basis, which speeds things up in the knapsack case. Speaking of LLL with removals, the code was modified to remove the numerical inaccuracies plaguing it. Earlier, because a floating-point approximation of the Gram-Schmidt orthogonalisation was used, a norm could incorrectly be flagged as exceeding the bound, which led to the removal of some useful vectors. The documentation was also brought up to date with the module. Thus, the prerequisites for ULLL are now complete.

This week, I plan to unify the code for performing LLL in the two cases where the input is a lattice basis whose exact Gram matrix is used for computing the GSO, and where the input is the Gram matrix itself.

Tuesday 8 July 2014

Week 7

This week, I added support for LLL with removals on a Gram matrix, which finds application in vector rational number reconstruction. Also, the inaccuracy checks (incorrect Lovasz tests and too many Babai loops) were improved to be similar to those used in fpLLL.
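
The Babai-loop check can be pictured like this (a hypothetical sketch; the loop limit and the helper are made up):

    int size_reduction_pass(void);  /* hypothetical: one pass of Babai
                                       size reduction, nonzero when all
                                       |mu[kappa][j]| <= eta */

    int babai_with_guard(void)
    {
        const int loop_limit = 20;  /* made-up threshold */
        int iter;

        for (iter = 0; iter < loop_limit; iter++)
            if (size_reduction_pass())
                return 0;           /* size-reduced successfully */

        return -1;                  /* too many loops: the precision is
                                       presumed insufficient and the
                                       caller should retry with more */
    }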

Also, I added the LLL function specialised to knapsack-type lattices, for which the Babai functions were implemented last week. It performs early size reductions, which tend to make things faster for knapsack-type lattice bases. It is also a prerequisite for the ULLL with removals function. Another feature implemented was early removal of vectors in LLL with removals: vectors whose squared GS lengths exceed the input bound are now removed from the basis during the reduction algorithm itself, avoiding the unnecessary overhead of keeping them updated during the computations.
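
In outline, the early-removal step is just this (a sketch with made-up names; gs_lower stands for a safe floating-point underestimate of the squared GS length of the current last row):

    /* if even an underestimate of the last squared GS length exceeds
       the bound N, row d-1 can be dropped from the computation now,
       so no further updates are spent on it */
    slong maybe_remove_last(slong d, double gs_lower, double N)
    {
        return (gs_lower > N) ? d - 1 : d;
    }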

This week, I plan to write the wrapper function for knapsack LLL with removals and fix any remaining loose ends. The plan is to get the existing module ready before I start working on the actual ULLL algorithm itself.

Thursday 3 July 2014

Week 6

Well, this week's update is late; sorry for that. I'll try to summarize the preceding week here. I added the heuristic version of LLL with removals, along with its test code and documentation. In the MPFR version in flint-1.6, the squared GS length is divided by 8 instead of 2 as mentioned in the comments. I don't know whether this is an oversight or an extra precaution; if the latter, I see no reason for it. As I mentioned in my previous post, however, I avoid the division altogether. LLL with removals was completed when its arbitrary precision variant was added and the wrapper function written. Its return value is the new dimension of the basis to be considered for further computation.

This brings us to the third major and, perhaps, most important part of this project: implementing ULLL with removals. Its importance stems from its sub-quadratic time complexity in the size of the entries; it is also numerically more stable. It isn't described in the literature, and hence Bill graciously offered to write a paper on it for reference. Before I start work on the actual ULLL function, however, I need to implement an LLL with removals optimised for knapsack-type lattices, as it is used in ULLL. This requires a few Babai-like functions as well. These procedures reduce the kappa-th vector only against the vectors up to cur_kappa (an index before which the basis is assumed to be LLL-reduced), and not up to kappa - 1. The Babai functions added also differ from the earlier versions in the way they compute dot products.
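
A rough sketch of the restricted reduction (the names and the mu bookkeeping are illustrative; mu is assumed to hold the GSO coefficients with the convention mu[j][j] == 1):

    #include <math.h>
    #include "fmpz_mat.h"

    /* size-reduce row kappa against rows 0..cur_kappa-1 only, rather
       than against all rows below kappa as in the usual Babai step */
    void babai_upto(fmpz_mat_t B, double ** mu, slong kappa, slong cur_kappa)
    {
        slong j, k, n = fmpz_mat_ncols(B);
        fmpz_t r;

        fmpz_init(r);
        for (j = cur_kappa - 1; j >= 0; j--)
        {
            /* nearest integer to the GSO coefficient */
            double rd = floor(mu[kappa][j] + 0.5);
            fmpz_set_d(r, rd);

            if (rd != 0)
            {
                /* B[kappa] <- B[kappa] - r * B[j] */
                for (k = 0; k < n; k++)
                    fmpz_submul(fmpz_mat_entry(B, kappa, k), r,
                                fmpz_mat_entry(B, j, k));

                /* keep row kappa's GSO coefficients in sync */
                for (k = 0; k <= j; k++)
                    mu[kappa][k] -= rd * mu[j][k];
            }
        }
        fmpz_clear(r);
    }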

Along with the progress reported on the mailing list, one important thing to do now is to update the documentation, which is lagging behind. I hope I'll be able to find time for this task this week.

"The documentation needs documentation."   -- a Bellevue Linux Users Group member, 2005

Monday 23 June 2014

Week 5

The previous week was quite interesting. I wrote the wrapper function fmpz_lll_wrapper, documented it and added test code for it. Thus, the first milestone of the project was reached. Improvements to the module implemented thus far were made according to mentor comments. In particular, the is_reduced functions and fmpz_mat_lll were improved to avoid storing the GS vectors, as suggested by Curtis on the mailing list. The function fmpz_lll_wrapper should now provide functionality identical to fpLLL and flint-1.6, but an additional feature in this version is that the Gram matrix can be supplied for reduction instead of the lattice basis. This should be useful because the only other software I've seen that allows passing a Gram matrix as input is Magma, which isn't open source.

The next step is LLL with removals. There seem to be two definitions of LLL_with_removal in the literature; both accept a basis (say B) and a bound (say N). The intuitive definition is that B is LLL-reduced with bound N if every vector b in B satisfies norm(b) <= N. However, according to these papers by Mark van Hoeij et al., it actually concerns the lengths of the Gram-Schmidt (GS) vectors, i.e. the last vector is removed if its GS length is greater than N. Of course, for computational simplicity (and accuracy) I take N to be a bound on the squared GS lengths. Another point to note is that in flint-1.6, the bound is compared with half the squared GS length, probably because a floating-point approximation of the GS lengths is used; this is done to avoid removing something valuable. Also, the documentation there is a bit ambiguous, as it says the bound is for the "target vectors". I'm going with van Hoeij's definition because he uses it in the context of factoring polynomials, which matches a likely use of LLL in FLINT.
I am not sure whether LLL with removals needs a version taking a Gram matrix as input, as the only mentions of it in the literature relate to factoring polynomials, where a lattice basis is passed to the procedure. So, I haven't written a Gram matrix variant for now, though I may implement it later if it turns out to be useful.
My version of LLL_with_removal works even when I directly compare the bound with the squared GS norm, because I avoid conversion to doubles and instead compare fmpz_t's. This was ascertained from the test code for fmpz_lll_d_with_removal. Documentation for fmpz_lll_d_with_removal and fmpz_mat_is_reduced_with_removal was added to the corresponding modules.
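
The point about exactness in miniature (the names here are illustrative): with both quantities held as fmpz integers, the removal test is a plain comparison, with no fudge factor for rounding error:

    /* exact comparison: remove only if the squared GS norm really
       exceeds the bound */
    int gs_exceeds_bound(const fmpz_t gs_norm_sq, const fmpz_t bound)
    {
        return fmpz_cmp(gs_norm_sq, bound) > 0;
    }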

This week, I plan to write the fmpz_lll_d_heuristic_with_removal function, along with its test code and documentation. If things are smooth, maybe I can even work on the arbitrary precision variant.

Monday 16 June 2014

Week 4

This last week has been a productive one. I implemented check_babai_heuristic (the Babai version using mpfs) and documented it. Helper functions for this were also implemented in the fmpz and fmpz_vec modules. The fmpz_lll_mpf2 function was also written. The "2" in the name signifies that it takes the precision to be used for storing the temporary variables and the GSO data (and the approximate Gram matrix, if fl->rt == APPROX) as an argument; in other words, mpf_init2 is used for initialising any intermediate mpf_t's. The wrapper for this is fmpz_lll_mpf, which increases the precision until the LLL is successful or, God forbid, the precision maxes out. Test code for lll_mpf was added. Also, the test code now uses the fmpz_mat versions of is_reduced and is_reduced_gram, which use exact arithmetic to check whether a basis is LLL-reduced and help cover edge cases.
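
The driver can be pictured as follows (a sketch: the starting precision, the doubling schedule, the cap and the failure convention of fmpz_lll_mpf2 are all assumptions):

    int lll_mpf_sketch(fmpz_mat_t B, fmpz_mat_t U, const fmpz_lll_t fl)
    {
        ulong prec = 64;                   /* assumed starting precision */
        const ulong prec_max = 1UL << 20;  /* made-up safety cap */
        int status;

        /* assumed convention: fmpz_lll_mpf2 returns nonzero on a
           precision failure */
        while ((status = fmpz_lll_mpf2(B, U, prec, fl)) != 0
               && prec < prec_max)
            prec *= 2;                     /* retry with more precision */

        return status;
    }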

Also, there's some good news to report. The lll_d and lll_d_heuristic functions now work on all test cases without failure for Z_BASIS input matrices! With GRAM, they sometimes fail because doubles lack the precision to store large values. I can confirm that fmpz_lll_d works on the 35-dimensional lattice from the web page for the L^2 paper by Nguyen and Stehle (tested against fplll-4.0.4). I also tested fmpz_lll_mpf with the 55-dimensional lattice which makes NTL's LLL_FP (with delta = 0.99) loop forever. It works! :) The output is the same as when the matrix is passed to fpLLL with the "-m proved" option.

This week, I look forward to writing the LLL wrapper fmpz_lll_wrapper and documenting it, besides improving the parts of the module implemented so far and fixing bugs, if any. I also plan to document those functions which were left undocumented, and to move fmpq_mat_lll to the fmpz_mat module and rename it to avoid confusion.