
mersenneforum.org (https://www.mersenneforum.org/index.php)
-   Msieve (https://www.mersenneforum.org/forumdisplay.php?f=83)
-   -   Msieve 1.44 feedback (https://www.mersenneforum.org/showthread.php?t=13067)

jasonp 2010-03-06 17:20

Those last parameters are for line sieving with msieve; they're not needed at all if you don't do any line sieving, and if you do want line sieving but don't specify them, defaults will be chosen for you. Those defaults have not been tuned, but that most likely doesn't matter unless you skip lattice sieving entirely.

10metreh 2010-03-06 17:36

[QUOTE=tmorrow;207557]Furthermore, which polynomial from the 7871 in the *.dat.p file do I use? The one with the largest or smallest norm?[/QUOTE]

The e value is the best indicator: the poly with the largest e is usually the best one. For larger jobs, though, it can pay to take a few of the top candidates and test-sieve them against each other to see which sieves best.
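The selection rule above (take the largest e, and for big jobs shortlist a few for test-sieving) can be sketched in a few lines; the candidate names and scores here are made up, and the parsing of the .dat.p file is assumed to have already happened:

```python
# Hypothetical parsed data: (polynomial id, Murphy e-score) pairs,
# as you might extract from a *.dat.p file.
candidates = [("poly_a", 2.1e-11), ("poly_b", 3.4e-11), ("poly_c", 3.2e-11)]

# Best single choice: the polynomial with the largest e value.
best = max(candidates, key=lambda c: c[1])

# For larger jobs: shortlist the top few and test-sieve each one.
shortlist = sorted(candidates, key=lambda c: c[1], reverse=True)[:2]

print(best[0])                      # the highest-scoring polynomial
print([name for name, _ in shortlist])
```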

tmorrow 2010-03-06 23:23

Thanks for all the help guys, I've created a *.fb file using the "best" polynomial from *.dat.p following your suggestions. I'm using factmsieve.py and it has moved onto the sieving phase now.

sleigher 2010-03-24 16:54

Jason, I am trying to run this linear algebra stage and am having some error about allocating memory.

[FONT=Courier New]failed to calloc 1726416288 bytes[/FONT]

My command line looks like this

./msieve -nc2 -s bignum.dat -g 0 -r 70000000

Not sure that is right. Someone else said to use the -r option to use only the first X relations in the file.

Any idea about the calloc failure?

10metreh 2010-03-24 17:42

How much RAM does your machine have?

sleigher 2010-03-24 17:50

It has 8 GB of RAM. I tried on that machine and another with 4 GB. Same thing.....

jasonp 2010-03-24 18:17

You were given incorrect advice; -r tells the demo binary to stop [i]sieving[/i] after a given number of relations.

To make msieve run the filtering on less than the complete dataset, use '-nc1 0,<relations_to_use>'; then, if that succeeds, continue with '-nc2 -nc3' in a separate run.
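For concreteness, using the dataset name from the earlier post, the two-step run would look something like this (the 60000000 relation cap is just an illustrative value, not a recommendation):

```shell
# Step 1: filtering only, capped at a hypothetical 60M relations
./msieve -s bignum.dat -nc1 0,60000000

# Step 2: if filtering succeeds, run linear algebra and the square root
./msieve -s bignum.dat -nc2 -nc3
```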

Also, don't blindly trust what other people say, run 'msieve -h' and read what the options really do.

sleigher 2010-03-24 18:24

I did run -h and that's why I said I wasn't sure about that -r option. I guess I didn't know I needed to run filtering first.

Thanks Jason

sleigher 2010-03-24 20:14

So I am trying to understand what happens during filtering.

It seems that it reads the .dat file and puts the data in the .dat.hc file until the second pass, and then on the second pass puts the output in the .dat.lp file? The hc and lp files are binary, not ASCII. Is that an accurate picture of what's happening?

jasonp 2010-03-24 20:48

Duplicate removal uses one set of files, which then become the input to singleton removal. Singleton removal creates the .lp file and uses it to remove singletons, possibly in several passes that each create a new .lp file, and finally produces a .s file. The .s file is the input to the merge phase, where a great deal of magic happens and the .cyc file is created; the .cyc file provides the mapping between columns of the matrix and the line numbers of the relations in the .dat file that participate in each column.

I'm being purposely vague because a new filtering run always does each of these steps in order and all of these files are recreated from scratch.
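As a toy illustration of the singleton-removal step described above: a relation is a singleton if it contains a prime ideal that appears in no other surviving relation, and removing one relation can create new singletons, which is why it may take several passes. Here is a minimal sketch; the set-of-primes representation is purely illustrative and has nothing to do with msieve's actual file formats:

```python
from collections import Counter

def remove_singletons(relations):
    """Iteratively drop relations containing a prime that appears
    in only one surviving relation, repeating until a pass removes
    nothing (a fixed point is reached)."""
    survivors = list(relations)
    while True:
        counts = Counter(p for rel in survivors for p in rel)
        kept = [rel for rel in survivors
                if all(counts[p] > 1 for p in rel)]
        if len(kept) == len(survivors):
            return kept
        survivors = kept

# Each relation is modeled as a set of primes.
rels = [{2, 3}, {3, 5}, {5, 7}, {7, 2}, {11, 13}]
# {11, 13} is removed (both primes occur only once); the 4-cycle survives.
print(remove_singletons(rels))
```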

frmky 2010-03-24 23:35

This is in the wrong thread, but I've been meaning to ask this. Is there a reason that the lp file is not deleted after it's no longer needed? It can be a multi-GB file for large jobs, so I've gotten into the habit of deleting it once LA is in progress.


All times are UTC.
