mersenneforum.org

mersenneforum.org (https://www.mersenneforum.org/index.php)
-   CADO-NFS (https://www.mersenneforum.org/forumdisplay.php?f=170)
-   -   CADO NFS (https://www.mersenneforum.org/showthread.php?t=11948)

EdH 2020-04-01 17:03

I have run into a problem that I already have here, but can work around:

When I run an SNFS job, the server doesn't start any local sievers. Here, I separately start some on the server machine, but I can't figure out how to do so in my Colab session.
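The manual workaround looks roughly like this (a sketch with placeholder values; the running server prints the exact client command, including the --certsha1 fingerprint, at startup):

```
./cado-nfs-client.py --server=https://<server-host>:<port> --certsha1=<fingerprint>
```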

-------
When you use remdups, do you decompress the *.gz files first? When I tried to cat all the *.gz files and send them through remdups4, I ended up with 1 good relation and a couple hundred million bad relations.

Xyzzy 2020-04-01 17:53

[C]zcat relations.dat.gz | ./remdups4 10000 > relations.dat[/C]

EdH 2020-04-01 18:33

Thanks, Xyzzy!

So, they must be decompressed first. I've been "cat"ing the .gz files into a composite for msieve, which has no trouble with the relations being compressed. I will play with this for a bit and see whether it makes more sense for me to decompress and use remdups or simply let msieve do its "thing."

EdH 2020-04-01 19:23

I got back 31k relations from my first sieve run in my Colab instance and was able to get those added to the msieve parallel run. Hopefully, that will help with the matrix build.

However, I was too late to try to add the relations to the CADO-NFS run, as it had already declared the move into "Filtering - Merging: Starting."

I wonder whether the associated *.stderr0 file for the relations might allow them to be added to the server run, or whether just existing within the directory would be enough. Something for another experiment on another day. . .

EdH 2020-04-02 13:34

Well, . . . My server finally declared it had enough relations at 274402507 and told all the clients to "Knock it off!"

Unfortunately, it then got stuck during the merge, using 99.8% of 15.6 GiB RAM and 50% of 8.0 GiB swap. I'm letting it sit, out of curiosity, for the time being.

Disappointingly, my msieve parallel (to the CADO-NFS) process didn't think the relations sufficient to build a matrix:
[code]**remdups4 was run prior**
Wed Apr 1 20:03:57 2020 found 6580868 hash collisions in 172527688 relations
Wed Apr 1 20:04:05 2020 added 3657741 free relations
Wed Apr 1 20:04:05 2020 commencing duplicate removal, pass 2
Wed Apr 1 20:09:23 2020 found 0 duplicates and 176185429 unique relations
Wed Apr 1 20:09:23 2020 memory use: 506.4 MB
Wed Apr 1 20:09:23 2020 reading ideals above 81068032
Wed Apr 1 20:09:23 2020 commencing singleton removal, initial pass
Wed Apr 1 20:22:05 2020 memory use: 5512.0 MB
Wed Apr 1 20:22:05 2020 reading all ideals from disk
Wed Apr 1 20:22:44 2020 memory use: 3903.9 MB
Wed Apr 1 20:22:52 2020 commencing in-memory singleton removal
Wed Apr 1 20:23:00 2020 begin with 176185429 relations and 182285468 unique ideals
Wed Apr 1 20:24:36 2020 reduce to 77811022 relations and 69781418 ideals in 21 passes
Wed Apr 1 20:24:36 2020 max relations containing the same ideal: 32
Wed Apr 1 20:24:41 2020 reading ideals above 720000
Wed Apr 1 20:24:41 2020 commencing singleton removal, initial pass
Wed Apr 1 20:32:30 2020 memory use: 1506.0 MB
Wed Apr 1 20:32:30 2020 reading all ideals from disk
Wed Apr 1 20:32:57 2020 memory use: 3071.3 MB
Wed Apr 1 20:33:06 2020 keeping 78830602 ideals with weight <= 200, target excess is 406990
Wed Apr 1 20:33:14 2020 commencing in-memory singleton removal
Wed Apr 1 20:33:21 2020 begin with 77811022 relations and 78830602 unique ideals
Wed Apr 1 20:34:02 2020 reduce to 77808983 relations and 78828557 ideals in 6 passes
Wed Apr 1 20:34:02 2020 max relations containing the same ideal: 200
Wed Apr 1 20:34:08 2020 filtering wants 1000000 more relations
[/code]I'm currently doing some more sieving in an attempt to convince msieve it should take on the LA for this project.

On a positive note, I was successful in running the 100k-150k area via my Colab instance and retrieved around 150k relations from that experiment.

RichD 2020-04-02 14:29

From my limited experience, you will need about 180M (or greater) unique relations for a 31-bit job. The sweet spot would be around 200M if you want to build a matrix at TD=120-130.

EdH 2020-04-02 16:29

Thanks, RichD. I became impatient with my stuck server, so I restarted it to gather more relations for my msieve run. I did this partly because I can't get any of my other machines to accept clients when I run a CADO-NFS server on them. I use the same setup as my current server (excepting, of course, IPs), but cannot get clients running.
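For reference, these are the server-side parameters that, as I understand CADO-NFS's options, govern which clients may connect (names per the parameter documentation; the subnet is a placeholder to adjust):

```
server.address = 0.0.0.0          # bind all interfaces, not only localhost
server.whitelist = 192.168.1.0/24 # placeholder: the subnet clients connect from
```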

EdH 2020-04-07 11:42

[QUOTE=RichD;541596]From my limited experience, you will need about 180M (or greater) unique relations for a 31-bit job. The sweet spot would be around 200M if you want to build a matrix at TD=120-130.[/QUOTE]
Not sure if you're following my current thread on this run (in my blog area), but you were right on with 180M:
[code]
Thu Apr 2 21:15:02 2020 found 0 duplicates and 181713072 unique relations[/code]

EdH 2020-04-10 16:31

I'm working on a Colab project with CADO-NFS and have a couple questions that I will eventually discover the answers to, if I don't get them here, but I thought I'd try here first, and possibly save some work:

1. Is the siever (las) a complete stand-alone program which can run on its own without any of the rest of the CADO-NFS package?

2. Is there a method of compiling las without building the entire CADO-NFS package?

3. Is there any advantage to recompiling las for different Xeon CPUs across different Colab sessions?

Thanks for any thoughts. . .

VBCurtis 2020-04-10 16:49

1. Definitely.
2. Yes, but I don't know how. The documentation refers to the possibility of compiling just las for windows, even though the overall package doesn't compile on windows.
3. Probably? There are massive speed differences among architectures in las, and I don't imagine the full range of architecture-specific optimizations exists within a single compiled binary; that seems particularly unlikely since there is no officially released binary, and each one is, by design, self-compiled by its user. This is just a guess based on how the package works, rather than from any personal experience with the code.
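On point 3, a cheap first check (my suggestion, not anything from the CADO-NFS docs) is whether the Colab sessions actually differ in the vector extensions a `-march=native` rebuild could exploit:

```shell
# List which of the SIMD extensions commonly relevant to sieving speed
# this session's CPU reports (Linux /proc/cpuinfo). If every Colab session
# shows the same set, recompiling las per session is unlikely to matter.
flags=$(grep -m1 '^flags' /proc/cpuinfo | tr ' ' '\n' \
        | grep -E '^(sse4_2|avx|avx2|avx512f)$' | sort -u)
echo "${flags:-none of the checked flags present}"
```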

EdH 2020-04-16 03:13

Well, I'm becoming "annoyed!" I would probably have been finished with sieving if I hadn't had to restart the server 8 times to get it to assign WUs. I'm sure I'm using the install that never gave this trouble in the past. Is it possible it has something to do with the size of the composite/corresponding params?

