You're right, even with a summary. I must have been blind. Thanks again and sorry for the off-topic.
|
I typically run [C]remdups4[/C] after gathering new relations files, though I haven't been keeping track of the % dups.
Currently at 1B uniques as of this morning. |
Latest run is going:
[CODE]Sun Sep 19 16:01:22 2021  commencing relation filtering
Sun Sep 19 16:01:22 2021  setting target matrix density to 120.0
Sun Sep 19 16:01:22 2021  estimated available RAM is 192016.8 MB
Sun Sep 19 16:01:22 2021  commencing duplicate removal, pass 1
Sun Sep 19 19:12:14 2021  found 498163917 hash collisions in 1834320482 relations
Sun Sep 19 19:12:45 2021  commencing duplicate removal, pass 2
Sun Sep 19 19:36:34 2021  found 107391008 duplicates and 1726929474 unique relations
Sun Sep 19 19:36:34 2021  memory use: 16280.0 MB
Sun Sep 19 19:36:35 2021  reading ideals above 1259929600
Sun Sep 19 19:36:35 2021  commencing singleton removal, initial pass[/CODE] We'll see what happens by morning. Not sure if I quite have enough uniques yet, but it's getting there! |
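As an aside on the filtering log above: the duplicate rate it implies is easy to sanity-check, since the two pass-2 counts must add back up to the number of relations that went in. A quick sketch in Python (numbers copied from the log):

```python
# Figures from the msieve duplicate-removal log above.
raw_relations = 1834320482   # relations entering duplicate removal
duplicates = 107391008       # duplicates found in pass 2
uniques = 1726929474         # unique relations remaining

# Pass 2 partitions the input into duplicates and uniques.
assert duplicates + uniques == raw_relations

dup_rate = duplicates / raw_relations
print(f"duplicate rate: {dup_rate:.2%}")  # a shade under 6%
```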
Good to know.
[QUOTE]found 107391008 duplicates and 1726929474 unique relations[/QUOTE]Why is the duplicate percentage so low? Is that after an initial removal already? |
[QUOTE=bur;588215]Good to know.
Why is the duplicate percentage so low? Is that after an initial removal already?[/QUOTE] I had been running remdups up until now, but the relations file is taking up too much disk space at this point, so now I'm just appending new relations and handing the file straight to msieve. |
The msieve run with 1.72B uniques succeeded, but generated a 169M x 169M matrix. Trying again now, with 1.9B uniques.
|
What Q value have you sieved up to, and what is the overall duplication rate like? I'd be interested to know how well my estimates from test-sieving have held up.
|
[QUOTE=ryanp;588270]The msieve run with 1.72B uniques succeeded, but generated a 169M x 169M matrix. Trying again now, with 1.9B uniques.[/QUOTE]
That's the famous cusp (of convergence) |
[QUOTE]That's the famous cusp (of convergence)[/QUOTE]
Interesting, is it valid for most sizes of numbers? That'd be quite helpful. |
[QUOTE=bur;588343]Interesting, is it valid for most sizes of numbers? That'd be quite helpful.[/QUOTE]
Yeah, there are a few threads over the years to the same effect. It is. For smaller numbers/projects it is very difficult to hit the exact cusp, so the usual rule of thumb for automatic scripts is "if you got a matrix, don't sieve extra - it will be a wash" (filtering is itself an overhead, so together with some extra sieving you won't see any time savings). But for large projects there can be a huge difference. One could check the logs kept in the NFS@Home collection, binning by similar size; only very rarely will you see that a runner got a very large matrix and still decided to go along with it, and that's of course a fine personal preference. It depends on the resources available to the runner: sometimes they are optimizing for their own human time, and sometimes for the external compute (at the expense of the human time of doing the filtering twice or more). Many logs (very educationally!) record several filtering attempts. |
[QUOTE=charybdis;588273]What Q value have you sieved up to, and what is the overall duplication rate like? I'd be interested to know how well my estimates from test-sieving have held up.[/QUOTE]
That's unfortunately a bit tough to answer, due to the distributed nature of my sieving setup. :smile: In any case, I now have a filtering run going with [C]target_density=130[/C] and 2.024B uniques. If this succeeds, I will hopefully be able to hand it off to frmky for LA. |
Update: with 2.024B uniques and [C]target_density=130[/C], msieve produces a 133.3M x 133.3M matrix:
[CODE]Wed Sep 22 18:39:19 2021  commencing 2-way merge
Wed Sep 22 18:43:22 2021  reduce to 304431703 relation sets and 296088199 unique ideals
Wed Sep 22 18:43:22 2021  commencing full merge
Wed Sep 22 20:02:23 2021  memory use: 32549.0 MB
Wed Sep 22 20:02:54 2021  found 134281671 cycles, need 133304399
Wed Sep 22 20:03:51 2021  weight of 133304399 cycles is about 17329637475 (130.00/cycle)
Wed Sep 22 20:03:51 2021  distribution of cycle lengths:
Wed Sep 22 20:03:51 2021  1 relations: 4545972
Wed Sep 22 20:03:51 2021  2 relations: 7597027
Wed Sep 22 20:03:51 2021  3 relations: 9794025
Wed Sep 22 20:03:51 2021  4 relations: 10584633
Wed Sep 22 20:03:51 2021  5 relations: 11171953
Wed Sep 22 20:03:51 2021  6 relations: 11129931
Wed Sep 22 20:03:51 2021  7 relations: 10851523
Wed Sep 22 20:03:51 2021  8 relations: 10319041
Wed Sep 22 20:03:51 2021  9 relations: 9582186
Wed Sep 22 20:03:51 2021  10+ relations: 47728108
Wed Sep 22 20:03:51 2021  heaviest cycle: 25 relations
Wed Sep 22 20:04:20 2021  commencing cycle optimization[/CODE] I think we're at the point of diminishing returns, and frmky has graciously offered to help with the LA at this point. |
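For what it's worth, the "(130.00/cycle)" figure in the merge log is just the total cycle weight divided by the cycle count, i.e. confirmation that the [C]target_density=130[/C] setting was hit. A one-line check (numbers copied from the log):

```python
# Figures from the full-merge log above.
total_weight = 17329637475   # combined weight of all cycles
cycles = 133304399           # cycles kept after the merge

avg_weight = total_weight / cycles
assert abs(avg_weight - 130.0) < 0.01  # matches target_density=130
print(f"{avg_weight:.2f} per cycle")
```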
frmky reports that this is now in LA on a multi-GPU system, with ~170 hours to go, plus or minus some time depending on his cluster's queueing.
|
Nice, let's hope the 7 will vanish....
|
It can't - the 2^2 guarantees sigma(n) is divisible by sigma(2^2) = 7, so the next term which is sigma(n)-n will also be divisible by 7. Similarly the 7 means that sigma(n) will be divisible by 8, so the next term will also have exactly two factors of 2.
To get rid of the 2^2*7 we would need a term where the 7 is raised to an even power and the remaining prime factors contribute at most two factors of 2 to sigma(n). |
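To spell out the divisibility argument with a toy example (my own illustration, not from the thread): sigma is multiplicative, so a term n = 2^2 * 7 * m with m odd and coprime to 7 has sigma(n) = sigma(4) * sigma(7) * sigma(m) = 7 * 8 * sigma(m), and the next term sigma(n) - n therefore inherits the factor 7 and exactly two factors of 2:

```python
def sigma(n):
    """Sum of all divisors of n (naive trial division; fine for tiny n)."""
    total = 0
    d = 1
    while d * d <= n:
        if n % d == 0:
            total += d
            if d != n // d:
                total += n // d
        d += 1
    return total

assert sigma(4) == 7         # 1 + 2 + 4
assert sigma(7) == 8         # 1 + 7

n = 4 * 7 * 11               # a toy term shaped like 2^2 * 7 * (odd cofactor)
s = sigma(n)                 # = 7 * 8 * sigma(11) = 672 by multiplicativity
assert s % 7 == 0            # sigma(n) picks up the 7 from sigma(2^2)
assert s % 8 == 0            # and a factor 8 from sigma(7)

next_term = s - n            # the aliquot step
assert next_term % 7 == 0    # 7 | s and 7 | n, so the 7 survives
assert next_term % 4 == 0 and next_term % 8 != 0  # exactly two factors of 2
```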
[QUOTE=ryanp;589171]frmky reports that this is now in LA on a multi-GPU system, with ~170 hours to go, plus or minus some time depending on his cluster's queueing.[/QUOTE]
Curiously, [CODE]linear algebra completed 103057 of 127518005 dimensions (0.1%, ETA 75h33m)[/CODE] That's on 160 (!) A64FX nodes. Unfortunately I can't use that many for 3 days... |
[QUOTE=frmky;589361]Curiously,
[CODE]linear algebra completed 103057 of 127518005 dimensions (0.1%, ETA 75h33m)[/CODE] That's on 160 (!) A64FX nodes. Unfortunately I can't use that many for 3 days...[/QUOTE] Still, pretty fun to see them chew through a matrix of this size so quickly! How's the other run progressing? :smile: |
[QUOTE=ryanp;589378]How's the other run progressing? :smile:[/QUOTE]
Live off the cluster: [CODE]linear algebra completed 60978666 of 127518005 dimensions (47.8%, ETA 90h39m)[/CODE] |
Ryan-
Now that sieving has been shut down, do you have even a rough guess for the Q-range that was sieved? Or the number of raw relations? It's nice to have confirmation that 2G uniques was enough for this job; that means we can go 35/35 and still use msieve for a future C22x GNFS job. Also, did you do any A=32 sieving, or was it all I=16 (same as A=31)? |
[QUOTE=VBCurtis;589480]Ryan-
Now that sieving was shut down, do you have even a rough guess for Q-range that was sieved? Or the number of raw relations?[/QUOTE] Unfortunately I don't have a log of this, but I can keep one for future jobs. My methodology is essentially: fire off a large number of CADO jobs with different Q ranges in parallel, then periodically gather the gzipped relation files, concatenate and [C]remdups4[/C] the merged file, and try an msieve run. Then wait a few hours and repeat. The [C]remdups4[/C] output logs are gone too, I'm afraid. |
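For anyone wondering what the dedup step in that loop does conceptually: as I understand it, [C]remdups4[/C] just keeps the first copy of each relation, using hashed fingerprints to keep memory manageable. A toy Python stand-in (hypothetical, for illustration only - not the real tool):

```python
import hashlib

def dedup_relations(lines):
    """Keep the first copy of each relation line; count the duplicates."""
    seen = set()
    uniques = []
    dups = 0
    for line in lines:
        if line.startswith("#"):          # comment lines pass through untouched
            uniques.append(line)
            continue
        # Store a 64-bit fingerprint instead of the whole line to save memory.
        fp = hashlib.sha1(line.encode()).digest()[:8]
        if fp in seen:
            dups += 1
        else:
            seen.add(fp)
            uniques.append(line)
    return uniques, dups

rels = ["1,2:a,b", "3,4:c,d", "1,2:a,b"]
kept, dup_count = dedup_relations(rels)
assert dup_count == 1 and len(kept) == 2
```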
Update: frmky and I are both running the sqrt phase in parallel across the deps recovered; we should have the factors tonight or tomorrow at the latest.
|
And it's done. The factors are in factordb.
[CODE]Tue Oct 12 05:15:22 2021  p87 factor: 110535070002303791706263409283373412109635198354887941561129889163757972338872098174709
Tue Oct 12 05:15:22 2021  p134 factor: 47695867265009483977210597529529491755498941648953440078428754363841415191304636272672424639534204758691188543500394138070331091135299[/CODE] |
:bow:
Congrats Ryan and Greg on an enormous job! I make that the 7th largest public GNFS ever? Anyone working the c163 on the next line? |
[QUOTE=charybdis;590260]:bow:
Congrats Ryan and Greg on an enormous job! I make that the 7th largest public GNFS ever? Anyone working the c163 on the next line?[/QUOTE]I'm up for an easy change of pace. I'll knock it off. . . Edit: (oops) Forgot the extra CONGRATS!! for ryanp and frmky. Great run, guys! |
I guess I'm forgetting my manners! Sorry 'bout that!
If charybdis or the others, who currently have the reins wish to run the c163, please do. If no other takers, I'll break it down a little later. . . |
[QUOTE=EdH;590262]If charybdis or the others, who currently have the reins wish to run the c163, please do. If no other takers, I'll break it down a little later. . .[/QUOTE]
Happy to leave it for you. While I remember - Greg do you have the c220 postprocessing log to hand? |
Of course! I cleaned it up a bit and added annotations. I'm still amazed we can complete LA of this size in about a week.
[PASTEBIN]drHgQVW1[/PASTEBIN] |
[QUOTE=charybdis;590298]Happy to leave it for you.
. . .[/QUOTE]Thanks! I'll start it in a bit. I should have factors tomorrow, but whenever I say that, something delays me. (I'll try to mind my manners in the future.) |
Several posts on a CADO-NFS server issue were moved to a [URL="https://www.mersenneforum.org/showthread.php?t=27215"]new thread[/URL] in the CADO-NFS sub-forum.
|
[QUOTE=EdH;590325]Thanks! I'll start it in a bit. I should have factors tomorrow,
. . .[/QUOTE]And, submitted to factordb. The new c201 shed a p14 and I'm running ECM up to t50 on the remaining c187 ATM. |
[QUOTE=EdH;590605]And, submitted to factordb. The new c201 shed a p14 and I'm running ECM up to t50 on the remaining c187 ATM.[/QUOTE]t50 completed on c187. Returning it to the Team. . .
|
[QUOTE=EdH;590653]t50 completed on c187. Returning it to the Team. . .[/QUOTE]
Got a nice ECM hit: [CODE]GMP-ECM 7.0.5-dev [configured with GMP 6.2.0, GWNUM 30.4, --enable-asm-redc, --enable-assert] [ECM]
Due to incompatible licenses, this binary file must not be distributed.
Input number is 4276613063712577368288081530739263591499089531813322097225127270222273056912626535597839021815912255438970339609557004085494461513626726552563672414910189269704371033032701991682419832147 (187 digits)
Using B1=850000000, B2=15892628251516, polynomial Dickson(30), sigma=1:3081933653
Step 1 took 2094947ms
Step 2 took 503428ms
********** Factor found in step 2: 170637095614559495626472650841173273394976891446968003800528921837984319
Found prime factor of 72 digits: 170637095614559495626472650841173273394976891446968003800528921837984319
Prime cofactor 25062622217696010935394754634200029729816076320687547031252020375807414664369152943984086331891536127598793264044013 has 116 digits[/CODE] |
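One nice property of a split like this is that it's trivially self-checking: pasting the three numbers from the log into Python and multiplying confirms the factorization and the stated digit counts.

```python
# Input and factors copied verbatim from the GMP-ECM log above.
n = 4276613063712577368288081530739263591499089531813322097225127270222273056912626535597839021815912255438970339609557004085494461513626726552563672414910189269704371033032701991682419832147
p72 = 170637095614559495626472650841173273394976891446968003800528921837984319
p116 = 25062622217696010935394754634200029729816076320687547031252020375807414664369152943984086331891536127598793264044013

assert p72 * p116 == n      # the factors multiply back to the input
assert len(str(p72)) == 72  # digit counts match the log
assert len(str(p116)) == 116
assert len(str(n)) == 187
```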
p72 is a nice factor Ryan!
I finished a t45 on the c158, I'm running t50 now, and will factor it with CADO overnight if I don't find a factor. |
I've hit the c186 blocker @i12601 with a fair bit of ECM, but it didn't crack. Running GNFS on it now.
|
I'm now the second person to know that the C158 factors as
235135428415733585976439325229520790520225044469128317898640986440315363425440537 * 193342839483052979098666287649581036752342433417784528705948363890824381981587

CADO log from the job: [URL]https://pastebin.com/3XKCNAxr[/URL] |
On the current blocker:
[CODE]Fri Oct 22 00:23:35 2021  Msieve v. 1.54 (SVN Unversioned directory)
Fri Oct 22 00:23:35 2021  random seeds: 5cea0abd 0b0b70d1
Fri Oct 22 00:23:35 2021  factoring 555958432502795536088243164830775698917032751414518213911278171078047388114270579974159891342877028645125852716894998202908925879207990171621524796676210279452917835825942186976047219757 (186 digits)
Fri Oct 22 00:23:36 2021  no P-1/P+1/ECM available, skipping
Fri Oct 22 00:23:36 2021  commencing number field sieve (186-digit input)
Fri Oct 22 00:23:36 2021  R0: -945617671864395719846538528725581745
Fri Oct 22 00:23:36 2021  R1: 929931709079663768101
Fri Oct 22 00:23:36 2021  A0: -8772287074681587792891229961468094852568656
Fri Oct 22 00:23:36 2021  A1: 323562416604841464967297736760232146
Fri Oct 22 00:23:36 2021  A2: 277422772718715127786474098251
Fri Oct 22 00:23:36 2021  A3: -901626220424877727177
Fri Oct 22 00:23:36 2021  A4: -492746228506694
Fri Oct 22 00:23:36 2021  A5: 1470600
Fri Oct 22 00:23:36 2021  skew 1.00, size 3.410e-18, alpha -6.359, combined = 1.531e-16 rroots = 5
Fri Oct 22 00:23:36 2021  
Fri Oct 22 00:23:36 2021  commencing linear algebra
Fri Oct 22 00:23:36 2021  using VBITS=128
Fri Oct 22 00:23:40 2021  read 30734747 cycles
Fri Oct 22 00:25:04 2021  cycles contain 103545122 unique relations
Fri Oct 22 00:31:43 2021  read 103545122 relations
Fri Oct 22 00:35:03 2021  using 20 quadratic characters above 4294917295
Fri Oct 22 00:44:20 2021  building initial matrix
Fri Oct 22 01:10:22 2021  memory use: 14734.4 MB
Fri Oct 22 01:10:39 2021  read 30734747 cycles
Fri Oct 22 01:10:45 2021  matrix is 30734569 x 30734747 (15552.1 MB) with weight 4703455832 (153.03/col)
Fri Oct 22 01:10:45 2021  sparse part has weight 3708075492 (120.65/col)
Fri Oct 22 01:21:13 2021  filtering completed in 2 passes
Fri Oct 22 01:21:20 2021  matrix is 30733618 x 30733796 (15552.0 MB) with weight 4703413066 (153.04/col)
Fri Oct 22 01:21:20 2021  sparse part has weight 3708063906 (120.65/col)
Fri Oct 22 01:24:50 2021  matrix starts at (0, 0)
Fri Oct 22 01:24:55 2021  matrix is 30733618 x 30733796 (15552.0 MB) with weight 4703413066 (153.04/col)
Fri Oct 22 01:24:55 2021  sparse part has weight 3708063906 (120.65/col)
Fri Oct 22 01:24:55 2021  saving the first 112 matrix rows for later
Fri Oct 22 01:25:02 2021  matrix includes 128 packed rows
Fri Oct 22 01:25:10 2021  matrix is 30733506 x 30733796 (14426.1 MB) with weight 3658634078 (119.04/col)
Fri Oct 22 01:25:10 2021  sparse part has weight 3412922188 (111.05/col)
Fri Oct 22 01:25:11 2021  using GPU 0 (NVIDIA A100-SXM4-40GB)
Fri Oct 22 01:25:11 2021  selected card has CUDA arch 8.0
Fri Oct 22 01:30:54 2021  commencing Lanczos iteration
Fri Oct 22 01:30:56 2021  memory use: 30728.1 MB
Fri Oct 22 01:31:02 2021  linear algebra at 0.0%, ETA 27h56m
Fri Oct 22 01:31:03 2021  checkpointing every 1230000 dimensions[/CODE] |
Try using VBITS=384 (or 256 if 384 uses too much memory). It should be a bit faster.
|
[QUOTE=frmky;591316]Try using VBITS=384 (or 256 if 384 uses too much memory). It should be a bit faster.[/QUOTE]
Thanks -- good to know for the future, though I probably won't restart this job now. (I asked over in the msieve LA thread how to determine reasonable values for VBITS and block_nzz, other than "trial and error" :smile:) |
Got a few more terms done. Unfortunately, looks like there's a 2^2 · 3^3 · 7 now. Working on ECM for the c196 @i12606.
|
I am working on the c196 at index 12606 now.
|
[QUOTE=ryanp;592547]I am working on the c196 at index 12606 now.[/QUOTE]
[PASTEBIN]cQZ21qTf[/PASTEBIN] |
I've added a few iterations; it's now at C156@i12609, and t40 was done on it.
|
By the way, 4788 has reached a milestone: it is the first aliquot sequence to reach 225 digits. It is now at index 12624, and the current term has 228 digits.
|
The c214 blocker at i12624 has not yielded to 18K curves @ B1=85e7. I am continuing to run ECM on it.
|
I am not actively working on this sequence now. Would consider running GNFS if there's an effort to find a good poly / params.
|
I imagine that after the C221 Cunningham gets the poly/params treatment we will come after this one- likely early in the new year.
|