#1123
Sep 2008
Kansas
F71₁₆ Posts
A few more.
Code:
371109048883173906619284371248459123
8567407566218534935440782493276308384486426482160469595589
4450384264035380012725984900772800153
2109510745446146046741186461434965806278411400457727663363
10756691626332758346172518615651759047
54179099903141674033270550309735762652627453453271
20878280521846713722956508339969
35416899049033957857987739314221030729786082866790029239
#1124
Jan 2009
Bilbao, Spain
317 Posts
c100-109 done.
And a few more.

3067 54686870963 12 1304692422293518773677822217791225456137174139385046917146588558843404891276520320434087345874114046894255169721
prp62 = 17822787990167923221400533797555201371279728031664144319609107
prp50 = 73203610064444591289651094846018899918962169603203

1931 22145722742911168829 6 1417649430068875921280053621195314937296403716003645348864287926324863039061554621284160173519686131880305459979
prp42 = 108598843827050694790169478222240888146329
prp71 = 13054001130311813880595488968505436989763011548258495861651533424246851

10187 1142231712199559624771 6 1530212646561805666485600034376744610997965285714276839012798855169106089676066080838613345496155964803568418663
prp61 = 1093526053954783000933904990819892157647388991915329397874811
prp52 = 1399338077979694423495426897486966755170845805020933

1065 5546672454527536383650885469316127 4 1726121950833355261458606172074293273290840103182518695823824913633107975283299105877018384469738088378890192381
prp70 = 2409139895888698751348420154246093612489568966969043366723693270137181
prp42 = 716488882102304186274439251977283685799201

2193 2605561471 12 1847310989152900320657077019844587986258378711483102059227662879189064601163470631958355337991599472984259053341
prp47 = 74137230193277552205279442296745274402395055929
prp65 = 24917453543070275119041899921175033809517171322643699835827706629
#1125
Just call me Henry
"David"
Sep 2007
Liverpool (GMT/BST)
3×23×89 Posts
Updated files uploaded
#1126
Just call me Henry
"David"
Sep 2007
Liverpool (GMT/BST)
3·23·89 Posts
I have updated the medium file with about 30k factors removed (from 150k numbers).
#1127
Jan 2009
Bilbao, Spain
13D₁₆ Posts
c110-111 done.
#1128
Sep 2008
Kansas
59·67 Posts
A few more.
Code:
357991222974836890042832840559307890482891
1694220573080162850251105610217703
669196490195284646109228899096931907667284721090317099828697
1915742101513922900042175060480577
3219155779467108645111193271092367
34730367448407443857161907667265300277
18452071754045928411684794026561434912747001687
16444768780795436001792171053500014283507613158831
9298154324683459231917767243260112173117186536299
64601325935284016849064403
14655542336298970599448036681686137624048659
11918036678190078845106963721483
1782598614596357739881691151713022003
#1129
"Curtis"
Feb 2005
Riverside, CA
2×2,927 Posts
Testing on 16f with only prime special-q shows 34/32 to be about 10% faster than 33/31, and rlim/alim of 316/134 was faster than 268/134, 200/200, or 134/268, with 3 large primes on the r side and 2 on the a side. Taking those fastest params from 16f testing to 16e was 4 times slower at Q=40M!! On small Q, the very large rlim is important to yield. This suggests CADO is a good plan for this job.

Then I compared CADO sieving on A=30 and I=16, both with adjust_strategy = 2. At Q=30M, A=30 is almost exactly twice as fast as I=16. Yield is 4.6 with A=30, around 7.5 with I=16. I am now testing I=15 as well as higher Q values. My initial forecast is 10 machine-months on the host machine, or 4.5-5 Ryzen 5950 machine-months.

I'm willing to do about 25% of the sieving for this job as well as the post-processing, but we do not yet have enough interest in the job to have the other 70% covered. Kruoli offered a minimum of 10% of the job (translated from 8 Zen core-months). RichD offered to help; a guess of a quad-core for the duration would be 5% of the job minimum. I'll keep testing to determine the best params and refine my work estimate, but I don't think we have enough interest yet to run the job.

Edit: One solution is to run small Q as a team on CADO, and larger Q on f-small at nfs@home. We reap the higher speeds of CADO on small Q relative to 16e, while only doing the amount of work we're collectively comfortable with. This is the approach I've been using on the jobs I send to f-small already: sieve the smallest Q locally and use the public resource where it is fastest.

Last fiddled with by VBCurtis on 2023-05-21 at 19:36
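For anyone wanting to reproduce this kind of A=30 vs. I=16 test-sieve, a minimal sketch of the two las runs is below. It assumes a built CADO-NFS tree and a polynomial file; the paths, the short special-q range, the thread count, and the factor-base file name are placeholders, and the lim/lpb/mfb numbers are only illustrative values echoing ones discussed in this thread, not the final job parameters.
Code:
# Minimal test-sieve sketch (paths, q-range, thread count and fb file are assumptions)
LAS=~/cado-nfs/build/$(hostname)/sieve/las   # adjust to your build directory
POLY=job.poly                                # placeholder polynomial file
FB1=job.roots1.gz                            # side-1 factor base built with makefb (assumed workflow)

# Run 1: A=30 sieve region, adjust_strategy 2
$LAS -poly $POLY -fb1 $FB1 -q0 30000000 -q1 30010000 \
     -A 30 -adjust-strategy 2 \
     -lim0 316000000 -lim1 134000000 \
     -lpb0 34 -lpb1 32 -mfb0 102 -mfb1 64 \
     -t 8 -out testA30.gz

# Run 2: I=16 sieve region, everything else identical
$LAS -poly $POLY -fb1 $FB1 -q0 30000000 -q1 30010000 \
     -I 16 -adjust-strategy 2 \
     -lim0 316000000 -lim1 134000000 \
     -lpb0 34 -lpb1 32 -mfb0 102 -mfb1 64 \
     -t 8 -out testI16.gz

# Yield = relations / special-q processed; comparing that and the elapsed time of the
# two runs reproduces the speed-vs-yield trade-off described above.
Option spellings can differ between CADO-NFS versions, so treat las -h on your own build as the authority.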
#1130
|
Sep 2008
Kansas
59×67 Posts
I like the idea of local (team) sieving on CADO and then including NFS@Home, but the queue wait is quite long. Maybe David will have a comment in the morning.
#1131
|
Just call me Henry
"David"
Sep 2007
Liverpool (GMT/BST)
1011111111101₂ Posts
I just have 1 skylake quad, although I can't really fully commit it for this amount of time as that leaves me with nothing else.
I quite like the idea of doing the CADO/nfs@home split VBCurtis mentioned.

I have been doing some fiddling with the batch function of CADO. My experiments indicate that there are quite a few more relations to be found with expanded parameters. Unfortunately, pushing the batch factoring to 34 bits is more difficult, as the code can't write the batch product to a file at that size due to GMP limitations. Because of this, a limit of 33 is imposed, although if that check is turned off (by recompiling) and the batch product isn't saved, it still works fine. Due to this issue, my experiments described below have the batch bound limited to 33 bits.

For 10 special-q at Q=30M with A=30, lim0=300e6, lim1=150e6 (edit: I discovered my fb limits were swapped; they were actually lim0=150e6, lim1=300e6 by mistake):

No batch: lpb 34/32, mfb 102/64: 782 relations
Batch to 33/32 with mfb0 102 after batch (at least 1 factor must be found by batch): lpb 34/32, mfb 136/64: 1149 relations
No batch: lpb 33/32, mfb 99/64: 463 relations
Batch to 33/32: lpb 33/32, mfb 132/64: 629 relations

I will experiment a bit more this evening (including batch to 34/32). Timings aren't quite keeping up with the number of additional relations currently; however, I suspect this may be tweaked to be beneficial. Also, the batch factorisation doesn't scale linearly at only 10 special-q, and more would be more efficient. The best way to use this would probably be to use -batch-print-survivors and do as large chunks as possible. One concern is that this may not work with the client/server script.

Last fiddled with by henryzz on 2023-05-22 at 08:52
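A rough sketch of what the "no batch" versus "batch" runs above might look like on the command line follows. The batch-bound option names, the q-range, the thread count and the file names are assumptions (verify against las -h on your build); only the lim/lpb/mfb values and the -batch-print-survivors idea come from the post itself.
Code:
# Hypothetical comparison of plain vs. batch cofactorisation over a handful of special-q
LAS=~/cado-nfs/build/$(hostname)/sieve/las   # adjust to your build
POLY=job.poly                                # placeholder polynomial file
FB1=job.roots1.gz                            # side-1 factor base (assumed workflow)

# Baseline, no batch: lpb 34/32, mfb 102/64 (3 large primes on side 0)
$LAS -poly $POLY -fb1 $FB1 -q0 30000000 -q1 30000040 \
     -A 30 -lim0 150000000 -lim1 300000000 \
     -lpb0 34 -lpb1 32 -mfb0 102 -mfb1 64 \
     -t 8 -out nobatch.gz

# With batch cofactorisation: mfb0 raised to 136 so that larger cofactors survive
# to the batch step, batch bound 33/32 (these batch option names are assumptions)
$LAS -poly $POLY -fb1 $FB1 -q0 30000000 -q1 30000040 \
     -A 30 -lim0 150000000 -lim1 300000000 \
     -lpb0 34 -lpb1 32 -mfb0 136 -mfb1 64 \
     -batch -batchlpb0 33 -batchlpb1 32 \
     -t 8 -out batch.gz

# For production-scale runs, -batch-print-survivors (as mentioned above) dumps the
# surviving cofactors so the batch smoothness test can be run over large chunks at once.
Comparing relation counts in the two output files gives the kind of numbers quoted above; as noted, whether the extra relations are worth the extra time is still being tested.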
#1132
|
Jun 2012
5×11×73 Posts
I’m willing to pitch in on a group sieving effort. My biggest machine is a 2x 12-core Haswell with 128 GB RAM. If my other workload allows, I will throw a second i7 into the mix. But I agree that more participants are needed or this project could take months to complete.
Will watch this thread for the particulars.
#1133
"Curtis"
Feb 2005
Riverside, CA
2×2,927 Posts
I've not heard of the option you're experimenting with, and I am quite curious.

If we get three weeks from Sean and two from David, we have enough workers to do half this job on CADO within 3-4 weeks. I give exams this week, but I'll find some time to test-sieve with 16e and find the best Q-range to get half the job done on nfs@home. Perhaps, since it would be a very short job by f-small standards, we might get to skip ahead of a couple of other jobs and have both halves of the sieving done by early July.