mersenneforum.org > Factoring Projects > Factoring
Old 2023-05-18, 20:54   #1123
RichD
 
A few more.
Code:
371109048883173906619284371248459123
8567407566218534935440782493276308384486426482160469595589
4450384264035380012725984900772800153
2109510745446146046741186461434965806278411400457727663363
10756691626332758346172518615651759047
54179099903141674033270550309735762652627453453271
20878280521846713722956508339969
35416899049033957857987739314221030729786082866790029239
Old 2023-05-19, 12:24   #1124
mataje
 
c100-109 done.
And a few more.

3067 54686870963 12 1304692422293518773677822217791225456137174139385046917146588558843404891276520320434087345874114046894255169721
prp62 = 17822787990167923221400533797555201371279728031664144319609107
prp50 = 73203610064444591289651094846018899918962169603203

1931 22145722742911168829 6 1417649430068875921280053621195314937296403716003645348864287926324863039061554621284160173519686131880305459979
prp42 = 108598843827050694790169478222240888146329
prp71 = 13054001130311813880595488968505436989763011548258495861651533424246851

10187 1142231712199559624771 6 1530212646561805666485600034376744610997965285714276839012798855169106089676066080838613345496155964803568418663
prp61 = 1093526053954783000933904990819892157647388991915329397874811
prp52 = 1399338077979694423495426897486966755170845805020933

1065 5546672454527536383650885469316127 4 1726121950833355261458606172074293273290840103182518695823824913633107975283299105877018384469738088378890192381
prp70 = 2409139895888698751348420154246093612489568966969043366723693270137181
prp42 = 716488882102304186274439251977283685799201

2193 2605561471 12 1847310989152900320657077019844587986258378711483102059227662879189064601163470631958355337991599472984259053341
prp47 = 74137230193277552205279442296745274402395055929
prp65 = 24917453543070275119041899921175033809517171322643699835827706629
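
A quick way to spot-check entries in this format is to multiply the reported prp factors back together and run a probable-prime test on each. A minimal sketch in plain Python (no external libraries), using the first entry above as the embedded example:
Code:
import random

def is_probable_prime(n, rounds=20):
    """Miller-Rabin probable-prime test."""
    if n < 2:
        return False
    for p in (2, 3, 5, 7, 11, 13, 17, 19, 23, 29, 31, 37):
        if n % p == 0:
            return n == p
    d, s = n - 1, 0
    while d % 2 == 0:
        d //= 2
        s += 1
    for _ in range(rounds):
        a = random.randrange(2, n - 1)
        x = pow(a, d, n)
        if x in (1, n - 1):
            continue
        for _ in range(s - 1):
            x = pow(x, 2, n)
            if x == n - 1:
                break
        else:
            return False
    return True

# First entry from this post: the header line ends with the composite,
# the following lines give the reported prp factors.
entry = """\
3067 54686870963 12 1304692422293518773677822217791225456137174139385046917146588558843404891276520320434087345874114046894255169721
prp62 = 17822787990167923221400533797555201371279728031664144319609107
prp50 = 73203610064444591289651094846018899918962169603203"""

lines = entry.splitlines()
composite = int(lines[0].split()[-1])
factors = [int(line.split("=")[1]) for line in lines[1:]]

product = 1
for f in factors:
    product *= f

print("factors multiply back to the composite:", product == composite)
for f in factors:
    print(len(str(f)), "digits, probable prime:", is_probable_prime(f))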
Attached file: c100-109.txt (9.9 KB)
Old 2023-05-19, 18:59   #1125
henryzz
Updated files uploaded
Old 2023-05-21, 12:19   #1126
henryzz
I have updated the medium file with about 30k factors removed (from 150k numbers).
Old 2023-05-21, 13:54   #1127
mataje
 
c110-111 done.
Attached file: c110-111.txt (1.9 KB)
Old 2023-05-21, 17:27   #1128
RichD
 
A few more.
Code:
357991222974836890042832840559307890482891
1694220573080162850251105610217703
669196490195284646109228899096931907667284721090317099828697
1915742101513922900042175060480577
3219155779467108645111193271092367
34730367448407443857161907667265300277
18452071754045928411684794026561434912747001687
16444768780795436001792171053500014283507613158831
9298154324683459231917767243260112173117186536299
64601325935284016849064403
14655542336298970599448036681686137624048659
11918036678190078845106963721483
1782598614596357739881691151713022003
Old 2023-05-21, 19:24   #1129
VBCurtis
 
Quote:
Originally Posted by charybdis
If you're using CADO, I suspect this may be in the range where A=30 with adjust-strategy=2 is fastest. Large prime bounds skewed towards the rational side, 33/31 or 34/32.

My cores are busy with Cunninghams so will not contribute to this - @RichD that'll help you get a higher percentage
Charybdis is (per usual) accurate with his forecast.

Testing on 16f with only prime special-q shows 34/32 to be about 10% faster than 33/31, and rlim/alim of 316/134 to be faster than 268/134, 200/200, or 134/268; 3 large primes on the rational side, 2 on the algebraic side.

Taking those fastest params from the 16f testing to 16e was 4 times slower at Q=40M! On small Q, the very large rlim is important for yield. This suggests CADO is a good plan for this job.

Then I compared CADO sieving with A=30 and I=16, both with adjust_strategy = 2. At Q=30M, A=30 is almost exactly twice as fast as I=16. Yield is 4.6 with A=30, around 7.5 with I=16.
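
(Reading "twice as fast" as sec/rel, those yields imply A=30 spends only about 4.6/(2×7.5) ≈ 0.31 of the per-special-q time of I=16, but needs roughly 7.5/4.6 ≈ 1.6 times as much Q-range to collect the same number of relations.)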

I am now testing I=15 as well as higher Q values. My initial forecast is 10 machine-months on the host machine, or 4.5-5 Ryzen 5950 machine-months. I'm willing to do about 25% of the sieving for this job as well as the post-processing, but we do not yet have enough interest in the job to have the other 70% covered.

Kruoli offered a minimum of 10% of the job (translated from 8 zen core-months). RichD offered to help; a guess of a quad-core for the duration would be 5% of the job at minimum.
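
Tallying the stated minimums, that's roughly 25% + 10% + 5% ≈ 40% of the sieving spoken for, so well over half is still unclaimed.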

I'll keep testing to determine best params and refine my work estimate, but I don't think we have enough interest yet to run the job.
Edit: One solution is to run small Q as a team on CADO, and larger Q on f-small at nfs@home. We reap the higher speed of CADO on small Q relative to 16e, while only doing the amount of work we're collectively comfortable with. This is the approach I've been using on the jobs I send to f-small already: sieve the smallest Q locally, and use the public resource where it is fastest.

Old 2023-05-22, 02:56   #1130
RichD
 
Quote:
Originally Posted by VBCurtis
RichD offered to help, a guess of a quad-core for the duration would be 5% of the job minimum.
I have two Sandy Bridge boxes, but one is so old it keeps rebooting or freezing up. Say 1.5 quad-core (no HT) machines, but still much less than 5%.

I like the idea of local (team) sieving on CADO and then including NFS@Home, but the queue wait is quite long. Maybe David will have a comment in the morning.
Old 2023-05-22, 07:28   #1131
henryzz
I just have 1 Skylake quad, although I can't really fully commit it for this amount of time, as that leaves me with nothing else.

I quite like the idea of doing the CADO/nfs@home split VBCurtis mentioned.

I have been doing some fiddling with the batch function of CADO. My experiments indicate that there are quite a few more relations to be found with expanded parameters.
Unfortunately, pushing the batch factoring to 34 bits is more difficult, as CADO can't write the batch product to a file at that size due to GMP limitations. Because of this, a limit of 33 is imposed, although if that check is turned off (by recompiling) and the batch product isn't saved, it still works fine. Due to this issue, the experiments described below have the batch limited to 33 bits.

For 10 special-q at Q=30M with A=30, lim0=300e6, lim1=150e6 (edit: I discovered my fb limits were swapped; they were actually lim0=150e6, lim1=300e6 by mistake):
No batch: lpb 34/32 mfb 102/64: 782 relations
batch to 33/32 with mfb0 102 after batch (at least 1 factor must be found by batch): lpb 34/32 mfb 136/64: 1149 relations
No batch: lpb 33/32 mfb 99/64: 463 relations
batch to 33/32: lpb 33/32 mfb 132/64: 629 relations
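
(If I read these right, the mfb values are the usual multiples of the large-prime bounds: 102 = 3×34 and 99 = 3×33 allow three large primes on side 0, 64 = 2×32 allows two on side 1, and the batched runs relax side 0 to 136 = 4×34 and 132 = 4×33, i.e. room for a fourth large prime once the batch step has removed a factor.)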

I will experiment a bit more this evening (including batch to 34/32). Timings aren't quite keeping up with the number of additional relations currently; however, I suspect this can be tweaked to be beneficial. Also, the batch factorisation doesn't scale linearly at only 10 special-q; more would be more efficient. The best way to use this would probably be to use -batch-print-survivors and process as large chunks as possible. One concern is that this may not work with the client/server script.
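
For anyone who hasn't looked at it: as I understand it, the batch option is Bernstein-style batch cofactorisation, testing many sieve survivors at once against a precomputed product of the primes below the batch bound. A toy sketch of the underlying idea in plain Python (nothing like CADO's actual implementation, which works with product/remainder trees on the real cofactors):
Code:
from math import gcd

def primes_up_to(bound):
    """Sieve of Eratosthenes."""
    sieve = bytearray([1]) * (bound + 1)
    sieve[0] = sieve[1] = 0
    for p in range(2, int(bound ** 0.5) + 1):
        if sieve[p]:
            sieve[p * p::p] = bytes(len(range(p * p, bound + 1, p)))
    return [i for i, flag in enumerate(sieve) if flag]

def strip_smooth_part(n, prime_product):
    """Divide out of n every prime that divides prime_product;
    return the remaining cofactor (1 if n was fully smooth)."""
    g = gcd(prime_product % n, n)
    while g > 1:
        while n % g == 0:
            n //= g
        g = gcd(g, n)
    return n

B = 1000                       # toy smoothness bound; think 2^33 in the real thing
P = 1
for p in primes_up_to(B):
    P *= p                     # one big product, computed once for the whole batch

survivors = [2**4 * 3 * 997 * 1009,   # smooth except for one prime just above B
             7919 * 104729,           # no factors below B at all
             2 * 3 * 5 * 7 * 11]      # fully B-smooth
for n in survivors:
    cof = strip_smooth_part(n, P)
    print(n, "->", "smooth" if cof == 1 else "cofactor %d left" % cof)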

Old 2023-05-22, 11:38   #1132
swellman
 
I’m willing to pitch in on a group sieving effort. My biggest machine is a 2x 12-core Haswell with 128 GB RAM. If my other workload allows, I will throw a second i7 into the mix. But I agree that more participants are needed, or this project could take months to complete.

Will watch this thread for the particulars.
Old 2023-05-22, 14:13   #1133
VBCurtis
 
Quote:
Originally Posted by henryzz
I just have 1 skylake quad, although I can't really fully commit it for this amount of time as that leaves me with nothing else.

I quite like the idea of doing the CADO/nfs@home split VBCurtis mentioned.

[....]

I will experiment a bit more this evening(including batch to 34/32). Timings aren't quite keeping up with the number of additional relations currently; however, I suspect that this may be tweaked to be beneficial. Also the batch factorisation doesn't scale linearly at only 10sq and more would be more efficient. The best way to use this would probably be to use -batch-print-survivors and do as large chunks as possible. One concern is that this may not work with the client/server script.
Yield isn't a hurdle on this job, since neither yield nor sec/rel changes much over the expected sieve region. I'm using small mfb's (97/62) for the same reason: sec/rel is about the same, but we will need fewer total relations.
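
(The usual trade-off behind that: allowing more or larger large primes raises the raw yield, but each relation then carries more large primes, so filtering needs more relations in total before the matrix can be built; clamping mfb keeps the relation target down at little cost when sec/rel barely moves.)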
I've not heard of the option you're experimenting with, and I am quite curious.

If we get three weeks from Sean and two from David, we have enough workers to do half this job on CADO within 3-4 weeks. I give exams this week, but I'll find some time to test-sieve with 16e and find the best Q-range for getting half the job done on nfs@home. Since it would be a very short job by f-small standards, perhaps we might get to skip ahead of a couple of other jobs and have both halves of the sieving done by early July.