mersenneforum.org Reservations

2016-07-02, 08:25   #1
ET_
Banned

"Luigi"
Aug 2002
Team Italia

100101010000₂ Posts

Reservations

Reserving N=35, k=470P-500P. My PC with the CUDA environment broke again. Range completed up to k=480P.

Last fiddled with by ET_ on 2016-11-09 at 14:03
2016-12-04, 23:46   #2
rogue

"Mark"
Apr 2003
Between here and the

13500₈ Posts

Per my message in the other thread, I have created this file of completed ranges. It is easy enough to publish as HTML, as I wrote a program to generate it.

I have also created a file of gaps. The ranges.txt file shows what it would look like if you build the ranges as I suggested. The gaps file is a list of gaps based upon those ranges. These gaps could be due to cherry-picking, to people not being careful, or to people simply not caring. Fortunately there are few gaps for n < 10000, and it should be easy to create "most wanted" ranges from them. I do not know how much CPU/GPU power is needed to address those most wanted ranges.
Attached Files
 ranges.txt (7.7 KB, 249 views) gaps.7z (12.1 KB, 117 views)
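The merge-and-gap computation rogue describes (his actual implementation is the C program attached later in the thread) can be sketched roughly as follows; the interval representation and the function names here are illustrative assumptions, not taken from his program:

```python
def merge_ranges(ranges):
    """Merge overlapping or touching [start, end] ranges (e.g. completed
    k-ranges for a fixed n) into a minimal sorted list of ranges."""
    merged = []
    for start, end in sorted(ranges):
        if merged and start <= merged[-1][1]:
            # Overlaps (or abuts) the previous range: extend it.
            merged[-1][1] = max(merged[-1][1], end)
        else:
            merged.append([start, end])
    return merged

def find_gaps(ranges):
    """Return the uncovered spans between consecutive merged ranges --
    the "gaps" that cherry-picking or careless reporting leaves behind."""
    merged = merge_ranges(ranges)
    return [(a[1], b[0]) for a, b in zip(merged, merged[1:])]

# Example: two completed stretches with a hole between them.
print(find_gaps([(0, 100), (250, 300), (90, 120)]))  # [(120, 250)]
```

Sorting first makes each range only ever extend or follow the previous merged one, so a single pass suffices.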

2016-12-05, 11:51   #3
ET_
Banned

"Luigi"
Aug 2002
Team Italia

2⁵·149 Posts

Quote:
 Originally Posted by rogue Per my message in the other thread, I have created this file of completed ranges. It is easy enough to publish as HTML, as I wrote a program to generate it. I have also created a file of gaps. The ranges.txt file shows what it would look like if you build the ranges as I suggested. The gaps file is a list of gaps based upon those ranges. These gaps could be due to cherry-picking, to people not being careful, or to people simply not caring. Fortunately there are few gaps for n < 10000, and it should be easy to create "most wanted" ranges from them. I do not know how much CPU/GPU power is needed to address those most wanted ranges.
Hi Mark, I am somewhat distracted at the moment, as I am planning a move from Rome to Bergamo, and am having trouble following your idea about ranges...

You provided me with the ranges.txt file and the gaps.txt file.

Ranges.txt represents the k limit to which each range should be extended: that limit will then be used to assign the next fixed-k ranges.

Gaps.txt represents the "holes" in that scenario, holes that should be filled as soon as possible to guarantee an appropriate reservation policy.

The ranges in the gaps.txt file should be assigned and completed through the "most wanted" page.

It would be desirable for each gap to receive the CPU power needed for completion.

Did I get the task right?

Luigi

Last fiddled with by ET_ on 2016-12-05 at 11:52

2016-12-05, 14:07   #4
rogue

"Mark"
Apr 2003
Between here and the

2⁶·3·31 Posts

Quote:
 Originally Posted by ET_ Did I get the task right?
Yes.

I hope that my suggestions are reasonable. I would like to believe that they would save you a lot of time moving forward.

I have a couple of other minor suggestions. First, for any n < 28 it appears that ECM is the best means of finding new factors. You might want to spell that out. Based upon the B1 used and the number of curves, it is unlikely that any of those numbers have unknown factors under xx digits, which is well beyond the limits of trial factoring. Second, I don't think that you should waste your time managing any n >= 200000. That is what PrimeGrid is working on (for k < 10000). You can show completed work (if you want), but don't manage reservations. Most projects enforce some limits so that the project manager can maintain their sanity. You should do the same.

I have attached the program that I used to generate those files (change the extension from .txt to .c). I took the data from your website and converted it to CSV (also attached), but excluded all ECM work. If anyone wants to verify the correctness of the results, feel free to do so. It runs in a few minutes.
Attached Files
 completed.7z (21.2 KB, 82 views) flist.txt (7.2 KB, 941 views)

2016-12-05, 14:39   #5
ET_
Banned

"Luigi"
Aug 2002
Team Italia

100101010000₂ Posts

Quote:
 Originally Posted by rogue I have a couple of other minor suggestions. First, for any n < 28 it appears that ECM is the best means at finding new factors. You might want to spell that out. Based upon the B1 used and the number of curves, it is unlikely that any of those numbers have unknown factors under xx digits which is well beyond the limits of trial factoring. Second, I don't think that you should waste your time managing any n >= 200000. That is what PrimeGrid is working on (for k < 10000). You can show completed work (if you want), but don't manage reservations. Most projects enforce some limits so that the project manager can maintain their sanity. You should do the same.
Suggestions accepted.
Apart from Payam Samidoost, I was aware of PrimeGrid's work, and stopped reservations above N > 100,000.
And I told Patrick that his work was appreciated but not strictly necessary. He simply answered that 99% was not enough for him.

Another consideration came to my mind...
Higher Ns (say above 6,000) would require a presieve of the ks and later testing with pfgw: taking them one after the other would be a waste of time.

Do you think whoever reserves a range should do their own sieving, or do you favor one big, global presieve of the ranges, so as to offer selected candidates instead of complete ranges?

OK, time to do some more development. I will add some description lines to the new download page, and then start the new ranges section.

Luigi

2016-12-05, 14:53   #6
rogue

"Mark"
Apr 2003
Between here and the

2⁶×3×31 Posts

Quote:
 Originally Posted by ET_ Another consideration came to my mind... Higher Ns (say above 6,000) would require a presieve of the ks and later testing with pfgw: taking them one after the other would be a waste of time. Do you think whoever reserves a range should do their own sieving, or do you favor one big, global presieve of the ranges, so as to offer selected candidates instead of complete ranges?
I haven't thought about that and I don't have much of an opinion. If anything someone could pre-sieve the gaps for n >= 6000.

2016-12-06, 14:22   #7
rogue

"Mark"
Apr 2003
Between here and the

13500₈ Posts

I made a change to summarize the gaps more clearly. This makes the gaps appear to be much more manageable. Can you estimate the time for each range (presuming optimal software is used)?
Code:
172-179          281000000000000  350000000000000
6151-6199        400000000        2500000000
10000            253000000        269000000
10001            50000000         269000000
10003-10999      50000000         269000000
14501-14999      30000000         70000000
80001-80999      80000            100000
114000-119999    40000            99999
120001-129999    30000            40000
130001-139999    20000            30000
160001-169999    15000            20000
180000           15000            30000
180001-181742    13000            30000
181744-189999    13000            30000
Last fiddled with by rogue on 2016-12-06 at 14:22
2016-12-06, 15:11   #8
ET_
Banned

"Luigi"
Aug 2002
Team Italia

2⁵×149 Posts

Code:
172-179          281000000000000  350000000000000  615 days with Feromant_CUDA
6151-6199        400000000        2500000000       1500 days with a single-core pmfs
10000            253000000        269000000        19 hours with a single-core pmfs
10001            50000000         269000000        11 days with a single-core pmfs
10003-10999      50000000         269000000        5913 days with ppsieve_cuda & pfgw
14501-14999      30000000         70000000         640 days with ppsieve_cuda & pfgw
80001-80999      80000            100000           150 days with ppsieve_cuda & pfgw
114000-119999    40000            99999            12 years with ppsieve_cuda & pfgw
120001-129999    30000            40000            4 years with ppsieve_cuda & pfgw
130001-139999    20000            30000            5 years with ppsieve_cuda & pfgw
160001-169999    15000            20000            3 years with ppsieve_cuda & pfgw
180000           15000            30000            no data available
180001-181742    13000            30000            no data available
181744-189999    13000            30000            no data available
2016-12-06, 16:36   #9
rogue

"Mark"
Apr 2003
Between here and the

2⁶×3×31 Posts

Quote:
 Originally Posted by ET_
Code:
172-179          281000000000000  350000000000000  615 days with Feromant_CUDA
6151-6199        400000000        2500000000       1500 days with a single-core pmfs
10000            253000000        269000000        19 hours with a single-core pmfs
10001            50000000         269000000        11 days with a single-core pmfs
10003-10999      50000000         269000000        5913 days with ppsieve_cuda & pfgw
14501-14999      30000000         70000000         640 days with ppsieve_cuda & pfgw
80001-80999      80000            100000           150 days with ppsieve_cuda & pfgw
114000-119999    40000            99999            12 years with ppsieve_cuda & pfgw
120001-129999    30000            40000            4 years with ppsieve_cuda & pfgw
130001-139999    20000            30000            5 years with ppsieve_cuda & pfgw
160001-169999    15000            20000            3 years with ppsieve_cuda & pfgw
180000           15000            30000            no data available
180001-181742    13000            30000            no data available
181744-189999    13000            30000            no data available
Fortunately some of those should be really easy to knock off. Sounds like a task for Gary. What do you mean by "no data available"? The 180000 line for 15000 to 30000 should be doable in less than a day. Are the pfgw times for a single core?

Last fiddled with by rogue on 2016-12-06 at 17:15

2016-12-06, 16:38   #10
feromant

"Roman"
Dec 2016
Everywhere

1A₁₆ Posts

Quote:
 Originally Posted by ET_
Code:
172-179          281000000000000  350000000000000  615 days with Feromant_CUDA
6151-6199        400000000        2500000000       1500 days with a single-core pmfs
10000            253000000        269000000        19 hours with a single-core pmfs
10001            50000000         269000000        11 days with a single-core pmfs
10003-10999      50000000         269000000        5913 days with ppsieve_cuda & pfgw
14501-14999      30000000         70000000         640 days with ppsieve_cuda & pfgw
80001-80999      80000            100000           150 days with ppsieve_cuda & pfgw
114000-119999    40000            99999            12 years with ppsieve_cuda & pfgw
120001-129999    30000            40000            4 years with ppsieve_cuda & pfgw
130001-139999    20000            30000            5 years with ppsieve_cuda & pfgw
160001-169999    15000            20000            3 years with ppsieve_cuda & pfgw
180000           15000            30000            no data available
180001-181742    13000            30000            no data available
181744-189999    13000            30000            no data available
I don't agree with the assessment of the execution time of the first range. Running 24/7, Feromant_CUDA will require approximately 35 days. But mmff will run faster, taking approximately 15 days.

2016-12-06, 16:52   #11
ET_
Banned

"Luigi"
Aug 2002
Team Italia

11240₈ Posts

Quote:
 Originally Posted by feromant I don't agree with the assessment of execution time of the first range. 24/7 Feromant_CUDA will require approximately 35 days. But mmff will count faster, it will take approximately 15 days
From the data you sent me, it seems that Feromant_CUDA processes about 9.1 million k per second at N=175.

(350000000000000 − 281000000000000) × (179 − 172) = 4.83×10¹⁴ total k
4.83×10¹⁴ ÷ 9100000 ≈ 53076923 seconds

53076923 ÷ 86400 ≈ 614.3 days
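As a sanity check, the arithmetic above can be reproduced in a few lines; the 9.1 million k/s rate and the (179 − 172) factor are taken from the post as-is:

```python
# Reproduce the estimate: width of the k range times the exponent factor,
# divided by the quoted Feromant_CUDA throughput, converted to days.
k_span = 350_000_000_000_000 - 281_000_000_000_000  # k range width (6.9e13)
n_factor = 179 - 172                                # factor used in the post
rate = 9_100_000                                    # k per second at N=175
total_k = k_span * n_factor                         # 4.83e14 total k
days = total_k / rate / 86400
print(round(days, 1))  # 614.3
```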

Where did I go wrong? My bad: I read the Feromant values instead of the Feromant_CUDA ones. Sorry Roman, you are right, and 35 days is correct.

mmff can't be used for such N and k because of the bit depth of its kernels.

Luigi

Last fiddled with by ET_ on 2016-12-06 at 17:00 Reason: Roman is right.

