hi,
I think that Prime95 v29.7b1 is buggy, and there is also a problem with LLR v3.8.22, so I will wait for better code ... and do more TF with mfaktc v0.21 in the meantime.
[QUOTE=lalera;513004]hi,
i think that prime95 v29.7b1 is buggy and there is also a problem with llr v3.8.22 so i will wait for better code ... and do more tf with mfaktc v0.21[/QUOTE] There are bugs for one specific type of work, but for Wagstaff PRP testing there is no problem. The Gerbicz error checking gives extra confidence.
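For readers unfamiliar with it, the idea behind the Gerbicz error check can be sketched in a few lines of Python. This is a toy illustration under assumed conventions (a type-1 PRP iteration x ← x² mod N starting from 3, with tiny N, p, and check interval L chosen only for the demo), not mprime's actual implementation:

```python
# Toy sketch of the Gerbicz error check (assumptions: type-1 PRP iteration
# x <- x^2 mod N starting from x0 = 3; L is the checkpoint interval).
# Keep d = product of every L-th residue.  Because x_{(t+1)L} = x_{tL}^(2^L),
# the invariant  d_new == x0 * d_old^(2^L) (mod N)  must hold at every
# checkpoint, and an error in the squarings breaks it with overwhelming
# probability.

def prp_with_gerbicz(N, p, L=8, x0=3):
    """Compute x0^(2^p) mod N, verifying consistency every L squarings."""
    x = x0
    d = x0                       # product of checkpoints so far: just x_0
    for i in range(1, p + 1):
        x = x * x % N            # one squaring step
        if i % L == 0:
            d_new = d * x % N    # fold the new checkpoint into the product
            if d_new != x0 * pow(d, 1 << L, N) % N:
                raise RuntimeError("Gerbicz check failed: computation error")
            d = d_new
    return x

# Demo on a tiny modulus: the result matches a direct powmod.
print(prp_with_gerbicz(2047, 20) == pow(3, 1 << 20, 2047))   # True
```

In real implementations the expensive pow(d, 2^L, N) verification is performed only rarely (e.g. every L² iterations), so its cost is negligible compared with the main squaring loop.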
Grand Prix 2,
How much sieve was done on this? At my pace my range will be completed within 450 days!
[QUOTE=pinhodecarlos;513043]Grand Prix 2,
How much sieve was done on this? At my pace my range will be completed within 450 days![/QUOTE] In the 10M range, for the remaining unfactored exponents, TF was done to 66 bits by axn, and P−1 was done by me to B1=p/200, B2=p/10. It's certainly not as deep as for Mersenne, where large numbers of people have contributed to factoring. For Mersenne, the levels in the 10M range are typically TF=69 and P−1 to about B1=p/40, B2=p/2.

However, even if we did have TF and P−1 up to Mersenne levels, it would only eliminate a few percent of the remaining exponents, surely less than 10%. Finding factors gets exponentially harder at larger sizes, and most factors will simply remain out of reach. So one way or another, there's no way to avoid doing most of those PRP tests. Progress on Mersenne is faster only because the work is split up among a much larger number of contributors.

However, with 2048-bit residues, if you do a PRP test and a factor is found later, you can do a very quick Gerbicz cofactor-compositeness test on the new cofactor. So the PRP test is not wasted, because there is at least a small chance of discovering a new very large PRP. I find that even with a simple implementation using GMP, a Gerbicz cofactor-compositeness test is about 50 times faster than a PRP cofactor test using the latest mprime AVX-512 implementation. Note, however, that the Gerbicz test only removes the need to keep redoing PRP tests of new cofactors every time a new factor is discovered; you still have to do one initial PRP test and record the 2048-bit residue, because the Gerbicz test needs that residue as input.
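To make the cofactor idea concrete, here is a toy Python sketch, using a Mersenne number for simplicity (the Wagstaff case works the same way with different constants). The residue convention r = 3^(2^p) mod M and the function name are assumptions for illustration, not mprime's actual format:

```python
# Toy sketch of a quick cofactor-compositeness check from a stored PRP
# residue (Mersenne M = 2^p - 1 used for concreteness; assumed residue
# convention r = 3^(2^p) mod M).  If a factor f of M is known, then for
# the cofactor c = M/f we have  2^p = c*f + 1 = (c-1)*f + (f+1),  so
#     c is a base-3 Fermat PRP  iff  r mod c == 3^(f+1) mod c,
# and the right-hand side needs only one cheap powmod with the small
# exponent f+1 instead of a full-length PRP test of c.

def cofactor_is_prp(p, r, f):
    """Check the cofactor (2^p - 1)/f for probable primality, given r."""
    m = (1 << p) - 1
    c = m // f
    return r % c == pow(3, f + 1, c)

# Tiny demo: 2^11 - 1 = 2047 = 23 * 89; both cofactors are prime.
p = 11
r = pow(3, 1 << p, (1 << p) - 1)   # the stored PRP residue for M
print(cofactor_is_prp(p, r, 23), cofactor_is_prp(p, r, 89))   # True True
```

The saving comes entirely from reusing the stored residue: the only new work is the small powmod with exponent f+1, which is why the check is so much cheaper than redoing the PRP test.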
[QUOTE=GP2;513047]In the 10M range, for the remaining unfactored exponents, TF was done to 66 bits by axn, and P−1 was done by me to B1=p/200, B2=p/10.[/QUOTE]
I should say, TF was done from 64 to 66 by axn, and below that by ATH and others. And below 64 bits, the factoring for each exponent stopped whenever a first factor was found, so there are small secondary factors remaining to be found.

If we look at the [URL="https://www.mersenne.org/primenet/"]Mersenne work distribution map[/URL], as of today the line for the 10M range shows: [CODE] 10000000 61938 | 40593 [B]21345[/B] [/CODE] So for Mersenne, there are 21,345 unfactored exponents in the 10M range. For Wagstaff, there are currently 22,248 unfactored exponents in the 10M range. And the 10.2M subset contains 2206 of them, very close to 10%. So based on that, if we did factor Wagstaff exponents as thoroughly as Mersenne, we'd only find factors for about 4% of the currently unfactored Wagstaff exponents in the 10M range.

As you know, factoring gets exponentially harder as you increase bit-length (for TF) or non-smoothness (for P−1). For any exponential curve, there is only a very narrow transition zone where you go from "incredibly tiny" to "impossibly large". The overwhelming majority of exponents are either trivial to factor or impossible to factor. All the years of effort of Primenet and all the GHz-days thrown at TF and P−1 actually only made a difference for a very small subset of exponents. But of course, it's impossible to know in advance which exponents those are.
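Just to make the arithmetic explicit (all counts taken from the figures above):

```python
# Sanity check of the percentages quoted above.
mersenne_unfactored = 21345    # 10M range, from the work distribution map
wagstaff_unfactored = 22248    # 10M range, current Wagstaff count

# If Wagstaff factoring were as deep as Mersenne's, roughly this many of
# the currently unfactored Wagstaff exponents would gain a factor:
extra = wagstaff_unfactored - mersenne_unfactored
print(extra)                                           # 903
print(round(100 * extra / wagstaff_unfactored, 1))     # 4.1  (the "about 4%")
print(round(100 * 2206 / wagstaff_unfactored, 1))      # 9.9  (10.2M subset, "close to 10%")
```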
Apologies, but I'm releasing my range. No way I'll commit my laptop for more than one year on this.
I, too, bit off a little more than I expected; in my case, it'll take me a month to free up a few cores, and then ~3 months to do the work. I'll get mprime going on one core in a few days, and then 3-5 more in May (sadly, not all on one machine). Carlos, why don't we share one 100k range for 3 months or so, e.g. you do 10k and I do 90k?
[QUOTE=pinhodecarlos;513106]Apologies but releasing my range. No way I’ll commit my laptop for more than one year on this.[/QUOTE]
[QUOTE=VBCurtis;513110]I, too, bit off a little more than I expected; in my case, it'll take me a month to free up a few cores, and then ~3 months to do the work.[/QUOTE] Not a problem. Two thousand exponents is a very large number, even for relatively low exponent ranges.

Currently I don't have any setup for automated assignment of individual exponents. Maybe there's some way to adapt it as a BOINC project, but I have no idea how to go about doing that. At some point, maybe a few months from now, I will resume my own testing using cloud resources.
1 Attachment(s)
[QUOTE=GP2;513117]Not a problem. Two thousand exponents is a very large number, even for relatively low exponent ranges.
Currently I don't have any setup for automated assignment of individual exponents. Maybe there's some way to adapt it as a BOINC project, but I have no idea how to go about doing that. At some point, maybe a few months from now, I will resume my own testing using cloud resources.[/QUOTE] Would you like to try [URL]https://boinc.tacc.utexas.edu/[/URL]? Attached are my tested numbers.
[QUOTE=GP2;513117]Maybe there's some way to adapt it as a BOINC project, but I have no idea how to go about doing that.[/QUOTE]
You would need to set up a BOINC server with an address to give to users (this part is easy/straightforward), and then write a wrapper application, because I don't think mprime has BOINC capabilities built in. The wrapper is basically a very simple program that translates BOINC's calls to run a task into the actual setup needed to launch the test, and then collects its result. Here's [URL="https://github.com/ibethune/llr_wrapper/"]an example[/URL]: the LLR wrapper used in PrimeGrid. Setting up that application on your server is then a matter of editing XML files.

Next you need to write a validator that checks the results. Decide whether you want double checking, with the validator comparing residues from two tests. Oh, and write work-generation scripts or software. A lot of fun if you're a programmer! It's very preferable to be familiar with PHP and MySQL, because you'll likely have to deal with them for various tasks.
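To give a flavor of what such a wrapper does, here is a rough Python sketch of the control flow only. Everything in it is illustrative: a real wrapper (like the llr_wrapper linked above) is written in C against the BOINC API and calls boinc_init()/boinc_fraction_done()/boinc_finish(); the command and file name below are stand-ins, not real mprime/LLR invocations:

```python
# Rough sketch of a BOINC wrapper's control flow (illustration only; a
# real wrapper links against the C BOINC API and also handles
# checkpointing, progress reporting, and BOINC's file-name mapping).
import subprocess

def run_task(cmd, result_path):
    """Launch the worker program, wait for it, and collect its output."""
    proc = subprocess.run(cmd, capture_output=True, text=True)
    if proc.returncode != 0:
        # A real wrapper would translate this into a BOINC error exit
        # so the server can reissue the workunit.
        raise RuntimeError(f"worker failed with exit code {proc.returncode}")
    with open(result_path, "w") as out:
        out.write(proc.stdout)       # e.g. the residue line to upload
    return proc.stdout.strip()

# Stand-in worker command instead of mprime/LLR, just to show the flow:
print(run_task(["echo", "residue: 0123456789abcdef"], "result.txt"))
```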
hi,
a BOINC project would be very nice! This is not an easy thing. You could look at [url]http://srbase.my-firewall.org/sr5/[/url] and [url]http://srbase.my-firewall.org/sr5/download/srbase-guide.pdf[/url]. They use LLR with a wrapper that comes with the BOINC server software (not sure about this), but if so, you would not have to develop your own wrapper or a native BOINC-integrated program.
Powered by vBulletin® Version 3.8.11
Copyright ©2000 - 2021, Jelsoft Enterprises Ltd.