You need to list all known factors as the last parameter, in " ".
Otherwise you will keep finding the easy known factors and their products. |
[QUOTE=Batalov;381835]You need to list all known factors as the last parameter, in " ".
Otherwise you will keep finding the easy known factors and their products.[/QUOTE] Ahh, I see. Thanks! |
I've read the ECM page on the Mersenne Wiki but I don't quite understand how the collaborated aspect works.
If I wanted to throw some computing power at the ECM of this exponent, how would I go about it without repeating a bunch of work? I get that there are a bunch of curves with different parameters defining them, and that they're "run" one at a time, but how do we keep track of which ones are done? |
[QUOTE=TheMawn;381859]I've read the ECM page on the Mersenne Wiki but I don't quite understand how the collaborated aspect works.
If I wanted to throw some computing power at the ECM of this exponent, how would I go about it without repeating a bunch of work? I get that there are a bunch of curves with different parameters defining them, and that they're "run" one at a time, but how do we keep track of which ones are done?[/QUOTE] You just do the same thing/curve over again. I think what makes it different/random is the random sigma? Dunno... I should probably look into the math of it. -Possibly Stupid/Wrong |
[QUOTE=TheMawn;381859]I've read the ECM page on the Mersenne Wiki but I don't quite understand how the collaborated aspect works.
If I wanted to throw some computing power at the ECM of this exponent, how would I go about it without repeating a bunch of work? I get that there are a bunch of curves with different parameters defining them, and that they're "run" one at a time, but how do we keep track of which ones are done?[/QUOTE] I've been wondering that too. Prime95's URL earlier in the thread is helpful for keeping track of the collective watermarks, but how it is calculated I'm not entirely sure. I suppose it's based on the sigma used in each test, which I presume is randomized, so perhaps it's impossible to hit all of them. The number of curves per mark may be the probability point where it's time to move on to the next tier. Hopefully someone who knows can explain it. [url]http://www.mersenne.org/report_ecm/default.php?txt=0&ecm_lo=1&ecm_hi=1&ecmnof_lo=7508981&ecmnof_hi=7508981[/url] |
[QUOTE=pdazzl;381863]I've been wondering that too. Prime95's URL earlier in the thread is helpful for keeping track of the collective watermarks, but how it is calculated I'm not entirely sure. I suppose it's based on the sigma used in each test, which I presume is randomized, so perhaps it's impossible to hit all of them. The number of curves per mark may be the probability point where it's time to move on to the next tier. Hopefully someone who knows can explain it.
[URL]http://www.mersenne.org/report_ecm/default.php?txt=0&ecm_lo=1&ecm_hi=1&ecmnof_lo=7508981&ecmnof_hi=7508981[/URL][/QUOTE] I've asked a similar question myself some time ago, and it was explained very clearly to me here: [URL]http://mersenneforum.org/showthread.php?t=18544[/URL]

Each curve has a random sigma, so you can run dozens in parallel. For instance, for M7508981, running XXX curves with B1=YYYY, you would expect to have found a ZZ-digit factor at least once (if there is one). The chance of missing a ZZ-digit factor with that number of curves at that B1 value is 1/e (~37%).

Curves with a high B1 value take more time/computational resources, so the idea is to look for increasingly bigger factors so as not to 'waste' resources on running curves with B1 in the millions and only finding very small factors. For instance:
1 curve with B1=50,000 takes only 0.3316 GHz-days
1 curve with B1=110,000,000 takes a massive 729.482 GHz-days

But as pointed out earlier, the server can convert the effort for non-standard B1 values, so if everyone reports their results to Primenet we don't have to worry about keeping track of the number of curves/B1 values ourselves.

[QUOTE=Prime95;381821]If you report your results to Primenet, then the server will convert all effort to the standardized B1 values. See [URL]http://www.mersenne.org/report_ecm/default.php?txt=0&ecm_lo=1&ecm_hi=1&ecmnof_lo=7508981&ecmnof_hi=7508981[/URL]

There is nothing wrong with B1=800K-1M, nor is there any problem with jumping straight to 1M.[/QUOTE]

Edit: I'm doing a few curves with B1=1,000,000 |
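As an aside, the 1/e figure falls straight out of elementary probability: if each curve independently finds a given factor with probability 1/N, then after running the "expected" N curves the miss probability is (1 - 1/N)^N, which tends to e^-1 ≈ 37%. A minimal Python sketch (the curve counts here are made up purely for illustration):

```python
import math

def miss_probability(curves_run, expected_curves):
    """Probability that none of `curves_run` independent curves finds
    the factor, when each curve succeeds with probability
    1/expected_curves."""
    return (1 - 1 / expected_curves) ** curves_run

# Running exactly the expected number of curves always leaves ~1/e missed,
# regardless of what that number is:
for n in (100, 1000, 10000):
    print(n, miss_probability(n, n))  # all close to exp(-1) ~ 0.368
```

This is also why "how many curves at this B1" tables can quote a single curve count per factor size: it is the point where the residual miss chance drops to roughly 1/e.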
[QUOTE=TheMawn;381859]I get that there are a bunch of curves with different parameters defining them, and that they're "run" one at a time,[/QUOTE]
Quite right. The parameter that allows you to run many similar curves is called the sigma. This is a randomly-chosen value.

(This is tangential to your question, and may be something you already know, but is possibly interesting:) For each potential factor p, there is a group order that can be calculated from p and sigma (via some [URL="http://www.mersenneforum.org/showpost.php?p=56055&postcount=7"]complex calculations[/URL], but trivial for a computer). The group order is within [TEX]2\sqrt{p}[/TEX] of [TEX]p+1[/TEX] (the Hasse bound), so it's always roughly the same size as p. If this group order's factorization meets the B1 and B2 bounds that you're running at, you'll find the factor.

You might notice that this is similar to P-1 factoring, where if p-1's factorization meets the B1 and B2 bounds, you'll find the factor. But instead of only having one shot at it, you have a near-limitless number of different group orders that can be chosen for each p, via the sigma. This is the upside to ECM.

[QUOTE=TheMawn;381859]but how do we keep track of which ones are done?[/QUOTE] The sigma is chosen from a large enough range that collisions are rare, so we don't have to keep track of which sigmas have already been run. We could choose sigmas sequentially, but by choosing them randomly, we allow this easy parallel collaboration. |
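To make the "factorization meets the B1 bound" condition concrete: stage 1 of ECM finds the factor when the group order is B1-powersmooth, i.e. every prime-power divisor is at most B1 (stage 2 relaxes this by allowing one extra prime up to B2). Here is a hedged toy sketch of that smoothness test in Python; it uses plain trial division, so it is only meant for illustration on small numbers, not for real group orders:

```python
def is_b1_powersmooth(n, b1):
    """Return True if every prime-power factor of n is <= b1 --
    roughly the condition for ECM stage 1 (bound B1) to succeed on a
    curve whose group order is n.  Trial division; toy sizes only."""
    p = 2
    while p * p <= n:
        if n % p == 0:
            pk = 1
            while n % p == 0:
                n //= p
                pk *= p          # full prime power p^k dividing n
            if pk > b1:
                return False
        p += 1
    return n <= b1               # any leftover n > 1 is prime

print(is_b1_powersmooth(2**4 * 3**2 * 5, 50))  # True: 16, 9, 5 all <= 50
print(is_b1_powersmooth(2 * 97, 50))           # False: 97 > 50
```

With a random sigma giving an essentially random group order near p, each curve is a fresh roll of the dice on this smoothness condition, which is exactly the advantage over P-1's single fixed value of p-1.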
[QUOTE=Batalov;381715]...or maybe I just like the suspense and submit only one factor per day? :rolleyes:[/QUOTE]
@Batalov Are you still ecm'ing 9100919? Is it fair game for folks to test as well? Maybe we can have a dueling banjo session to see which one gets to 12 factors first :) [url]http://youtu.be/4gw0fxuIvBM[/url] |
I'm wondering about sigma values. Recently I have been working on M25xxx, and in the first few batches I was not aware that, across several hundred worktodo lines, there were only twelve different exponents! I found a factor, and later found another one, but on submission the server rejected it as already found. Looking it up on mersenne.ca, I saw it had been found by myself!
I then put all the entries for new work in a spreadsheet, sorted them, combined the curves for each exponent into a single assignment, and pasted the result back into worktodo. The reasoning is that, in a single run (of several thousand curves), the client will not repeat a sigma, whereas with fifteen or twenty assignments for the same exponent (the manual system hands out 150 curves per assignment), duplication becomes possible. I hope I'm understanding this correctly. If we are doing mass work on this, then it makes sense to run as many curves as required for the bounds, all in one discrete sequence, to avoid duplicating possible sigma values. |
There should be enough sigma values (quite possibly 2^32 or some such number) that no overlaps should happen. Multiple sigma values can find the same factor. Maybe reducing the number of curves run in each batch and reporting them to the server more often would be nice. In this case I don't think you want to stop after a factor is found anyway: finding a factor a few times doesn't reduce the chance of finding another one, and occasionally two factors get found at once.
|
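A rough birthday-problem estimate backs this up. Assuming a sigma range of 2^32 (an assumption here; the actual range depends on the client), the chance that any two of k randomly chosen sigmas coincide is about 1 - exp(-k(k-1)/2^33), which stays small for realistic batch sizes, and a duplicated sigma only wastes a single curve anyway. A quick sketch:

```python
import math

def sigma_collision_probability(num_curves, sigma_range=2**32):
    """Birthday-problem estimate of the chance that at least two of
    num_curves uniformly random sigmas coincide.  The 2**32 default
    range is an assumption; the real range depends on the ECM client."""
    k = num_curves
    # Standard approximation: P(collision) ~= 1 - exp(-k*(k-1) / (2*N))
    return 1 - math.exp(-k * (k - 1) / (2 * sigma_range))

print(sigma_collision_probability(1000))   # small for a typical batch
print(sigma_collision_probability(10000))  # grows quadratically with k
```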
Okay, this helps a lot. Thanks.
Increasing B1 increases the maximum size of the factors that can be found, yes? Does that mean we start with smaller B1 values for the sake of quickly finding some small factors, but eventually go bigger to find the bigger ones? EDIT: As a follow-up question: does that mean that after a significant amount of work at, say, B1 = 10,000,000, doing B1 = 1,000,000 is pointless? |