mersenneforum.org

mersenneforum.org (https://www.mersenneforum.org/index.php)
-   PrimeNet (https://www.mersenneforum.org/forumdisplay.php?f=11)
-   -   'twas brillig (translation: 'not needed') (https://www.mersenneforum.org/showthread.php?t=20683)

dbaugh 2015-11-21 22:24

'twas brillig (translation: 'not needed')
 
I had trial factored 12827821 from 2^64 through 2^78. ANONYMOUS had trial factored up to 2^64. No factors had been found.

2^78 to 2^79, 2^79 to 2^80 and 2^80 to 2^81 were being trial factored on three different machines. A factor was found and reported in the 2^80 to 2^81 range. PrimeNet will not allow the two lower ranges which are now completed to be reported as no-factor. If you look up the exponent it appears as though 2^78 to 2^79 and 2^79 to 2^80 have not been searched for factors. This is not true.

Even if there is no credit available for searching for additional factors of exponents for which a factor has been found, there should be a way to report that the ranges have been searched. What do I do now?

snme2pm1 2015-11-21 23:44

[QUOTE=dbaugh;416793]2^78 to 2^79, 2^79 to 2^80 and 2^80 to 2^81 were being trial factored on three different machines. A factor was found and reported in the 2^80 to 2^81 range.[/QUOTE]
The hazard arose right there, especially if results were automatically submitted to PrimeNet.
[QUOTE=dbaugh;416793]PrimeNet will not allow the two lower ranges which are now completed to be reported as no-factor. ... What do I do now?[/QUOTE]
You might petition George with the details of the work, though I don't know if there is a convenient mechanism to introduce such results.

kladner 2015-11-22 02:35

It is safest to make sure that TF results get submitted in order. MISFIT can do this for you, [U]if you run Windows.[/U] IIRC, people use it to coordinate multiple machines. I use it to run multiple GPUs in the same system.

In the Configuration Editor, there is an option to NOT export 'partial results.' The entire range assigned must be complete before MISFIT will submit it, and it will not send them out of order.

EDIT: Sorry for providing useless information, if you are not a Windows user. :wink:

axn 2015-11-22 03:41

[QUOTE=snme2pm1;416804]You might petition George with the details of the work[/QUOTE]
+1

LaurV 2015-11-22 10:14

It smells like a p-1 result (B1=29k, B2=3M6 would find it in minutes). I won't give TF credit for lower bits. Sorry.
We already had too many guys here trying to get lots of TF credit for P-1 factors.
You may be an honest guy, but I find it hard to believe you went through days/weeks of TF before trying minutes of P-1. You were silly in this case; but if that is indeed what happened, you should have other work done and reported at these bit levels, and eventually factors found in this range. Have you?

dbaugh 2015-11-22 16:15

More ignorant than silly. I do not understand p-1 and how it can be used to clear a bit range of factors. My main interest is that there is a record that these bit ranges have been exhaustively searched and there are no factors there. Here is a small selection of some other exponents I have searched at these bit levels and beyond with no factors found: 9007753, 9007903, 9027433.

If p-1 is so quick, I asked a couple of years ago for a factor of 9007753. Please find one for me.

manfred4 2015-11-22 16:20

Another question first: how long did it take you to TF 9007753 to 82 bits?

dbaugh 2015-11-22 16:30

I started at 68 to 69 four years ago, so I do not know the total time spent. It took just over two and a half months to do 81 to 82.

dbaugh 2015-11-22 20:59

How does one calculate B1 and B2 from the exponent and factor? I saw another post where you said a guy's B2 was too big to be P-1. People were questioning his paucity of factors from TFing. It turned out his TF factors were being recorded as P-1. That was happening to me regularly a year ago, but not this time.

LaurV 2015-11-23 02:49

[QUOTE=dbaugh;416895]How does one calculate B1 and B2 from the exponent and factor? I saw another post where you said a guy's B2 was too big to be P-1. People were questioning his paucity of factors from TFing. It turned out his TF factors were being recorded as P-1. That was happening to me regularly a year ago, but not this time.[/QUOTE]
This is a good and practical question, so let's answer it:

You have to factor q-1, where q is your found factor. Since all factors of 2^p-1 are of the form 2*k*p+1, your q-1 is 2*k*p for some positive k. Take p and one factor of 2 out of the list of prime factors, and what is left is your k.
The P-1 factoring algorithm would find that factor q if either (1) B1 is at least the largest prime factor of k, or (2) B1 is at least the second-largest prime factor of k and B2 is at least the largest (technically, B2 can be a bit smaller than the largest, depending on your memory amount).

Which of the two, (1) or (2), finds the factor faster depends on the gap between the largest and second-largest factors of k, and also on how much memory you give the P-1 program. More memory makes stage 2 of the algorithm faster (in practice it still takes about the same time as stage 1; it just goes deeper, using a higher B2 and increasing your chances of finding a factor, but that is a different story).

Example for your factor (we use pari/gp to factor such small numbers):

[CODE]gp > factorint(1426192839661371189084169-1)
time = 99 ms.
[ 2 3]
[ 3 1]
[ 43 1]
[ 1069 1]
[ 28493 1]
[ 3536957 1]
[[COLOR=Purple]12827821 [/COLOR]1]
gp >[/CODE]So, your q-1 factors as 2[SUP][COLOR=Magenta]3[/COLOR][/SUP]*3*43*1069*28493*3536957*[COLOR=Purple]12827821[/COLOR], from which we take out one 2 and the "p" (your exponent, marked purple), and we get your k: k=2[SUP][COLOR=Magenta]2[/COLOR][/SUP]*3*43*1069*[COLOR=Blue]28493[/COLOR]*[COLOR=Red]3536957[/COLOR]

So, (1) Running P-1 with B1 higher than (or equal to) 3536957 would find the factor q in stage 1.
Or, (2) Running P-1 with B1>=28493 and B2>=3536957 would find your factor in stage 2.
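The recipe above can be reproduced with a minimal Python sketch (my own, not part of any GIMPS tool); trial division is enough here because q-1 is very smooth, and the exponent and factor are the ones from this thread:

```python
# Re-derive k and the P-1 bounds for the factor of M12827821 discussed above.
def factorize(n):
    """Trial-division factorization, returned as primes with multiplicity."""
    primes = []
    d = 2
    while d * d <= n:
        while n % d == 0:
            primes.append(d)
            n //= d
        d += 1
    if n > 1:
        primes.append(n)
    return primes

p = 12827821                       # the Mersenne exponent
q = 1426192839661371189084169      # the factor found between 2^80 and 2^81

fs = factorize(q - 1)              # 2^3 * 3 * 43 * 1069 * 28493 * 3536957 * p
fs.remove(2)                       # take out one factor of 2 ...
fs.remove(p)                       # ... and the exponent, leaving the factors of k
k = 1
for f in fs:
    k *= f
assert q == 2 * k * p + 1          # factors of 2^p - 1 always have this form

largest, second = sorted(fs)[-1], sorted(fs)[-2]
print(largest, second)             # 3536957 28493
# (1) stage 1 alone finds q when B1 >= 3536957
# (2) stage 2 finds q when B1 >= 28493 and B2 >= 3536957
```

Running it prints the two thresholds, matching the B1/B2 conditions stated above.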

For this particular case, the factor would be found fastest with B1 ~= 30-40k and B2 ~= 3.6M, but those can vary according to your memory allocation. Generally we use B2 ~= 100*B1, because stage 2 is about 100 times faster than stage 1, and the best result comes when the algorithm spends the same amount of time in each stage.

Anyhow, [B][U]you should always run P-1 to some limits before attempting such long TF jobs.[/U][/B] You just wasted a lot of your time and resources that could have gone into other things.

For this particular case, P-1 would have been a job of a few minutes with the "right guess" of B1 and B2. But since we never guess right and just use generic B1 and B2, you would have spent at most a few hours for this factor.

OTOH, I still think you found this factor by P-1 or ECM. I see you are running a lot of TF [U]and ECM[/U] in that range, so you are not exactly innocent there :razz: It seems to me you know what you are doing, and you usually stop at 75-76 bits, so my next question would be: why did you decide to go to 81-82 bits for this particular exponent?

I may be totally wrong, and you may be an honest guy, you know... but as I said, we have a lot of guys (and gals, hehe) here trying to take advantage of the system, and sooner or later we catch them. The system is not fool-proof; it is designed for people who want to contribute and do real, honest work. If you don't know, then you'd better ask. We will answer questions when we know the answer, and someone will help you use your resources better for you and the project. There is no shame in asking - we even appreciate it; we say, hey, look, a guy who wants to learn - those are rare...

dbaugh 2015-11-23 05:07

Great tutorial! Of the 400+ LL's I have done, 9007753 was my one bad residue. I would like to redeem myself with this exponent by finding a factor of it. I do high TF-ing of small exponents because with my GPUs I can get a lot of credit fast. My original ID (DB11) from 1997 did not migrate and I lost 317 LLs. The first was 2194013. I am revisiting those exponents and ones near them with TF.

I have used ECM to do what you suggest I use P-1 for. That is, find factors so I do not need to TF.
e.g., 5551699, 5592527, 9027611, 11003933.

Here is a copy of the results file which includes the factor you seem to find of questionable provenance. It was finding that factor that inspired me to catch up on my submissions.

[Sun May 31 13:10:10 2015]
no factor for M11003939 from 2^78 to 2^79 [mfaktc 0.21 barrett87_mul32_gs]
[Thu Jun 18 06:01:49 2015]
no factor for M11003939 from 2^79 to 2^80 [mfaktc 0.21 barrett87_mul32_gs]
[Tue Jul 28 09:22:56 2015]
no factor for M9007903 from 2^80 to 2^81 [mfaktc 0.21 barrett87_mul32_gs]
[Sun Sep 06 13:29:03 2015]
no factor for M9027433 from 2^80 to 2^81 [mfaktc 0.21 barrett87_mul32_gs]
[Sun Oct 11 00:18:40 2015]
no factor for M11003939 from 2^80 to 2^81 [mfaktc 0.21 barrett87_mul32_gs]
[Mon Nov 09 13:27:31 2015]
M12827821 has a factor: 1426192839661371189084169 [TF:80:81:mfaktc 0.21 barrett87_mul32_gs]
[Mon Nov 09 17:09:41 2015]
found 1 factor for M12827821 from 2^80 to 2^81 [mfaktc 0.21 barrett87_mul32_gs]

Again, my real issue is that if you look at the exponent status of 12827821 it appears as though the range from 2^78 to 2^80 has not been searched. It has. If a bigger factor has been found it should still be possible to document cleared ranges even if no credit is given. If I had submitted the results in a different order everything would have been fine.

LaurV 2015-11-23 05:48

[QUOTE=dbaugh;416929] I do high TF-ing of small exponents because with my GPUs I can get a lot of credit fast
[/QUOTE]
This is another point where you should "ask first". If credit is what you are after, you can actually get [U]more[/U] credit by factoring on the LL and DC fronts than by factoring "at the extremes" (too low or too high an exponent, too high a bit level), because mfaktX is "tuned" for those ranges. For example, TF-ing a 70M exponent to 74 bits gives you about 450 GHzDays/day on a GTX580, but TF-ing a 350M exponent to 72 or 73 bits gives only 380-399 GHzD/D. I don't have the scores for lower exponents at hand, but they certainly give lower credit too.

[QUOTE]
. My original ID (DB11) from 1997 did not migrate and I lost 317 LLs.
[/QUOTE]This is a very simple problem for George to solve: just drop him a PM here (his user is "prime95"; he doesn't waste his time reading all these arguments) with your request and some identifying details of the work you did, and it will be no problem for him to update your scores. He is a very understanding guy.

[QUOTE]
The first was 2194013. I am revisiting those exponents and ones near them with TF.
[/QUOTE]Let it go. Working on the LL/DC fronts gives you more credit, completes the assignments faster, and helps the project more. Trust me!

[QUOTE]
[Mon Nov 09 13:27:31 2015]
M12827821 has a factor: 1426192839661371189084169 [TF:80:81:mfaktc 0.21 barrett87_mul32_gs]
[Mon Nov 09 17:09:41 2015]
found 1 factor for M12827821 from 2^80 to 2^81 [mfaktc 0.21 barrett87_mul32_gs]
[/QUOTE]This seems very legit to me. With "stop after class when a factor is found" set, a class may take 6 hours (or more, depending on your card), and the timing coincides with what my cards could do. I have no argument here.

The reason I said "I would not give credit for the lower bits" was that I assumed (wrongly, it seems) that this factor was found by P-1, and [B][U]if[/U][/B] the factor was found by P-1, then there is no guarantee that other, smaller factors do not exist. P-1 does [B][U]not[/U][/B] find the smallest factor; it finds the [U]smoothest[/U] factor. If an 81-bit factor is found by P-1, there could be other, undiscovered, smaller factors that were not smooth enough to be P-1-findable.

dbaugh 2015-11-23 09:53

I am glad I was able to put your mind at ease.

When I brought the disappeared id up with George four years ago, his final word was, "There isn't a way to link a v5 account to results that were manually submitted in the v4 era. "

It is just the issue of smaller undiscovered factors that I am concerned about. I know that 12827821 has been trial factored for all factors less than 2^81 because I did it. With the current submission methodology this is not discernible from its exponent status. From the exponent status it is unclear that there are no possible factors in the range of 2^78 to 2^80. Even if someone finds a 100-bit factor, there needs to be a way to report that smaller factors have been excluded.

LaurV 2015-11-23 10:46

[QUOTE=dbaugh;416868]Here is a small selection of some other exponents I have searched at these bit levels and beyond with no factors found: 9007753, 9007903, 9027433.
If p-1 is so quick, I asked a couple of years ago for a factor of 9007753. Please find one for me.[/QUOTE]
I missed that post, due to the other one I replied to.
Ignoring the last sentence, which was deliberately aggressive (or ironic), you did not convince me.
For the three exponents you give as example, you followed the "book procedure" very exactly, which means you know exactly what you are doing. You did the [U]right[/U] amount of P-1 in the [U]right[/U] place (i.e. after the [U]right[/U] bitlevels). [edit: [URL="http://www.mersenne.org/report_exponent/?exp_lo=9007753&exp_hi=&full=1"]example[/URL]]

If you followed the same procedure for the 12M exponent, then it is more than likely you found the factor with P-1.

I think George had his reasons. Sorry.
I am out of this subject.

alpertron 2015-11-23 12:37

[QUOTE=dbaugh;416868]If p-1 is so quick, I asked a couple of years ago for a factor of 9007753. Please find one for me.[/QUOTE]
According to the [URL="http://www.mersenne.org/report_ecm/?txt=0&ecm_lo=9007753&ecm_hi=9007753&ecmnof_lo=1&ecmnof_hi=1"]ECM report for M9007753[/URL], its prime factors have more than 35 digits (116 bits) with high probability. This means that TF is pointless for this number, and P-1 will not be useful either.
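The digits-to-bits conversion here can be sanity-checked in a line of Python:

```python
import math
# Factors below the 35-decimal-digit ECM level are (probabilistically) ruled out;
# 35 digits corresponds to about 116 bits, far above the 81-bit TF level.
print(round(35 * math.log2(10)))   # 116
```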

Madpoo 2015-11-23 17:04

[QUOTE=dbaugh;416951]I am glad I was able to put your mind at ease.

When I brought the disappeared id up with George four years ago, his final word was, "There isn't a way to link a v5 account to results that were manually submitted in the v4 era. "

It is just the issue of smaller undiscovered factors that I am concerned about. I know that 12827821 has been trial factored for all factors less than 2^81 because I did it. With the current submission methodology this is not discernible from its exponent status. From the exponent status it is unclear that there are no possible factors in the range of 2^78 to 2^80. Even if someone finds a 100-bit factor, there needs to be a way to report that smaller factors have been excluded.[/QUOTE]

I think that *typically* someone doing TF work will do it in order, going from smaller bit ranges to higher ones. I think it's safe to assume that in nearly all cases, if a factor was found by TF at some bit level, everything below that was also TF'd.

There are bound to be oddball cases, and then jerks that have been caught finding a factor by P-1 and then "helpfully" submitting "no factor found by TF" results for anything below where they found their factor. It's people like that that make some of us a bit skeptical of cases where a factor should have been easily found by P-1 and then someone claims to have found it by exhaustive TF work, thus getting more credit than they would by merely submitting the P-1 result in the first place. For some people, placing higher on the stats list is apparently worth cheating to get there. Weird.

Anyway, while there are some who like to make sure all the exponents are properly TF'd all the way through, you'll probably find that in most cases, if a factor is found it's not really important anymore if everything up to that was also tried. For some specific numbers or where they have "special meaning" in some way to some one (like you), it may matter more, but in general, a factor means it's composite, move on to the next one. :smile:

Madpoo 2015-11-23 17:07

[QUOTE=dbaugh;416951]When I brought the disappeared id up with George four years ago, his final word was, "There isn't a way to link a v5 account to results that were manually submitted in the v4 era. "[/QUOTE]

I just double-checked and I couldn't find your v4 account either, so George was correct, there's probably nothing to be done for those <= 1997 submissions of yours.

dbaugh 2015-11-23 17:45

LaurV, I am surprised and saddened that you still think I cheated on finding that factor. I know otherwise. TF will find every factor that P-1 will if you TF to a high enough bit level. It does not care that P-1 would have found it with less work. If a high number of my TF searches found big factors I would understand your suspicion, but I have done plenty of high-bit-level work on other exponents without finding a factor. Try to be less cynical.

DB11 may not be out there, but my name is still associated with those exponents. A search could be done for that name and merged with dbaugh. Or I could provide a file of all those old results - 543 records in total. It is not that nothing can be done about migrating this work; it just is not important enough to do.

Thanks everyone, for your feedback and insights. I'll just keep soldiering on as I have for the past 18 years.

Onwards and upwards.

Batalov 2015-11-23 21:49

I think the appropriate metaphor for searching for factors could be playing blackjack - in the sense that one may have happened to observe the play for a while, idly [URL="https://en.wikipedia.org/wiki/Card_counting"]counting cards[/URL] (as some people do professionally), and decided to join the game on seeing that the table is hot, or to change tables if the table is countably cold.

On the other hand, if you know that the table is as cold as [URL="http://www.ebay.com/itm/like/400963508627?ul_noapp=true&chn=ps&lpid=82"]the morgue table[/URL] (the exponent was ECMd to 116 bits), why would one waste their time playing the game (TF at 80+epsilon bits) with perhaps <<1% of the average chance of winning at the next, completely average, table?! Just out of stubbornness? Which is fine, too. It is simply important to understand that one is doing exactly that.

Madpoo 2015-11-23 21:51

[QUOTE=dbaugh;416993]LaurV, I am surprised and saddened that you still think I cheated on finding that factor. I know otherwise. TF will find every factor that P-1 will if you TF to a high enough bit level. It does not care that P-1 would have found it with less work. If a high number of my TF searches found big factors I would understand your suspicion but, I have done plenty of high bit level work on other exponents without finding a factor. Try to be less cynical.[/QUOTE]

It's the internet where nobody really knows anyone. :smile:

Given some of the recent shenanigans with TF results I can understand where he's coming from as far as not trusting someone right away, especially if it seems a little strange.

Don't take it personally though. I think it was just a little strange (and you'd probably agree) to spend so much time taking particular exponents up to such a high TF level for no apparent reason. Now that you've explained why you're going after these exponents specifically, well, it's still a little weird, but I suppose I get it. :smile:

Like he said, you'd be better off doing different TF work if you were interested in more credit, but as always, we'd encourage people to participate in any way they think is the most fun. I don't care about credit or ghz/days myself and I've spent a lot of CPU resources on what some would consider wild goose chases, but I enjoyed it.

dbaugh 2015-11-23 22:29

Surprisingly, this is not the first time weird and I have been in the same room. ;) I guess the takeaway from all of this is that I should stop my TF run on 9007753 from 2^82 to 2^83. That one bad residue is never going away even if I find a factor. I also need to be more careful about the order in which I submit results. I use my CPUs mainly for adding to OEIS. TF on my GPUs has such a high cost/benefit that it is irresistible to me. I should probably start looking for factors of non-LLed exponents. If LL ever gets really fast on a GPU, watch out.

Batalov 2015-11-23 22:39

1 Attachment(s)
[QUOTE=dbaugh;417047]That one bad residue is never going away even if I find a factor.[/QUOTE]
Right! Anyone who didn't have a bad residue... ...

Well, like the old saying --

Madpoo 2015-11-24 03:54

[QUOTE=Batalov;417048]Right! Anyone who didn't have a bad residue... ...[/QUOTE]

Discounting some bad residues in my history that I can chalk up to Prime95 errors (the shift count being within a certain proximity of the exponent itself, for instance), I still have some in the distant past that ended up bad, which I can only attribute to machine error.

All told I think there are about 27 bad ones in my past (mostly from the v4 days), like this one:
[URL="http://www.mersenne.org/M1843147"]http://www.mersenne.org/M1843147[/URL]

Curiously that one got retested by me recently when I triple-checked everything below 2M. :smile:

EDIT: Kind of funny looking back at the bad ones... some were my crappy home machines, some were from my brother and a friend of mine, and surprisingly two of them came from the (infamous) US WEST machines. Weird.

dbaugh 2015-11-25 08:49

Here is a screen scrape of the command window where I found the 80 bit factor of 12827821 using TF.

Date Time | class Pct | time ETA | GHz-d/day Sieve Wait
Nov 09 03:05 | 4524 98.0% | 2664.8 14h03m | 644.70 82485 n.a.%
Nov 09 03:50 | 4535 98.1% | 2665.0 13h19m | 644.66 82485 n.a.%
Nov 09 04:34 | 4536 98.2% | 2665.2 12h35m | 644.60 82485 n.a.%
Nov 09 05:18 | 4539 98.3% | 2664.8 11h50m | 644.70 82485 n.a.%
Nov 09 06:03 | 4544 98.4% | 2664.7 11h06m | 644.71 82485 n.a.%
Nov 09 06:47 | 4548 98.5% | 2664.8 10h21m | 644.69 82485 n.a.%
Nov 09 07:32 | 4551 98.6% | 2664.8 9h37m | 644.69 82485 n.a.%
Nov 09 08:16 | 4556 98.8% | 2664.7 8h52m | 644.72 82485 n.a.%
Nov 09 09:00 | 4559 98.9% | 2665.0 8h08m | 644.64 82485 n.a.%
Nov 09 09:45 | 4560 99.0% | 2665.2 7h24m | 644.60 82485 n.a.%
Nov 09 10:29 | 4563 99.1% | 2665.6 6h39m | 644.50 82485 n.a.%
Nov 09 11:14 | 4571 99.2% | 2666.1 5h55m | 644.39 82485 n.a.%
Nov 09 11:58 | 4580 99.3% | 2665.2 5h10m | 644.59 82485 n.a.%
Nov 09 12:43 | 4583 99.4% | 2666.4 4h26m | 644.30 82485 n.a.%
Nov 09 13:27 | 4584 99.5% | 2666.0 3h42m | 644.40 82485 n.a.%
M12827821 has a factor: 1426192839661371189084169
Nov 09 14:11 | 4595 99.6% | 2666.1 2h57m | 644.39 82485 n.a.%
Nov 09 14:56 | 4599 99.7% | 2665.6 2h13m | 644.50 82485 n.a.%
Nov 09 15:40 | 4604 99.8% | 2665.6 1h28m | 644.50 82485 n.a.%
Nov 09 16:25 | 4611 99.9% | 2665.8 44m26s | 644.46 82485 n.a.%
Nov 09 17:09 | 4616 100.0% | 2665.9 0m00s | 644.43 82485 n.a.%
found 1 factor for M12827821 from 2^80 to 2^81 [mfaktc 0.21 barrett87_mul32_gs]
tf(): total time spent: 29d 17h 52m 0.112s

ERROR: get_next_assignment(): no valid assignment found in "worktodo.txt"

C:\David\mfaktc>

It shows something I did not expect. Even though the factor is 80.24 bits long, it was not found until very near the end of the 2^80 to 2^81 run. I naively would have expected it in the first quarter.

tha 2015-11-25 09:14

[QUOTE=dbaugh;417221]Here is a screen scrape of the command window where I found the 80 bit factor of 12827821 using TF.
C:\David\mfaktc>

It shows something I did not expect. Even though the factor is 80.24 bits long, it was not found until very near the end of the 2^80 to 2^81 run. I naively would have expected it in the first quarter.[/QUOTE]

Trial factoring ploughs through all the candidates one by one. P-1 does all the work in one accumulated run first and then extracts a factor, if there is one, in a second step (a single gcd), which takes only seconds.

dbaugh 2015-11-25 09:47

What I was trying to say is that I expected the factors being tried in the final classes to be near the upper limit of the bit range being tested when using mfaktc. This does not appear to be the case. TF may do the work one by one but not in the naïve order.

bloodIce 2015-11-25 13:10

The classes have nothing to do with the bit size of a factor. They are implemented for faster sieving, but each class covers pretty much the same bit-length range. When you go through the candidates in a class you start from some value close to 2^81 and end close to 2^82, then you go to the next class and start again from 2^81 (for TF 81-82). Theoretically the last class (4619) should hold the largest candidate factor and class 0 the candidate closest to the start, but the differences between the min and max candidates tested in each class are minute.

LaurV 2015-11-25 13:16

[QUOTE=dbaugh;417224]What I was trying to say is that I expected the factors being tried in the final classes to be near the upper limit of the bit range being tested when using mfaktc. This does not appear to be the case. TF may do the work one by one but not in the naïve order.[/QUOTE]

It doesn't work like that. A "class" is just the residue class of k (mod 4620). Your k is (q-1)/(2p) = (1426192839661371189084169-1)/(2*12827821) = 55589832429894804, which is 4584 (mod 4620). That is the class you found the factor in. The classes exist to eliminate as many candidates as possible from the start. Considering that the factor q has to be prime, and has to be 1 or 7 (mod 8), only 960 of the 4620 classes remain in which factors can occur. So, even without any sieving, we would need to test only about one candidate in five (960/4620) to see whether it is a factor.

Which 960 of the 4620 classes we need to test depends on p, but there are always 960 of them, distributed differently among the 4620 total classes. For example, your p is 12827821, which is 2701 (mod 4620).

If a number k is in class 4584 (as in your case, k = 4584 (mod 4620)), then q = 2kp+1 = 2*4584*2701+1 = 8809 (mod 9240). This means q is 1 (mod 8), and gcd(8809, 4620) = 1, so q can be prime and can be a factor, so we need to test this class.

If a number k is in class (say) 1501, k = 1501 (mod 4620), then q = 2kp+1 = 2*1501*2701+1 = 4923 (mod 9240). Now 4923 is divisible by 3, so q can never be prime, and we can skip all candidates in this class.

So, mfaktc uses the residue of p (mod 4620) to decide which classes to keep and which to skip when looking for factors. If you factor at, say, bit level 70 to 71, and your M = 2^p-1 has a factor q in the first tested class, it will be found first, no matter whether it is close to 70 or to 71 bits. Vice versa, if the factor is in the last class, it will be found last, no matter whether it is small (close to 70 bits) or big (close to 71 bits).

This is how it works.
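The class sieve described above can be sketched in a few lines of Python (my own sketch of the stated condition, not mfaktc's actual code), confirming both the 960-class count and the class of the reported factor:

```python
# Which residue classes of k (mod 4620) can contain a factor q = 2*k*p + 1?
# A class survives only if q can be prime: q must be 1 or 7 (mod 8) and not
# divisible by 3, 5, 7 or 11 (the odd primes in 4620 = 2^2 * 3 * 5 * 7 * 11).
p = 12827821
valid = [k for k in range(4620)
         if (2 * k * p + 1) % 8 in (1, 7)
         and all((2 * k * p + 1) % s for s in (3, 5, 7, 11))]
print(len(valid))           # 960 classes survive, as stated above

q = 1426192839661371189084169
k = (q - 1) // (2 * p)      # 55589832429894804
print(k % 4620)             # 4584 -- the class shown in the mfaktc log
```

(The result is well defined per class because 9240 is divisible by 8, 3, 5, 7 and 11, so every k in the same class gives the same residues of q.)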

Again, your work seems legit. You spent 844 minutes to finish 19 classes, which extrapolates to 42644 minutes for all 960 classes. That is 29 days, 14 hours, and change. Your time is about right, allowing that the computer was sometimes busier (watching a movie, say), which stretches things. It also seems the run went uninterrupted for those 29 days, as there is no sign of resumed work. Honestly, I don't think you went to the trouble of falsifying that; it would be a hell of a lot of work. This time I believe you really did the work. So yes, you convinced me. Now you should do something about it, because I don't know for how long... :razz:
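This extrapolation can be reproduced directly from the log excerpt: the posted timestamps run from 03:05 to 17:09 (844 minutes) over 19 class intervals.

```python
# Extrapolate the whole-run time from the 19 classes in the log excerpt above.
minutes_per_class = 844 / 19                 # ~44.4 minutes per class
total_minutes = minutes_per_class * 960      # 960 valid classes in all
days, rem = divmod(total_minutes, 24 * 60)
print(round(total_minutes), int(days), round(rem / 60, 1))   # 42644 29 14.7
```

That is consistent with the "total time spent: 29d 17h 52m" line mfaktc itself printed, allowing for the machine occasionally being busy with other things.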

