mersenneforum.org (https://www.mersenneforum.org/index.php)
-   YAFU (https://www.mersenneforum.org/forumdisplay.php?f=96)
-   -   Yafu (https://www.mersenneforum.org/showthread.php?t=10871)

Andi_HB 2011-04-18 19:06

Thank you Ben for your help!
It works now.

lorgix 2011-04-22 19:05

[CODE]prp32 = 15524804783950848781509427266277 (curve 3 stg2 B1=11000 sigma=2251079396 thread=0)[/CODE]What's that?

10metreh 2011-04-22 19:35

[QUOTE=lorgix;259340][code]prp32 = 15524804783950848781509427266277[/code][/QUOTE]

That factor is composite: it is 5334631446577 x 2910192567082106101. Not sure why it came up as a PRP because it isn't. A bug, perhaps?
What method does Yafu use to test for probable primality?

bsquared 2011-04-22 21:34

[QUOTE=lorgix;259340][CODE]prp32 = 15524804783950848781509427266277 (curve 3 stg2 B1=11000 sigma=2251079396 thread=0)[/CODE]What's that?[/QUOTE]

I'm not sure I understand your question. It looks like a line of factor.log indicating that a 32 digit probable prime was found by ECM in stage 2, using B1=11k, with the indicated sigma.
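For what it's worth, the fields of such a line can be pulled apart mechanically. A hypothetical sketch in Python (the exact factor.log format may vary between yafu versions, so the pattern below is an assumption based on the line posted above):

```python
import re

# The factor.log line as posted in this thread.
line = ("prp32 = 15524804783950848781509427266277 "
        "(curve 3 stg2 B1=11000 sigma=2251079396 thread=0)")

# Hypothetical pattern matching the posted line; not guaranteed to
# cover every factor.log entry yafu can emit.
m = re.match(
    r"prp(?P<digits>\d+) = (?P<factor>\d+) "
    r"\(curve (?P<curve>\d+) stg(?P<stage>\d+) "
    r"B1=(?P<b1>\d+) sigma=(?P<sigma>\d+) thread=(?P<thread>\d+)\)",
    line,
)
info = {k: int(v) for k, v in m.groupdict().items()}
print(info)  # digits=32, curve=3, stage=2, b1=11000, ...
```

Reading it this way confirms the interpretation: a 32-digit probable prime, found on curve 3 in ECM stage 2 with B1=11000 and the given sigma.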

[QUOTE=10metreh;259345]That factor is composite: it is 5334631446577 x 2910192567082106101. Not sure why it came up as a PRP because it isn't. A bug, perhaps?
What method does Yafu use to test for probable primality?[/QUOTE]

I use 20 bases of Miller Rabin, which appears to be working fine:
[CODE]
>> isprime(15524804783950848781509427266277)

not prime

ans = 0

>>[/CODE]

Somehow the factor must not have had a prp check... I'll have to trace through the code to see if there is a path for factors to slip past the check.

I'm guessing both factors were found simultaneously - a 32 digit factor with B1=11k would be a little improbable.
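For the curious, a 20-round Miller-Rabin test along the lines described above does flag that c32 as composite. This is a minimal Python sketch with random bases, not yafu's actual C implementation:

```python
import random

def is_probable_prime(n, rounds=20):
    """Miller-Rabin probable-prime test with `rounds` random bases."""
    if n < 2:
        return False
    for p in (2, 3, 5, 7, 11, 13):
        if n % p == 0:
            return n == p
    # Write n - 1 = d * 2^s with d odd.
    d, s = n - 1, 0
    while d % 2 == 0:
        d //= 2
        s += 1
    for _ in range(rounds):
        a = random.randrange(2, n - 1)
        x = pow(a, d, n)
        if x in (1, n - 1):
            continue
        for _ in range(s - 1):
            x = pow(x, 2, n)
            if x == n - 1:
                break
        else:
            return False  # a is a witness: n is definitely composite
    return True  # probably prime

n = 15524804783950848781509427266277
print(is_probable_prime(n))                        # False
print(5334631446577 * 2910192567082106101 == n)    # True
```

So the test itself agrees with 10metreh's factorization; the bug, if any, must be in the code path that skipped the check rather than in the primality test.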

mnh001 2011-04-26 23:51

Just wondering if there's a practical size limit on the numbers YAFU
can handle. Numbers of 135 digits or so I figure will take about a week
to factor. Obviously larger will take longer but is there any upper
limit to size?

bsquared 2011-04-27 03:58

[QUOTE=mnh001;259686]Just wondering if there's a practical size limit on the numbers YAFU
can handle. Numbers of 135 digits or so I figure will take about a week
to factor. Obviously larger will take longer but is there any upper
limit to size?[/QUOTE]


There is a hard stop at 150 digits. As far as I can recall, that limit is arbitrary. I probably put it there to stop people from trying to factor 512 bit RSA keys.

A few years ago I factored a c130 with yafu. It took about 6500 CPU hours over ~ 2 weeks on 50 cores (1.4 and 2.0 GHz Opteron cores). With newer technology and a faster program, the same job would probably take half as long. Is your guess of a week for a c135 assuming the use of about 50 modern cores full time?

Note also that numbers bigger than 100 digits or so use increasingly poorer estimates for job parameters (size of sieving interval, size of factor base, etc). So basing c130+ run time estimates on extrapolations of run times of completed sub-100-digit jobs will likely be wildly optimistic.

Have I discouraged you enough yet? :smile:

mnh001 2011-04-27 21:59

Discouraging indeed! :) No, my guess of a week was based on doing a 118 digit number in about 2.5 days with 2 threads. So that leaves the question then of what program to use when numbers get larger? Are there any compiled for windows?

On another note, as I write I'm factoring a 117 digit number and I thought it was getting near the end but here's what I see on the screen:

[code]
...
nfs: commencing msieve filtering
nfs: commencing lattice sieving over range: 3550000 - 3630000
nfs: commencing lattice sieving over range: 3470000 - 3550000
Warning: lowering FB_bound to 3549999.
Warning: lowering FB_bound to 3469999.
total yield: 659588, q=3550007 (0.00944 sec/rel)
total yield: 681816, q=3630019 (0.00965 sec/rel)
'cat' is not recognized as an internal or external command,
operable program or batch file.
'cat' is not recognized as an internal or external command,
operable program or batch file.
nfs: commencing msieve filtering
read 10M relations
nfs: commencing msieve linear algebra
linear algebra completed 273702 of 569849 dimensions (48.0%, ETA 0h18m)
linear algebra completed 122005 of 569849 dimensions (21.4%, ETA 0h27m)
linear algebra completed 131094 of 569849 dimensions (23.0%, ETA 0h27m)
linear algebra completed 17718 of 569849 dimensions (3.1%, ETA 0h34m)
linear algebra completed 633530 of 569849 dimensions (111.2%, ETA 4572h16m)
linear algebra completed 282908 of 569849 dimensions (49.6%, ETA 0h18m)
linear algebra completed 33965 of 569849 dimensions (6.0%, ETA 0h36m)
[/code]

Is something like this normal? It's still running but I don't know if it's for real or in an error loop.

bsquared 2011-04-27 23:02

[QUOTE=mnh001;259769]Discouraging indeed! :) No, my guess of a week was based on doing a 118 digit number in about 2.5 days with 2 threads. So that leaves the question then of what program to use when numbers get larger? Are there any compiled for windows?

On another note, as I write I'm factoring a 117 digit number and I thought it was getting near the end but here's what I see on the screen:

[code]
...
nfs: commencing msieve filtering
nfs: commencing lattice sieving over range: 3550000 - 3630000
nfs: commencing lattice sieving over range: 3470000 - 3550000
Warning: lowering FB_bound to 3549999.
Warning: lowering FB_bound to 3469999.
total yield: 659588, q=3550007 (0.00944 sec/rel)
total yield: 681816, q=3630019 (0.00965 sec/rel)
'cat' is not recognized as an internal or external command,
operable program or batch file.
'cat' is not recognized as an internal or external command,
operable program or batch file.
nfs: commencing msieve filtering
read 10M relations
nfs: commencing msieve linear algebra
linear algebra completed 273702 of 569849 dimensions (48.0%, ETA 0h18m)
linear algebra completed 122005 of 569849 dimensions (21.4%, ETA 0h27m)
linear algebra completed 131094 of 569849 dimensions (23.0%, ETA 0h27m)
linear algebra completed 17718 of 569849 dimensions (3.1%, ETA 0h34m)
linear algebra completed 633530 of 569849 dimensions (111.2%, ETA 4572h16m)
linear algebra completed 282908 of 569849 dimensions (49.6%, ETA 0h18m)
linear algebra completed 33965 of 569849 dimensions (6.0%, ETA 0h36m)
[/code]

Is something like this normal? It's still running but I don't know if it's for real or in an error loop.[/QUOTE]

Ah, we're talking apples and oranges. I assumed you were talking about quadratic sieve factorizations, not NFS. Disregard my previous post.

There isn't any built-in limit to the size of NFS jobs. Time approximately doubles with every additional 6 digits, if I recall the rule of thumb correctly.

I'm not sure about the linear algebra problem you posted. I'll have to get back to you on that...

mnh001 2011-04-28 22:12

I let it run for another half dozen loops, then got tired of it. So I ctrl-C'ed it, deleted the yafu-created files, and tried again. It finished up today just fine. :) Maybe I should wipe out the created files every time to avoid glitches like this in the future.

Batalov 2011-04-29 07:53

[QUOTE=bsquared;259773]...Time approximately doubles every additional 6 digits, if I recall the rule of thumb correctly.[/QUOTE]
Every 5 digits at larger sizes (and every 9 digits [URL="http://homepage2.nifty.com/m_kamada/math/snfs_time.png"]for a SNFS job[/URL]); at lower sizes it can indeed be closer to every 6 digits. M. Kamada's [URL="http://homepage2.nifty.com/m_kamada/math/gnfs_time.png"]statistics[/URL]* suggest roughly 6, but observe that the [URL="http://homepage2.nifty.com/m_kamada/math/gnfs_time.png"]theoretical (L(1/3,c)) curve[/URL] loses its slope as it leaves the graph area; the 140-180-digit GNFS range is better fit by K*10^(d/16.8), which means time doubles every 16.8*[I]lg[/I]2 ~= 5 digits.

_____
*These stats are not normalized for differences in the contributors' computers
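The arithmetic behind that last conversion, as a quick sketch:

```python
import math

# If GNFS time grows like K * 10**(d / 16.8) in the digit count d,
# then the time doubles every 16.8 * log10(2) digits.
doubling = 16.8 * math.log10(2)
print(round(doubling, 2))  # ~5.06, i.e. "every 5 digits"

# By contrast, doubling every 6 digits would correspond to a
# growth law of roughly K * 10**(d / 19.9):
print(round(6 / math.log10(2), 1))  # ~19.9
```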

bsquared 2011-04-29 12:12

Thanks Batalov.

As for the linear algebra runaway - I know I've seen similar things posted in the msieve forum. I haven't searched for the thread, but I think the solution was either to restart from filtering, or to sieve a bit more and then restart filtering. Redoing the entire job probably wasn't necessary - but I understand that was probably the fastest solution in terms of either "hands on" involvement or wall clock time (i.e. waiting for me to respond :))

Manually deleting files between jobs shouldn't be necessary either - but shouldn't hurt anything as long as you're careful about what you delete.

