
mersenneforum.org (https://www.mersenneforum.org/index.php)
-   Factoring (https://www.mersenneforum.org/forumdisplay.php?f=19)
-   -   Pascal's OPN roadblock files (https://www.mersenneforum.org/showthread.php?t=19066)

ThomRuley 2014-01-10 01:25

Pascal's OPN roadblock files
 
While I'm on the topic, is anybody else working on any of Pascal Ochem's 500-digit roadblocks? If not, I'm willing to go ahead and finish them off. If anybody else is interested, just let me know and we can divide and conquer.

RichD 2014-01-10 01:53

I am no longer working on the t500.txt file.

[LEFT]I have several queued up to be worked from "A list of the most difficult composites for the proof of N > 10^2000."
[CODE]9059 47 106105986521...
9157 47 157403461913...
7541 53 423021573075...
7549 53 447000133868...
9649 47 193299667109...
9677 47 220860967177...[/CODE]Only the first is currently active.
[/LEFT]

ThomRuley 2014-01-10 02:45

That's cool, Rich. Have you worked on anything in the t500 file since October?

RichD 2014-01-10 02:58

I did a few last week or so which were posted in the Gratuitous OPN factors thread. That's been it for nearly a year, I would guess.

Check FDB before working any number. I always post my results there first.

wombatman 2014-01-15 04:10

Is there any way to tell (or an assumption that can be made) how much ECM work has been done on a composite in the roadblock list? Don't want to waste a bunch of time doing ECM that's already been covered thoroughly.

Pascal Ochem 2014-01-15 23:31

I do not keep track of the ECM work done. We can safely assume that the 37755 composites in t1600 have not been ECMed to the 40 digit level. You can work on a bunch of them, e.g. between lines 6200 and 6700, and get some factors.

wombatman 2014-01-16 00:27

Thanks Pascal!

RichD 2014-01-16 16:38

Taking two more from "A list of the most difficult ..."

[CODE]10597 46 14403099607...
13009 42 62846017262...[/CODE]

RichD 2014-01-21 14:56

I'll queue these up next.
Should keep me busy for a while.

[CODE]1702903 22 121949012462...
22907 36 90965084897105...
499500151 18 3746633656...
23203 36 14441348835314...
23773 36 34597277246805...
23911 36 42612477375697...
12059023 22 61498815720...
24181 36 63839755395122...
24197 36 65378180537538...[/CODE]

RichD 2014-01-25 15:53

Grabbing a few more.

[CODE]12953 42 524310894478546...
1395523 28 1128850090956...
17257 42 896535992125814...
17737 42 283786321699231...
17807 42 334839168720218...
17989 42 513238300421923...
18149 42 744453124528549...[/CODE]

RichD 2014-02-04 21:12

In the coming weeks/months I will continue working on all the numbers below C200 in the list of most difficult numbers.

RichD 2014-02-05 21:42

[QUOTE=RichD;366144]In the coming weeks/months I will continue working on all the numbers below C200 in the list of most difficult numbers.[/QUOTE]
Wow, a lot more in the C190s. All the other "ten" ranges only had 4 or 5 composites. This range still has 21 remaining. I've generated all the SNFS polynomials. Most are Sigma(n^46), which is best suited to degree-6.

PM me if you would like one or more to work on and I will send you the poly file. Each will take 5-6 days on a dedicated Core-i5. Just start factMsieve with the poly file and let it run. Let me know if anyone is interested, or you can pick one from the file itself. Just let me know which you choose so we don't duplicate work.

They range from SNFS-192 to SNFS-204 difficulty.
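
Roughly where such a sextic comes from, as a sketch (the function name sextic_for_sigma46 is only for illustration, not anything in the actual poly-generation scripts): a composite N dividing Sigma(p^46) = (p^47-1)/(p-1) satisfies p^47 ≡ 1 (mod N), so p^48 ≡ p (mod N) and m = p^8 is a root of x^6 - p, with difficulty about 48*log10(p) -- which is where the SNFS-192 to 204 range comes from.

[CODE]# Illustration only, not the actual poly file generator:
# N divides Sigma(p^46) = (p^47 - 1)/(p - 1), so p^47 = 1 (mod N),
# hence p^48 = p (mod N) and m = p^8 mod N is a root of x^6 - p.
from math import log10

def sextic_for_sigma46(p, N):
    m = pow(p, 8, N)                       # common root of both sides mod N
    assert (pow(m, 6, N) - p) % N == 0     # sanity check: m^6 = p (mod N)
    return {"c6": 1, "c0": -p,             # algebraic side: x^6 - p
            "Y1": -1, "Y0": m,             # rational side: x - m
            "skew": p ** (1.0 / 6),        # rough skew for a two-term sextic
            "difficulty": 48 * log10(p)}   # ~192..204 for the primes in this range
[/CODE]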

chris2be8 2014-02-11 16:35

Will the "most difficult" composites need ECM running against them before SNFS? I can do it either way, but needs to be allowed for.

Chris

RichD 2014-02-11 16:52

[QUOTE=chris2be8;366667]Will the "most difficult" composites need ECM running against them before SNFS? I can do it either way, but it needs to be allowed for.[/QUOTE]

In my last correspondence with Pascal, he said a decent number of curves have been run but he doesn't keep track. So far, I haven't had an "ECM miss".

I will stop before getting to a C200. Though a C198 may be an SNFS-204 difficulty.

ThomRuley 2014-02-12 01:10

Could I talk you into making a few polys from the t500 file? Generating mine takes forever, but the actual NFS seems to move along reasonably.

wombatman 2014-02-12 02:31

Thom, which ones from the t500 file are you looking at? I've cleared a few from there (and am working my way down the list).

ThomRuley 2014-02-12 14:51

I'll take these for now:

6115909044841454629 12
321381569252585866953628783126948367071187906389518216907098417372109834635071531 2
9235379541700294893241592533312191410548921 4
18855337777262169791 10
42223362249943191270479011547514088211349705097809873373648793992949125900561049993 2
33579221106357041 12

Let me know if anyone else is already working on these.

Thanks

Thom

wombatman 2014-02-12 16:15

I'll have to double check when I get home, but I believe I'm currently on:

53041017196666952234619819994982127672443220243418249007541741030212106804476059 2

No idea if anybody else is working on the file, but I'll take those off my batch list if you want them. Roughly how long will each of them take, do you think?

ThomRuley 2014-02-12 20:22

[QUOTE=wombatman;366758]I'll have to double check when I get home, but I believe I'm currently on:

53041017196666952234619819994982127672443220243418249007541741030212106804476059 2

No idea if anybody else is working on the file, but I'll take those off my batch list if you want them. Roughly how long will each of them take, do you think?[/QUOTE]

No problem, I'll just take that one off my list. This is why we need to communicate with each other about composites. Thanks

wombatman 2014-02-13 03:24

Well, I'm getting forgetful already, but I'm actually working on:

20241187^29-1
C151

It should be finished by Friday evening.

RichD 2014-02-13 04:25

I can't seem to locate the wiki page on degree halving but I will post two polys from the t500.txt file if anyone is interested.

[CODE]82932783659784864101 10

n: 14668426862825327643478642613752190334921815625710231530196536758731353111989670069823132151031683951702388087602570541194382925113356037064684273416829280380908509371047337
# 82932783659784864101^11-1, difficulty: 199.19, skewness: 1.00, alpha: 2.22
# cost: 1.3144e+17, est. time: 62.59 GHz days (not accurate yet!)
skew: 1.000
c5: 1
c4: 1
c3: -4
c2: -3
c1: 3
c0: 1
Y1: -82932783659784864101
Y0: 6877846605560679357661513062775038538202
m: 6638302919148143756548444023751343110512646774028322539454017523519024748726117291987933565320356252394470990730374728987385239556102040293564160261777386744194135007509561
type: snfs


1949751789915161 12

n: 178582760001354166553578106439433291393431180054089262671541219332034706214132826774027843833802708445291048587466523534490409654996494041174800154595618546224102815160715367622393
# 1949751789915161^13-1, difficulty: 183.48, skewness: 1.00, alpha: 3.10
# cost: 3.21513e+16, est. time: 15.31 GHz days (not accurate yet!)
skew: 1.000
c6: 1
c5: 1
c4: -5
c3: -4
c2: 6
c1: 3
c0: -1
Y1: -1949751789915161
Y0: 3801532042277374115783577655922
m: 178582759999806160677961385995543625559262607541818145383600310736688148336500964108316190094531539174959912589299553133029586112864519785072165516315129152499838566672770974464982
type: snfs[/CODE]

wombatman 2014-02-13 04:31

I'll tackle that second one (1949751789915161^13-1). How did you generate those SNFS polys?

Edit: I must be doing something wrong. When I use that generated poly, I get really huge sec/rel times (like 2 to 5 sec/rel). What am I forgetting? This is with the 64-bit GGNFS sievers.

wombatman 2014-02-13 06:07

And nevermind. I just had some parameters poorly chosen. Letting factmsieve.py choose fixed it up. I've got it running overnight on 6 threads of an i7 laptop. Let's see how far it gets.

RichD 2014-02-13 15:18

Alex (akruppa) has a nice write up on this technique in the forum. I'll see if I can find it later today or tomorrow. (Or if someone can find/post it sooner.)

In the meantime, I have polys for the other candidates from the t500.txt file. Notice the last one is a very large job.

[CODE]7176374761323733117 10

n: 4333672608561400392468435371252369910952175743673535954847528438755330426378888688344890271646226690658741722175084519329620055692562557835697404456447191
# 7176374761323733117^11-1, difficulty: 188.56, skewness: 1.00, alpha: 2.22
# cost: 5.10808e+16, est. time: 24.32 GHz days (not accurate yet!)
skew: 1.000
c5: 1
c4: 1
c3: -4
c2: -3
c1: 3
c0: 1
Y1: -7176374761323733117
Y0: 51500354714964267461382123205042535690
m: 4175449660431940680801462398720637573454718435410724619429434647286979789924092256110020212144851936807023998921154920859727195748106030811008026111736877
type: snfs


9460375336977361 12

n: 209975551157110378892620870103389947805798162009922409120709010153235407321896287264941665911761329863093997764496737809571093731283681478508421978895251769232338342844812999720816439
# 9460375336977361^13-1, difficulty: 191.71, skewness: 1.00, alpha: 3.10
# cost: 6.78261e+16, est. time: 32.30 GHz days (not accurate yet!)
skew: 1.000
c6: 1
c5: 1
c4: -5
c3: -4
c2: 6
c1: 3
c0: -1
Y1: -9460375336977361
Y0: 89498701516489516694491826524322
m: 209975496833072993031777094914692671678971021978566096082083728449322701929753343409485989657280964509270796562729338195171925885593050927551302334398083641662216471838881292896136028
type: snfs


9791642389174771 12

n: 364549208027099682752073528858182603803105960625100051927457251599935464619258371331871971280959126288643305544010164402276270051695066401730376227076800071989921006596834646301869351
# 9791642389174771^13-1, difficulty: 191.89, skewness: 1.00, alpha: 3.10
# cost: 6.89236e+16, est. time: 32.82 GHz days (not accurate yet!)
skew: 1.000
c6: 1
c5: 1
c4: -5
c3: -4
c2: 6
c1: 3
c0: -1
Y1: -9791642389174771
Y0: 95876260677484217584966382902442
m: 364549128701935172751610082076843178604422196867651351870618289633860590401979379495829190274537190161844642056498627741188970435031995718322882071308704278720781242353917087003516290
type: snfs


5081095716541357 12

n: 296134164124765620221178183504283937229641183267132524994276310350894537114341880478369348235566913164345528919663747110052527195669224456723544074276705430456760621790360950150531715966101
# 5081095716541357^13-1, difficulty: 188.47, skewness: 1.00, alpha: 3.10
# cost: 5.0678e+16, est. time: 24.13 GHz days (not accurate yet!)
skew: 1.000
c6: 1
c5: 1
c4: -5
c3: -4
c2: 6
c1: 3
c0: -1
Y1: -5081095716541357
Y0: 25817533680654926123346291401450
m: 296134164124765561939622266202903985335883085507537392204547512250277181744893366581529666027889910897764562852524284410286982854762316363995370278766687775412815047547255078553216770920158
type: snfs


5079304643216687969 12

n: 13227926040671587259710837611313899966904825456092440142374298131942234472692779854055273389506354271426710015812106476351183882963368073614494325473781371808578738391512293610688508221977940908013893290690463947815551
# 5079304643216687969^13-1, difficulty: 224.47, skewness: 1.00, alpha: 3.10
# cost: 1.10965e+18, est. time: 528.41 GHz days (not accurate yet!)
skew: 1.000
c6: 1
c5: 1
c4: -5
c3: -4
c2: 6
c1: 3
c0: -1
Y1: -5079304643216687969
Y0: 25799335658602605863094833809909344962
m: 13227926040613531291022054517611440014488409954166387411585758597925934073636883768118050976671464881310131855093827946354434775202072017634986132886020602419884754213498933879730975150215345857400071630616934067367700
type: snfs[/CODE]

RichD 2014-02-14 20:04

[QUOTE=RichD;366241]Wow, a lot more in the C190s. All the other "ten" ranges only had 4 or 5 composites. This range still has 21 remaining. I've generated all the SNFS polynomials.[/QUOTE]

I've noticed the numbers are removed from the file within hours of posting them to factordb.com, so I quit posting them to the Gratuitous thread. I thought I was down to 13 remaining (below C200) but after careful inspection, there are 24 remaining in the file. (C153-C199)

There appears to be an automated process to query FDB, remove what's been found, and generate more limbs to work. I guess this will be a never-ending process. :smile:

P.S. This refers to "A [URL="http://www.lirmm.fr/%7Eochem/opn/mwrb2000.txt"]list[/URL] of the most difficult composites for the proof of N > 10^2000"
I don't know about the other (tXXX.txt) files.

RichD 2014-02-14 21:29

Powers of 11 or 13 to make a quintic or sextic
 
[QUOTE=RichD;366842]Alex (akruppa) has a nice write up on this technique in the forum.[/QUOTE]

Finally found it in this [URL="http://mersenneforum.org/showpost.php?p=54606&postcount=39"]post[/URL], section d.

Or if you are too lazy (like me) you can use his [URL="http://code.ohloh.net/file?fid=5fIIIhXZsMPwHuleD1ZAvwv_eWw&cid=RWzWt0M6m7w&s=&fp=393190&mp&projSelected=true#L0"]phi[/URL] program to generate the polynomials.
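
For the curious, this is exactly what the substitution does for the quintics above: for N dividing (a^11-1)/(a-1), divide Phi_11(a) by a^5 and set x = a + 1/a, which gives x^5 + x^4 - 4x^3 - 3x^2 + 3x + 1 with the shared root (a^2+1)/a (hence the Y1 = -a, Y0 = a^2+1 lines in the polys posted earlier). A quick sympy check, just as a sketch (this is not the phi program itself):

[CODE]# Verification sketch with sympy (not akruppa's phi program):
# a^5 * f(a + 1/a) should equal Phi_11(a) = (a^11 - 1)/(a - 1).
from sympy import symbols, expand, simplify

a, x = symbols('a x')
phi11 = sum(a**k for k in range(11))
quintic = x**5 + x**4 - 4*x**3 - 3*x**2 + 3*x + 1   # c5..c0 = 1,1,-4,-3,3,1 above

lhs = expand(quintic.subs(x, a + 1/a) * a**5)
print(simplify(lhs - phi11))   # prints 0, so the quintic is correct
[/CODE]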

RichD 2014-02-14 21:58

Also Φ3(Φ3(n)) can use the polynomial x^4 + 2*x^3 + 4*x^2 + 3*x + 3, x = n.
(Likewise, Φ3(Φ3(n)/a) with small a.)
As stated [URL="http://mersenneforum.org/showpost.php?p=251364&postcount=507"]here[/URL] and [URL="http://mersenneforum.org/showpost.php?p=252678&postcount=519"]here[/URL], but these are hard to come by.
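
A one-line sympy check of that quartic, for anyone who wants to see it expand (sketch only):

[CODE]# Phi_3(x) = x^2 + x + 1, so Phi_3(Phi_3(n)) should expand to the quartic above.
from sympy import symbols, expand

n = symbols('n')
phi3 = lambda t: t**2 + t + 1
print(expand(phi3(phi3(n))))   # n**4 + 2*n**3 + 4*n**2 + 3*n + 3
[/CODE]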

wombatman 2014-02-14 22:02

Thanks for digging those up. Now I have to figure out why I'm making phi crash. I'm trying to get the same poly you did for
[CODE]7176374761323733117 10[/CODE]

I used
[CODE] phi 11 7176374761323733117 (cofactor)[/CODE]

Am I on the right track?


Edit: And I got it working. Just have to put everything in properly. I'll read through the post and try to understand exactly what I'm doing with the phi program as well.

wombatman 2014-02-15 15:30

Grabbing 7176374761323733117 10

wombatman 2014-02-15 18:16

Also grabbing 9460375336977361 12

wombatman 2014-02-17 04:51

Running 9791642389174771 12

wombatman 2014-02-17 16:14

Grabbing 5081095716541357 12

wombatman 2014-02-19 19:33

Running 329473262366294657316493043899400715093065093 4

wombatman 2014-02-22 05:53

Running 297903607^23-1

wombatman 2014-02-28 04:58

Running 8970971^29-1

Having some trouble with the SNFS poly. FactorDB gives:
[CODE]n: 4288882318725178503864985939002570343870783101076222294692132636882637343988982346014447015553752612504342250262840696959794627098440959386503033787331436025332337203640782929696973892811770840659915530
m: 58102827030430867738060703578743851
deg: 5
skew: 0
type: snfs
c5: 6476760099930193480511831281
c0: -1
rlim: 18610400
alim: 18610400
lpbr: 29
lpba: 29
mfbr: 58
mfba: 58
rlambda: 2.7
alambda: 2.7[/CODE]

but GGNFS doesn't like the skew of 0.

Phi gives:

[CODE]n: 2580482901793422593005519906534751048235270635043096719781698476519957422481996978800942266020020022345900390998498545851996437063629641295043489044129593444543441973751813443723
# 8970971^29-1, difficulty: 208.59, skewness: 24.58, alpha: 0.00
# cost: 2.95737e+017, est. time: 140.83 GHz days (not accurate yet!)
skew: 24.579
c5: 1
c0: -8970971
Y1: -1
Y0: 521238776308011431982978168044507303749321
m: 521238776308011431982978168044507303749321
type: snfs[/CODE]

This one works but seems to give low relations. Any thoughts?

axn 2014-02-28 07:44

For the first one, the correct skew would be 0.00000274.

For the second one, the larger rational side coefficient implies that you should use larger rational side parameters.
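
For reference, both skew values follow the usual rule of thumb for a two-term poly, skew ≈ (|c0|/|c_deg|)^(1/deg). A small sketch (snfs_skew is just an illustrative name, and not necessarily how axn computed it):

[CODE]# Rule-of-thumb skew for a two-term SNFS polynomial (sketch only).
def snfs_skew(c_deg, c0, deg):
    return (abs(c0) / abs(c_deg)) ** (1.0 / deg)

print(snfs_skew(6476760099930193480511831281, -1, 5))  # ~2.74e-06, i.e. 0.00000274 (factordb poly)
print(snfs_skew(1, -8970971, 5))                        # ~24.58, matching the phi poly above
[/CODE]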

wombatman 2014-02-28 14:12

Much obliged for the response.

On the 1st one, with the skew set appropriately, GGNFS still gives:

[CODE]gnfs-lasieve4I14e (with asm64): L1_BITS=15, SVN $Revision$
Please set a skewness[/CODE]

For the second, factmsieve sets the parameters as follows:

[CODE]rlim: 21300000
alim: 21300000
lpbr: 29
lpba: 29
[/CODE]

I'm still slowly learning both the theory and the practical applications here, so I'm not too good at determining whether these are set properly.

chris2be8 2014-02-28 16:31

Don't bother with the poly provided by factordb. c5: 6476760099930193480511831281 is ridiculous for snfs. In general the smaller the coefficients are the better it will sieve.

phi generated a reasonable poly. How many relations per special Q does it give? A rule of thumb (originally from Fivemack) is that if you are getting less than 2 relations per Q you should go to a larger siever or raise LPB[AR] and/or MFB[AR].

There was a "ggnfs pearls of wisdom" thread to collect such advice. It's worth reading.

Chris

wombatman 2014-02-28 16:52

I'd have to check to get an exact number, but it was something like 1.5 relations/Q or so, which seemed really low to me. I'll try to track down that wisdom thread--I can definitely use any of that I can find!

henryzz 2014-02-28 19:54

The other problem with factordb polys is that it gives you the whole number to factor even if it has very small factors. msieve will find those factors and complain at you.

chris2be8 2014-03-01 16:43

[QUOTE=henryzz;367995]The other problem with factordb polys is that it gives you the whole number to factor even if it has very small factors. msieve will find those factors and complain at you.[/QUOTE]

I think using msieve compiled without ECM will stop it finding small factors when you don't want it to. But it's better to remove the small factors first. It might be useful for SNFS around 85 digits.

Chris

chris2be8 2014-03-01 16:53

[QUOTE=wombatman;367986]I'd have to check to get an exact number, but it was something like 1.5 relations/Q or so, which seemed really low to me. I'll try and track that wisdom thread--I can definitely use any of that I can find![/QUOTE]

In that case you would be better off raising LPBR and LPBA to 30 (and MFB[AR] to 60). That should double the yield, but it will nearly double the number of relations you need to collect. Raising just LPBA and LPBR would raise yield and relations needed a bit less.

In practice the job would still work with a yield around 1.5 per Q. It would take a little longer than with better parameters though (I've run a few like that by mistake).

Chris

henryzz 2014-03-01 21:21

[QUOTE=chris2be8;368052]I think using msieve compiled without ECM will stop it finding small factors when you don't want it to. But it's better to remove the small factors first. It might be useful for SNFS around 85 digits.

Chris[/QUOTE]

It will still do trial division I think.

wombatman 2014-03-03 20:57

I upped LPBR to 30 (totally arbitrary choice), and the yield went from ~1.5 to ~2 relations/Q, so that did indeed help. It still hasn't finished gathering relations yet, but we'll see how it does with the matrix building.

[QUOTE=chris2be8;368054]In that case you would be better off raising LPBR and LPBA to 30 (and MFB[AR] to 60). That should double the yield, but it will nearly double the number of relations you need to collect. Raising just LPBA and LPBR would raise yield and relations needed a bit less.

In practice the job would still work with a yield around 1.5 per Q. It would take a little longer than with better parameters though (I've run a few like that by mistake).

Chris[/QUOTE]

wombatman 2014-03-04 15:59

This is what I was worried about. It got to ~40 million relations and I ended up with this:

[CODE]found 5162005 hash collisions in 40464281 relations
added 23 free relations
commencing duplicate removal, pass 2
found 4490571 duplicates and 35973733 unique relations
memory use: 197.2 MB
reading ideals above 720000
commencing singleton removal, initial pass
memory use: 1378.0 MB
reading all ideals from disk
memory use: 1282.0 MB
keeping 45221428 ideals with weight <= 200, target excess is 191371
commencing in-memory singleton removal
begin with 35973733 relations and 45221428 unique ideals
reduce to 4321 relations and 0 ideals in 17 passes[/CODE]

I assume I need to bump up the LPBA and add a sizable number of relations, yes?

LaurV 2014-03-04 16:07

If you reduced them to ashes, then it might be over-sieved? Try again without a few thousand lines; it may help. I remember seeing a discussion about this in the past. It never happened to me, however, so I am not sure.

chris2be8 2014-03-04 16:33

To judge by the last line:
[code] reduce to 4321 relations and 0 ideals in 17 passes [/code] You need about 30% more relations.

Chris

axn 2014-03-04 17:16

[QUOTE=wombatman;368306]This is what I was worried about. It got to ~40 million relations and I end up with this[/QUOTE]
With a 29/30 combination, you'd need about 55M unique relations. Right now you have 35M. Add another 20-25M relations before retrying for a matrix. Keep adding 5M relations until you succeed.

wombatman 2014-03-04 17:29

Thanks everybody. I'll report back with any new results.

wombatman 2014-03-05 16:29

Hasn't completed yet, but it looks much better with more relations:

[CODE]reduce to 13949767 relations and 14732971 ideals in 29 passes
max relations containing the same ideal: 96
filtering wants 1000000 more relations[/CODE]

Thanks for the help everybody.

swellman 2014-03-05 17:29

[url=http://homepage2.nifty.com/m_kamada/math/graphs.htm]This site[/url] has some good info on parameter selection.

There is a new thread in the Math forum about estimating the time to run GNFS that you might find interesting too.

Try out Yafu too.

wombatman 2014-03-06 18:09

Thanks for pointing those out. [STRIKE]A quick question on the Kamada graphs--for determining parameters, do you use the SNFS difficulty or the actual number of digits, at least for initial testing?[/STRIKE] [I]Edit: Nevermind, I was being dumb. It looks like it goes with the difficulty, or at least Factmsieve does.[/I]

Also, the number I was working on ended up needing between 52M and 55M (I set minrels to 55M and it worked--at 52M, it needed more).

wombatman 2014-03-14 13:43

Currently working on an SNFS 206. Now at over 65M relations (54.5M unique) and getting:

[CODE]begin with 14655138 relations and 16282704 unique ideals
reduce to 13932177 relations and 15555212 ideals in 24 passes
max relations containing the same ideal: 184[/CODE]

Does this seem right?

henryzz 2014-03-14 14:21

I assume that isn't the first pass of singleton removal. You are close but aren't quite there yet.

wombatman 2014-03-14 14:48

Good deal. That was from the "in-memory singleton removal" step, so yes, I believe you're correct.

henryzz 2014-03-14 16:47

[QUOTE=wombatman;368959]Good deal. That was from the "in-memory singleton removal" step, so yes, I believe you're correct.[/QUOTE]

I presume you don't have enough memory to run in-memory for all the relations then.

wombatman 2014-03-14 18:05

I have 16GB. Here's a more recent (and full) text:

[CODE]commencing relation filtering
estimated available RAM is 16305.1 MB
commencing duplicate removal, pass 1
read 10M relations
read 20M relations
read 30M relations
read 40M relations
read 50M relations
read 60M relations
found 11768832 hash collisions in 66546671 relations
commencing duplicate removal, pass 2
found 10711860 duplicates and 55834811 unique relations
memory use: 330.4 MB
reading ideals above 720000
commencing singleton removal, initial pass
memory use: 1506.0 MB
reading all ideals from disk
memory use: 2020.6 MB
keeping 64682070 ideals with weight <= 200, target excess is 315009
commencing in-memory singleton removal
begin with 55834811 relations and 64682070 unique ideals
reduce to 15017164 relations and 16253912 ideals in 29 passes
max relations containing the same ideal: 83
filtering wants 1000000 more relations
elapsed time 00:15:19[/CODE]

VBCurtis 2014-03-15 00:17

[QUOTE=wombatman;368956]Currently working on an SNFS 206. Now at over 65M relations (54.5M unique) and getting:

[CODE]begin with 14655138 relations and 16282704 unique ideals
reduce to 13932177 relations and 15555212 ideals in 24 passes
max relations containing the same ideal: 184[/CODE]

Does this seem right?[/QUOTE]

I would run SNFS-206 as a 29-bit project, which would need 40-44M relations. Did you run 30/30 bit? Or 29/30?

wombatman 2014-03-15 01:45

Started at 29/29, but relations per Q was something like 1.2, so I upped it to 29/30, which only got the ratio to ~1.4. I finally upped to 30/30, which got the relations to ~1.6-1.8.

axn 2014-03-15 05:47

[QUOTE=wombatman;368974]
[CODE]begin with 55834811 relations and 64682070 unique ideals
reduce to 15017164 relations and 16253912 ideals in 29 passes
max relations containing the same ideal: 83
filtering wants 1000000 more relations
elapsed time 00:15:19[/CODE][/QUOTE]

You probably need another 4m relations (+/-)

wombatman 2014-03-15 20:24

I hope that + isn't too much :wink:

[CODE]found 13656903 hash collisions in 73722283 relations
commencing duplicate removal, pass 2
found 12450896 duplicates and 61271387 unique relations
memory use: 362.4 MB
reading ideals above 720000
commencing singleton removal, initial pass
memory use: 1506.0 MB
reading all ideals from disk
memory use: 2219.1 MB
keeping 67635537 ideals with weight <= 200, target excess is 348691
commencing in-memory singleton removal
begin with 61271387 relations and 67635537 unique ideals
reduce to 21692607 relations and 21441590 ideals in 21 passes
max relations containing the same ideal: 100
filtering wants 1000000 more relations
elapsed time 00:17:11[/CODE]

wombatman 2014-03-15 23:31

And finally,

[CODE]found 13875110 hash collisions in 74533091 relations
commencing duplicate removal, pass 2
found 12652410 duplicates and 61880681 unique relations
memory use: 362.4 MB
reading ideals above 720000
commencing singleton removal, initial pass
memory use: 1506.0 MB
reading all ideals from disk
memory use: 2241.4 MB
keeping 67949229 ideals with weight <= 200, target excess is 352493
commencing in-memory singleton removal
begin with 61880681 relations and 67949229 unique ideals
reduce to 22429981 relations and 21984408 ideals in 21 passes
max relations containing the same ideal: 102
removing 409145 relations and 390804 ideals in 18341 cliques
commencing in-memory singleton removal
begin with 22020836 relations and 21984408 unique ideals
reduce to 22013981 relations and 21586738 ideals in 9 passes
max relations containing the same ideal: 100
removing 296550 relations and 278209 ideals in 18341 cliques
commencing in-memory singleton removal
begin with 21717431 relations and 21586738 unique ideals
reduce to 21713764 relations and 21304858 ideals in 8 passes
max relations containing the same ideal: 100
relations with 0 large ideals: 8154[/CODE]

Sheesh.

schickel 2014-03-15 23:39

Ooops....cross-posted, but for future reference:[QUOTE=wombatman;369032]I hope that + isn't too much :wink:[/quote]Keep an eye on these two numbers. The closer they get to convergence, the closer you are to being done.....[CODE]begin with [COLOR="Blue"]61271387[/COLOR] relations and [COLOR="blue"]67635537[/COLOR] unique ideals
reduce to [COLOR="blue"]21692607[/COLOR] relations and [COLOR="blue"]21441590[/COLOR] ideals in 21 passes
max relations containing the same ideal: 100
filtering wants 1000000 more relations
elapsed time 00:17:11[/CODE]If you compare this run to the last run you posted, you can see how the number of relations is getting closer to the number of ideals. (It depends on how effectively the relations combine; the first singleton removal usually lets you know whether it's going to succeed: a large surplus bodes well.)
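
If you would rather not eyeball the log, here is a small helper sketch (my own throwaway, not part of msieve) that pulls out the filtering lines in exactly the wording shown above and prints the relation/ideal excess so you can compare runs at a glance:

[CODE]# Helper sketch: extract "begin with ... relations and ... unique ideals" /
# "reduce to ... relations and ... ideals" lines from a saved filtering log.
import re, sys

pat = re.compile(r'(begin with|reduce to) (\d+) relations and (\d+) (?:unique )?ideals')
for line in open(sys.argv[1]):               # e.g. a saved msieve filtering log
    m = pat.search(line)
    if m:
        rels, ideals = int(m.group(2)), int(m.group(3))
        print('%-10s rels=%12d  ideals=%12d  excess=%+12d'
              % (m.group(1), rels, ideals, rels - ideals))
[/CODE]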

wombatman 2014-03-15 23:49

It's funny you should mention that--I actually did go back to one of my previous runs and check the logs. I noticed that it moved on with the conditions you mention, so I've been checking every now and then since the # of relations actually pulled past the # of unique ideals. Crazy stuff.

swellman 2014-03-16 01:35

[QUOTE=wombatman;369003]Started at 29/29, but relations per Q was something like 1.2, so I upped it to 29/30, which only got the ratio to ~1.4. I finally upped to 30/30, which got the relations to ~1.6-1.8.[/QUOTE]

For a 29/29 bit job, Yafu requires 45M relations before it will even attempt filtering, and 91M for a 30/30 bit job. NFS@Home [url=http://escatter11.fullerton.edu/nfs/crunching.php]requires even more[/url].

Increasing bits doubles the speed and improves yield but requires double the number of relations.

You can also use a higher siever (e.g. 14e, 15e, etc) to increase yield but at a slower rate without changing the required number of relations. It's all a trade off.

wombatman 2014-03-16 02:51

Ah, I see. I knew from the last number I ran that increasing the bits would increase my relations needed, but I had it in my head that 30/30 would be more like 60-70M for some reason. Thanks for the NFS@Home link--that's very helpful going forward.

VBCurtis 2014-03-16 07:22

[QUOTE=wombatman;369066]Ah, I see. I knew from the last number I ran that increasing the bits would increase my relations needed, but I had it in my head that 30/30 would be more like 60-70M for some reason. Thanks for the NFS@Home link--that's very helpful going forward.[/QUOTE]

Note that NFS@Home sieves more because they have a shortage of matrix-solving power relative to sieving power. Their rels targets aren't optimal for individual projects - by oversieving, they can build smaller matrices.

I've run a bunch of 29-bit projects, GNFS 140-143 and SNFS 200-215. None have had more than 43.5M relations. I have added "target_density=75" to the factmsieve script and stayed within these rels numbers; I am not sure whether changing it to 80+ would lose more time in sieving than I gain in matrix solving.

I have not yet run a 30-bit project; that's coming this spring, so I'm closely following your experiences. I've had the impression that the relations needed less than double as we move up: 28-bit jobs take me ~22M relations, while 29-bit takes 42-43M. I've been hoping 30-bit would be 80-82M. Your run converging at 75M gives me hope that 80M will work at least some of the time!

wombatman 2014-03-16 18:46

I'm going to take a whack at 3^661-1 from the biggest roadblocks file.

chris2be8 2014-03-17 17:14

3^661-1 is a *big* job (about SNFS 316). Are you sure you have the resources to do it? (It probably needs NFS@HOME (or equivalent) for sieving and a cluster for LA.)

Sorry to be discouraging, but I don't want you to waste your time on something you will never be able to finish.

Chris

wombatman 2014-03-17 17:33

Heh. Yeah, I was mostly planning on working with some ECM. I certainly don't want to lay sole claim to it or anything.

RichD 2014-04-06 05:39

[QUOTE=RichD;366241]PM me if you would like one or more to work on and I will send you the poly file..[/QUOTE]

[B]chris2be8[/B] and I have knocked out all but a couple below C210. My latest was a C212 but Chris did an enormous amount. I still have a few polys for some not-so-small jobs. :smile:

chris2be8 2014-04-06 15:49

What coefficients should be used to factor a number of the form (a^7-1)/(a-1) as an inverted degree 3 SNFS polynomial? There are quite a few 80-90 digit numbers like that from OPN in factordb and degree 6 is not a good choice.

Chris

R.D. Silverman 2014-04-06 16:13

[QUOTE=chris2be8;370431]What coefficients should be used to factor a number of the form (a^7-1)/(a-1) as an inverted degree 3 SNFS polynomial? There are quite a few 80-90 digit numbers like that from OPN in factordb and degree 6 is not a good choice.

Chris[/QUOTE]

Only a complete moron would factor 80-digit composites with NFS.

jasonp 2014-04-06 18:24

By the time you get down to 80 digits, QS would only need a few minutes; just NFS postprocessing would take longer than that.

By the time you get to 90 digits, it's actually unclear whether QS or NFS with a degree-4 polynomial would finish faster. A degree 4 SNFS job would finish extremely quickly.

It's likely that multithreaded YAFU would run fastest of all at this size.

pinhodecarlos 2014-04-06 18:34

I prefer jasonp reply, it is more polite.

xilman 2014-04-06 18:35

[QUOTE=R.D. Silverman;370434]Only a complete moron would factor 80-digit composites with NFS.[/QUOTE]Or someone wanting to test their NFS software. QS would be faster than NFS (should be, anyway) at the c80 level, but if all you want is to increase your confidence that your code works, factoring a c80 with NFS (special or general according to circumstances) will be rather faster than factoring, say, a c120. This approach is used by the CADO-NFS team for instance. Not all of them are complete morons.

I'm well aware that the person to whom you responded is not in that category, nonetheless it generally pays not to make sweeping generalizations.

chris2be8 2014-04-07 15:49

On the systems I'm using SNFS is faster than QS from around 80 digits upwards (the first thing I checked was which way was faster). Even the inverted sextics from x^13-1 are faster when done by SNFS.

The speed advantage of SNFS over QS will increase as I get nearer to 100 digits. I think the crossover from degree 3 to degree 6 will be around 95-100 digits (I've factored a few numbers as degree 3 at 100 digits, that was about equally fast).

I'm using factMsieve.pl with the 64 bit lattice siever for SNFS and msieve's QS. I've not been able to get yafu working, mainly due to lack of time. Is yafu's QS that much faster than msieve's?

Chris

schickel 2014-04-08 04:39

[QUOTE=chris2be8;370475]I'm using factMsieve.pl with the 64 bit lattice siever for SNFS and msieve's QS. I've not been able to get yafu working, mainly due to lack of time. Is yafu's QS that much faster than msieve's?

Chris[/QUOTE]The speed advantage with yafu is the use of multiple cores during sieving. I [URL="http://www.mersenneforum.org/showpost.php?p=273700&postcount=869"]tested[/URL] a 6-core run on a c101 which clocked in @ 2715 seconds.

henryzz 2014-04-08 12:21

[QUOTE=schickel;370515]The speed advantage with yafu is the use of multiple cores during sieving. I [URL="http://www.mersenneforum.org/showpost.php?p=273700&postcount=869"]tested[/URL] a 6-core run on a c101 which clocked in @ 2715 seconds.[/QUOTE]
Although that's nice, it isn't just that. I just ran a 75-digit number with both: yafu took 90 seconds and msieve 145 seconds, so msieve takes about 1.6x the time at that size.
I don't know how that compares on a modern cpu. Haswell can do amazing things with less optimized code sometimes from what I have seen. I ran on an ancient Q6600.

prgamma10 2014-04-08 12:30

[QUOTE=henryzz;370541]Although that's nice it isn't just that. I just ran a 75 digit number with both. yafu took 90 seconds and msieve 145 seconds. msieve takes about 1.6x the time at that size.[/QUOTE]
Yafu on single core, or all cores?

henryzz 2014-04-08 12:52

[QUOTE=prgamma10;370542]Yafu on single core, or all cores?[/QUOTE]
[code]One core: 90 seconds : 90 cpusecs
Two cores: 51 seconds : 102 cpusecs
Three cores: 40 seconds : 120 cpusecs
Four cores: 31 seconds : 124 cpusecs[/code]

chris2be8 2014-04-08 15:41

Thanks for that, I'll try to get yafu working when I've some free time.

But a factor of 1.6 or so will not make QS beat SNFS at around 90 digits. So can anyone answer my original question of what coefficients I need for inverted degree 3 polynomials? I've tried searching the web, but can't find anything useful.

Chris

ChristianB 2014-04-08 17:49

I don't know which OS you prefer, but I found this guide for [URL="http://www.starreloaders.com/edhall/AliWin/AliqueitLinstall.html"]setting up aliqueit[/URL] very helpful. It covers compiling yafu with ECM support (Linux only). You can skip the software you don't need.

henryzz 2014-04-08 23:02

[QUOTE=chris2be8;370553]Thanks for that, I'll try to get yafu working when I've some free time.

But a factor of 1.6 or so will not make QS beat SNFS at around 90 digits. So can anyone answer my original question of what coefficients I need for inverted degree 3 polynomials? I've tried searching the web, but can't find anything useful.

Chris[/QUOTE]
Follow the procedure on [URL]http://www.mersennewiki.org/index.php/SNFS_Polynomial_Selection[/URL]
For a^7-1, I get f(x)=x^3+x^2-2x-1 and g(x)=a*x-(a^2+1)
I just factored 1000000000000013^7-1 using this poly. I got the skew from [URL]http://myfactors.mooo.com/[/URL]
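
A quick sanity check of that pair, as a sympy sketch (not the actual poly file): dividing (a^7-1)/(a-1) by a^3 and substituting x = a + 1/a = (a^2+1)/a gives exactly that cubic, and g vanishes at the same x.

[CODE]# Verification sketch with sympy for the inverted degree-3 construction above.
from sympy import symbols, expand, simplify

a, x = symbols('a x')
phi7 = sum(a**k for k in range(7))            # (a^7 - 1)/(a - 1)
f = x**3 + x**2 - 2*x - 1
g = a*x - (a**2 + 1)

root = (a**2 + 1) / a                          # = a + 1/a, the shared root
print(simplify(expand(f.subs(x, root) * a**3) - phi7))   # 0
print(simplify(g.subs(x, root)))                          # 0
[/CODE]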

chris2be8 2014-04-09 16:11

Thank you both. That should speed things up a bit.

Chris

chris2be8 2014-04-10 16:42

I've tested inverted degree 3 polynomials; it looks as if the crossover where they are faster than degree 6 is at about 93 digits of SNFS difficulty.

Chris

RichD 2014-07-04 01:54

I see the tXXX files have been updated (07/2014). I'll take some of the low hanging fruit by doing p^3-1 from the t800 and t1600 files since they will all need GNFS processing.

wblipp 2014-07-06 19:05

[QUOTE=RichD;377334]I see the tXXX files have been updated (07/2014)[/QUOTE]

The file of [URL="http://www.lirmm.fr/~ochem/opn/mwrb2000.txt"]Most Wanted numbers[/URL] has now been updated, too. These updates reflect the factors of 11^449-1 that Ryan recently found.

There is some low-hanging fruit in this file - fifteen composites less than 130 digits, including 4 less than 120 digits.

wombatman 2014-07-07 03:04

I'll take care of 51902477001822224434084561591462190550751637520303260282420205611^3-1 (C113).

henryzz 2014-07-07 10:37

I'll do:
12689^31-1
12983^31-1
6443551^19-1

Where is the best place to submit the factors? factordb is obvious.

wombatman 2014-07-07 13:08

The C113 goes as:

[CODE]prp45 factor: 366568255804085964886794017085477381984845437
prp68 factor: 78488837076915802898630130683697264758475627375740937245671650953671[/CODE]

These factors were submitted to factordb.
I'll go ahead and do 9521^37-1 as well.

wombatman 2014-07-07 15:28

9521^37-1 factors as:

[CODE]prp55 factor: 1017054082461436207597525814853731112584003944706347667
prp64 factor: 6514207494590829465785181272050397169217463238223959748659682991[/CODE]

wblipp 2014-07-07 22:07

[QUOTE=henryzz;377563]Where is the best place to submit the factors? factordb is obvious.[/QUOTE]

Yes, please put factors into factordb. A note here would also be appreciated, but Pascal and I will eventually find them in factordb regardless.

wombatman 2014-07-08 02:05

10433^37-1 (C122) factors as:

[CODE]prp36 factor: 593084879941205692273619178651798119
prp86 factor: 36910831783107216026274670945509105761029021134410150467948480712434026779779317038203[/CODE]

Wick 2014-07-08 07:16

46805021^19-1 = 30499344864086786836698716549 * 753595192350337178870896997 * 50563296385553932251868234756904563574052152569888257925320056216873565438160387663

henryzz 2014-07-08 08:23

[QUOTE=henryzz;377563]I'll do:
12689^31-1
12983^31-1
6443551^19-1

Where is the best place to submit the factors? factordb is obvious.[/QUOTE]
(12698^31-1)/12697
[code]3266329819 (p10)
651884005275268808717 (p21)
441200818335257316149585536149263 (p31)
1377930188659936550750212534578946747835445833506177646182567 (p61)[/code](12983^31-1)/949452447758
[code]230567152914311308472857708350237741380392599039203 (p51)
149401897941117507398326112335700688271385424902815387753712841559 (p66)[/code](6443551^19-1)/6443550
[code]68022201147911423362179526198129326534626683337786839 (p53)
5390192194706310024919484088580616973912574947727992318345058190936071 (p70)[/code]It looks like there are small factors still in there.

I'll do next:
578633548521408029271601481505799^5-1
66477673^17-1
73136069^17-1

edit: turns out the first two were already in the factordb

chris2be8 2014-07-08 16:07

@henryzz, you originally reserved 12689^31-1 but posted factors for 12698^31-1. The latter is not in the file (the base is not a prime number since it's even).

It looks as if the file now includes numbers with a lower weight than it used to. The smallest weight in the previous version was 16424; the smallest is now 5488.

@wblipp, how much ECM has been run against the numbers in the file?

I'll do:
38567^29-1
33941^31-1
40277^29-1
35401^31-1

I should have them all done by tomorrow.

Chris

