mersenneforum.org (https://www.mersenneforum.org/index.php)
-   NFS@Home (https://www.mersenneforum.org/forumdisplay.php?f=98)
-   -   Fast Breeding (guru management) (https://www.mersenneforum.org/showthread.php?t=20024)

VictordeHolland 2015-01-09 00:48

Fast Breeding (guru management)
 
These two Fibonacci numbers were suggested by Batalov a few months ago; they can still be sieved:

Fib(1405) C235
x^4+3x^2+1
x-L281
SNFS difficulty = 234.6

Fib(1415) C236
x^4+3x^2+1
x-L283
SNFS difficulty = 236.2

Both have survived enough ECM (17769 curves @ 110M by B.Kelly, i.e. a full t55).

frmky 2015-01-09 01:58

Those are both on my list of potential candidates. I just skipped them for now since I haven't looked at the parameters for a quartic in a while. As quartics, I think they are more appropriate for 15e.

frmky 2015-01-10 04:18

Looking at the list of remaining Fib/Luc numbers in range of 15e, after the current batch we have 1 quintic (L1397), 16 quartics (F1405, F1415, and 14 Lucas numbers from L2645A to L2855A/B), and 7 GNFS (F2145, F2163, F1585, L2037, L1502, L1583, and L1541). I'll need to look up those quartic parameters again soon. :smile:

fivemack 2015-01-16 23:49

A bit worried
 
Does
[code]
C170_5748_1537 Aliquot GNFS(170) 32 20-120 M 42.17 % 3371 2534 0 0 ... Tom Womack
[/code]

(lots of blocks pending and none completed)

mean I've got the polynomial file somehow wrong? Could someone competent have a look at it?

pinhodecarlos 2015-01-17 00:35

I see a lot of blocks completed, but I don't see the relations saved anywhere on the server. The C170_5748_1537 folder is empty.

RichD 2015-01-17 14:30

[QUOTE=fivemack;392647](lots of blocks pending and none completed)[/QUOTE]

Same thing with lasieved.

The top entry has been sitting at 201M completed and 32M est. pending for nearly a day.
The number of pending blocks dropped from over 500 to 300 with no change in the relations count.

debrouxl 2015-01-17 14:32

The update script probably went belly-up in a persistent way again. This occurs once in a while.

fivemack 2015-01-23 19:37

The statistics issue appears to have been resolved. I have added a few more Q to some of the 14e numbers, but GW_7_288 is ready for linear algebra now.

debrouxl 2015-01-27 09:54

Feeding the Beast (queue management)
 
The 14e queue is currently low, so I've queued a relatively easy (SNFS 233) near-repdigit number, though not started it yet.

Rich Dickerson recently suggested the 7103^61-1 and 7331^61-1 OP numbers, SNFS difficulty 235, on which at least 10K curves at B1=43e6 were performed. I'm hereby reserving both. It's been a while since we did OP numbers :smile:

We still have several GCW numbers suitable for 14e, don't we?

fivemack 2015-01-27 10:17

I'm not anticipating queuing anything for a while: my polynomial-selection resources are all devoted to 114!+1 and it will be a week or so before I have a polynomial. It looks as if yoyo@home has thoroughly ECMed several GCW numbers; I'll leave the choice and enqueuing of those up to xilman.

I have asked yoyo@home to do substantial ECM on the three XYYXF C17x cofactors with highest SNFS:GNFS difficulty ratio, and will feed those (at least one to 14e, probably two, possibly three) once the ECM is done and I've found a polynomial.

fivemack 2015-01-29 00:31

I've put the two mentioned OPN numbers on; it would be good if xilman or someone could put up some more GCW, otherwise the queue will drain quite fast.

I had a look at GC_11_239, but it's too big for 14e.

xilman 2015-01-29 10:28

[QUOTE=fivemack;393882]I've put the two mentioned OPN numbers on; it would be good if xilman or someone could put up some more GCW, otherwise the queue will drain quite fast.

I had a look at GC_11_239, but it's too big for 14e.[/QUOTE]Yeah, I know, I'm falling behind. Rob Hooft is also asking for more work.

Although I now have admin access to the NFS@Home site I'm a bit wary of putting stuff there and screwing up ...

pinhodecarlos 2015-01-29 10:48

Paul,

Look at how the queued work was inserted for the lasievee application and do the same for lasieved, I suppose.

Carlos

fivemack 2015-01-29 11:18

It's not that easy to break nfs@home
 
I too was very worried about screwing things up, but it's not as terrifying as it looks. If you take a polynomial file that runs fine on your local gnfs-lasieve4Ixxe and just copy-and-paste it into the box, it runs fine on the grid.

A non-obvious thing (and probably not relevant if you're launching GCW SNFS jobs) is that you have to put 'lss: 0' (abbreviation for 'lattice sieve side', I think) in the file to force algebraic-side sieving.
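
For concreteness, a sketch of what goes in the box (the polynomial lines are placeholders cribbed from a generic SNFS sextic, and the parameter values are typical rather than tuned - adjust to your job):

[code]
n: <the composite being factored>
# SNFS sextic - placeholder coefficients, not a real job
c6: 2
c0: -7
Y1: -562949953421312
Y0: 256923577521058878088611477224235621321607
skew: 1.232
# typical factMsieve-style siever parameters
rlim: 67000000
alim: 67000000
lpbr: 31
lpba: 31
mfbr: 62
mfba: 62
rlambda: 2.6
alambda: 2.6
# force algebraic-side sieving ('lattice sieve side' = 0)
lss: 0
[/code]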

xilman 2015-01-29 13:34

[QUOTE=fivemack;393908]I too was very worried about screwing things up, but it's not as terrifying as it looks. If you take a polynomial file that runs fine on your local gnfs-lasieve4Ixxe and just copy-and-paste it into the box, it runs fine on the grid.

A non-obvious thing (and probably not relevant if you're launching GCW SNFS jobs) is that you have to put 'lss: 0' (abbreviation for 'lattice sieve side', I think) in the file to force algebraic-side sieving.[/QUOTE]OK, I put the Cullen(785).c234 polynomial file and a snippet from a factMsieve.pl job file into the management page and took a guess at what the other boxes should contain. The one most likely to be seriously wrong is the maximum special-q. My experience is that it should be around 80M but other numbers queued run to 150M or more.

Could you (or anyone else with management powers reading this) check that I've not screwed up too badly? If all is well I'll add another dozen or two candidates.

Paul

fivemack 2015-01-29 15:26

[QUOTE=xilman;393919]OK, I put the Cullen(785).c234 polynomial file and a snippet from a factMsieve.pl job file into the management page and took a guess at what the other boxes should contain. The one most likely to be seriously wrong is the maximum special-q. My experience is that it should be around 80M but other numbers queued run to 150M or more.

Could you (or anyone else with management powers reading this) check that I've not screwed up too badly? If all is well I'll add another dozen or two candidates.

Paul[/QUOTE]

I checked it and submitted it to the cloud. I would recommend in future submitting quite a low maximum special-Q and seeing what the tool recommends as max-Q after a bit of sieving (it just picks a level that gives 220M or so relations). A trial run of 10k @ 70M gives a yield of about 1.0, so I suspect you are in fact doing exactly that.
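
(Back-of-the-envelope: at a yield of 1.0 relations per special-Q, 220M relations needs a special-Q range of about 220M, so a job starting near Q=20M ends up with a max-Q around 240M; a yield of 2 would halve the range.)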

frmky 2015-01-29 21:49

[QUOTE=xilman;393905]Yeah, I know, I'm falling behind. Rob Hooft is also asking for more work.

Although I now have admin access to the NFS@Home site I'm a bit wary of putting stuff there and screwing up ...[/QUOTE]
The project is a hungry beast! :smile:

Don't worry too much about messing things up. It certainly won't be the first time, and it's not too difficult to delete a number and cancel all the workunits.

Edit: BTW, I just queued a batch of GCW's I had sitting here.

Edit 2: Sorry Tom, I just screwed up your threshold test! I was adjusting ranges and didn't see your note until after they were generated. I moved the relations so far (all but about 100 WUs) to a separate, obviously named file.

fivemack 2015-01-30 00:12

I've moved the threshold test straight to 'queue for post-processing', which I hope means it won't get any more relations queued; I'll let the queue drain and do an analysis. At the moment I'm really quite confident that 14e, 32-bit LP and 340M-ish relations is the way to go for C16x / low C17x.

xilman 2015-01-30 03:09

[QUOTE=frmky;393955]
Edit: BTW, I just queued a batch of GCW's I had sitting here.[/QUOTE]

Careful: C_2_785 appears to be in the queue twice. You've re-added the one I used as my first attempt. I was making a start on the batch already sent in because I knew those had been pre-tested by Rob.

frmky 2015-01-30 08:02

[QUOTE=xilman;393972][QUOTE=frmky;393955]
Edit: BTW, I just queued a batch of GCW's I had sitting here.[/QUOTE]

Careful: C_2_785 appears to be in the queue twice. You've re-added the one I used as my first attempt. I was making a start on the batch already sent in because I knew those had been pre-tested by Rob.[/QUOTE]
Deleted the one I added. Thanks.

wblipp 2015-02-02 02:39

Here are some numbers from OddPerfect that have had ECM to 2/9 of the SNFS size and are not being worked on by anyone. Feel free to use them as stopgap queue stuffers when needed.

Too big? I don't see anything better than quartic
227^125-1 C227 x^4+x^3+x^2+x+1 is SNFS 236
241^125-1 C224 x^4+x^3+x^2+x+1 is SNFS 239

Reasonable?
4943^67-1 C212 4943x^6-1 is SNFS 248
4993^67-1 C245 4993x^6-1 is SNFS 248
139^133-1 C211 x^6+x^5+x^4+x^3+x^2+x+1 is SNFS 245
5297^67-1 C211 5297x^6-1 is SNFS 246
5347^67-1 C211 5347x^6-1 is SNFS 247

Too Small?
5443^59-1 C172 x^6-5443 and x^5-5443 are SNFS 225
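
(In case anyone wants to check the difficulty arithmetic: for 227^125-1 the quartic is x^4+x^3+x^2+x+1 evaluated at x = 227^25, i.e. the cofactor (227^125-1)/(227^25-1), whose size is 100*log10(227) ~ 235.6 digits - hence SNFS 236. The sextics work the same way: 4943*x^6-1 at x = 4943^11 gives 67*log10(4943) ~ 247.5, hence SNFS 248.)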

chris2be8 2015-02-18 16:50

Will you want to factor 5443^59-1? If not, I'll happily factor it (I need something to run SNFS against while waiting for ECM to t50 on 13513^53-1 to finish).

Chris

debrouxl 2015-02-18 17:20

Go ahead for 5443^59-1 :smile:

About the other numbers posted by William:
* the quartics are probably well out of reach for 14e (even quintics of those difficulties can be hard); 15e is a safer bet;
* the other sextics should be alright for 14e. I'll have to test-sieve them, but this batch of 5 could arguably be food for clients in the upcoming challenge :smile:

wombatman 2015-02-18 19:42

If more numbers are needed for the upcoming challenge, what about pulling some of the larger ones from Kamada's Wanted Page ([url]http://stdkmd.com/nrr/wanted.htm[/url] )? There's a wide range of numbers available.

VictordeHolland 2015-02-18 23:21

[QUOTE=wombatman;395774]If more numbers are needed for the upcoming challenge, what about pulling some of the larger ones from Kamada's Wanted Page ([URL]http://stdkmd.com/nrr/wanted.htm[/URL] )? There's a wide range of numbers available.[/QUOTE]
Most of those can be done by individuals on a single PC in a couple of days to a few weeks. NFS@home is for numbers that are out of range of individuals without a cluster.

wombatman 2015-02-18 23:24

Towards the very bottom (Section 3) are some SNFS 240 to 300+. Those seem like they'd be pretty appropriate.

VictordeHolland 2015-02-18 23:35

[QUOTE=wombatman;395794]Towards the very bottom (Section 3) are some SNFS 240 to 300+. Those seem like they'd be pretty appropriate.[/QUOTE]
Ahaa, I didn't look down that far :surprised:.

wombatman 2015-02-18 23:48

[QUOTE=VictordeHolland;395795]Ahaa, I didn't look down that far :surprised:.[/QUOTE]

No worries! I was more afraid that I didn't have a proper idea of what constituted a good candidate for NFS@Home! For whoever gets to make the ultimate decision, the SNFS 230+ numbers start at around #300 on the 3rd section of numbers.

Batalov 2015-02-18 23:57

Note that these are only a thin slice of 'all' the SNFS 235-250+ (up to 300) jobs that his collection has.
These are the most "rewarding", so to say (factored so far to at most 3-4% of their size), because the factors will have a chance to be larger when factoring a c250 with difficulty 252 than when factoring a c180 with the same difficulty 252. Of the latter kind, he has hundreds more (those can be found by wget'ting all the webpages and parsing).
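
If anyone actually wants to do that harvesting, something along these lines would work (a rough sketch: the starting URL is real, but the link-following and the regex are guesses at the page layout and will need adjusting):

[code]
# Rough sketch: crawl Kamada's NRR pages and grep out SNFS difficulties.
import re
import urllib.request

def fetch(url):
    try:
        with urllib.request.urlopen(url) as f:
            return f.read().decode("ascii", errors="replace")
    except Exception:
        return ""

start = "http://stdkmd.com/nrr/wanted.htm"
pages = {start}
# follow same-site .htm links from the wanted page (layout assumed)
for href in re.findall(r'href="([^":]+\.htm)"', fetch(start)):
    pages.add("http://stdkmd.com/nrr/" + href.lstrip("./"))

for url in sorted(pages):
    for m in re.finditer(r"SNFS\s*\(?\s*(\d{3})", fetch(url)):
        diff = int(m.group(1))
        if 235 <= diff <= 300:
            print(url, diff)
[/code]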

The usual disclaimer applies, of course: they have no practical value. But then factoring GCWs or Hom. Cunninghams or even straight Cunninghams has no practical value either. At least when one is factoring Cunninghams (or Hom. Cunninghams), one contributes to factoring larger similar numbers (i.e. what will be cyclotomically left of them), while when factoring near- and quasi-repdigits one is factoring [I]just[/I] them.

debrouxl 2015-02-19 10:18

I've just queued 139^133-1 to NFS@Home's 14e, after test sieving confirmed 31-bit LPs did the job.

Once in a while, I queue a near-repdigit number, nowadays preferably a relatively easy one which has received a large amount of ECM work (say, t55 for an SNFS difficulty 235- number).
Oh, that reminds me that I need to report the factors for the current reservation; we've had them for a little while :smile:

chris2be8 2015-02-19 16:53

[QUOTE=debrouxl;395758]Go ahead for 5443^59-1 :smile:

[/QUOTE]

Done.

Chris

wombatman 2015-02-25 03:34

If more numbers are needed, what about the Homogeneous Cunningham numbers? There are 204 of them, ranging from SNFS ~220 to 250. Maybe a little on the low end, but it could be nice to knock off the larger ones.

Batalov 2015-02-25 03:40

[URL="http://www.mersenneforum.org/showthread.php?p=395791#post395791"]Yes[/URL], especially if worked starting from large towards small (leaving small for individuals).

wblipp 2015-02-26 04:12

[QUOTE=chris2be8;395832]
[QUOTE=debrouxl;395758]Go ahead for 5443^59-1 :smile:[/QUOTE]
Done.
[/QUOTE]

Not yet in factordb.

wblipp 2015-02-26 04:24

[QUOTE=debrouxl;395758]
* the quartics are probably well out of reach for 14e[/QUOTE]

How small must a quartic be? I've got a P58^5-1 that has had a t50, but it sounds like SNFS 231 is too big.

chris2be8 2015-02-26 16:44

[QUOTE=wblipp;396406]Not yet in factordb.[/QUOTE]

Sorry, bad wording on my part - I meant "done" in the sense that the deal is done, not that I'd factored it. I should have said "Taken". ETA is about 3 days from now.

Chris

fivemack 2015-02-26 18:18

Picking a random P58 (3141592653589793238462643383279502884197169399375105821707)

14e, small prime 10^8, 32-bit large primes, rational side => yield ~1.0 rel/Q, time ~0.161s/rel

15e, small prime 10^8, 32-bit large primes, rational side => yield ~2.6 rel/Q, time ~0.123s/rel

(16e, small prime 10^8, 32-bit large primes, rational side => yield ~7.0 rel/Q, time ~0.136s/rel)
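
(To turn those into rough totals, my arithmetic: at, say, 250M relations for the job, 0.161 s/rel is about 11,000 CPU-hours on 14e versus about 8,500 on 15e - assuming the trial-sieve timings hold across the whole Q-range.)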

So: yes, 15e is the way to go. Tell me the value of p and I'll put it up on the 15e queue.

wblipp 2015-02-26 20:46

Thanks. Is the 15e queue adequately stocked?
[CODE]P58=4634428357056524094275270112238036268732742651439283929441[/CODE]
William

fivemack 2015-02-27 07:12

OK, that's on the queue. No idea when it will get done - I think the queue is long enough that even the Challenge won't burn through it, so it may take months before your number reaches the top of the queue and gets sieved in a few days.

wblipp 2015-03-03 04:16

How's the 14e queue looking? In the OPN thread RichD has announced finishing ECM for these five numbers, ranging from 237 to 240 digits:

7487^61-1
7673^61-1
8111^61-1
8447^61-1
8543^61-1

wblipp 2015-03-06 22:23

[QUOTE=fivemack;396529]it may take months before your number reaches the top of the queue[/QUOTE]

I realized that this number has ECM to 2/9 of the SNFS size, but it should have more than this to compensate for the quartic penalty. I figured a few months would give me ample time to quietly get the extra ECM done and pull the number if a factor was found. But that plan has been foiled by the advancement of all queued numbers to sieving status in preparation for the upcoming Challenge. Please accept my apology and de-queue the number, replacing it with better-prepared numbers.

William

pinhodecarlos 2015-03-13 19:01

Be aware that we need to feed more 14e and 15e tasks.

debrouxl 2015-03-13 21:08

I have just moved all queued 14e tasks to the sieving state.

pinhodecarlos 2015-03-13 21:32

Thank you, but still not enough. I see more teams involved in this challenge than in the previous one.

VictordeHolland 2015-03-13 23:32

[QUOTE=wblipp;396867]How's the 14e queue looking? In the OPN thread RichD has announced finishing ECM for these five numbers, ranging from 237 to 240 digits:

7487^61-1
7673^61-1
8111^61-1
8447^61-1
8543^61-1[/QUOTE]
RichD completed ECM to 2/9 of their SNFS sizes, so they can still be added to the queue.

[edit]
Carlos, could you maybe try to convince the big 'boys'/teams to crunch mostly 16e and 15e and leave the 14e for the people with low-RAM computers?

wblipp 2015-03-14 02:28

[QUOTE=pinhodecarlos;397637]Thank you but still not enough. I see more teams involved on this challenge than on the previous one.[/QUOTE]
[QUOTE=wblipp;396867]
7487^61-1
7673^61-1
8111^61-1
8447^61-1
8543^61-1[/QUOTE]

I don't want to hog the queue - push anybody else in front of these. But if more are needed, these are ready to go:

8627^61-1
9109^61-1
3348577^37-1
8713^59-1
9029^59-1
9239^59-1
[strike]8447^61-1[/strike]
2617^71-1

I have more that are beyond t50 but not yet to 2/9.

pinhodecarlos 2015-03-14 07:24

Victor, I wish I had the power to force people to run what I wanted, but I don't. There are several things going on with this challenge: first, the teams are fighting for Formula Boinc position; second, they are running for the badges (14e, 15e and 16e); third, they are fighting to win this challenge; and last, they want some WuProp hours for each individual application. Too many variables to control.

debrouxl 2015-03-14 09:28

Alright, reserving
7487^61-1
7673^61-1
8111^61-1
8447^61-1
8543^61-1
for NFS@Home's 14e. I'll credit Rich Dickerson for the ECM to 2/9 of SNFS difficulty. They'll be started when necessary.

BTW, who did the ECM work for
139_133_minus1
4943_67_minus1
4993_67_minus1
5297_67_minus1
5347_67_minus1
?


Paul Leyland provided a sizable list of SNFS difficulty 24x numbers on which Rob Hooft ran t55 ECM work, but I don't have the polys.

pinhodecarlos 2015-03-14 15:20

A few here: [url]http://stdkmd.com/nrr/wanted.htm[/url]

RichD 2015-03-14 16:11

[QUOTE=debrouxl;397678]Alright, reserving
7487^61-1
7673^61-1
8111^61-1
8447^61-1
8543^61-1
for NFS@Home's 14e. I'll credit Rich Dickerson for the ECM to 2/9 of SNFS difficulty. They'll be started when necessary.[/QUOTE]

Actually, yoyo@home took them to better than t50. I just did the last several thousand curves to reach 2/9 SNFS.

xilman 2015-03-14 17:59

1 Attachment(s)
[QUOTE=debrouxl;397678]
Paul Leyland provided a sizable list of SNFS difficulty 24x numbers on which Rob Hooft ran t55 ECM work, but I don't have the polys.[/QUOTE]I'll see what I can do. Generating the polys is trivial for me (a Perl script does that); what I'm still wary of doing is loading them into the server.

In case something goes badly wrong, the polynomial files are attached.

Paul

wblipp 2015-03-14 19:12

[QUOTE=debrouxl;397678]
BTW, who did the ECM work for
139_133_minus1
4943_67_minus1
4993_67_minus1
5297_67_minus1
5347_67_minus1[/QUOTE]
The work was mostly done by yoyo@home. I managed the grooming and feeding of the yoyo queue and tracked the extent of the ECM work for OddPerfect.org. Credit yoyo, I think.
[QUOTE=debrouxl;397678]
Paul Leyland provided a sizable list of SNFS difficulty 24x numbers on which Rob Hooft ran t55 ECM work, but I don't have the polys.[/QUOTE]
You can probably get help here.

xilman 2015-03-14 19:15

Added as far as GW_6_318. Got to go cook now so higher base examples will be added later.

pinhodecarlos 2015-03-14 19:19

[QUOTE=xilman;397710]Added as far as GW_6_318. Got to go cook now so higher base examples will be added later.[/QUOTE]

I'm waiting for ASDA. Didn't get the second confirmation email where they narrow the slot to 1 hour. My dinner will be some soup and fruit.
Anyway, I've emailed Lionel.

xilman 2015-03-14 20:59

20 projects now added, all in the "QUEUED for SIEVING" state. Someone please check that I've not screwed up too badly.

Dinner was sirloin steak, chips (fries for our North American readers), fried onions and a green salad. And freshly made English mustard, something which appears to be largely unavailable outside the UK. I thought you might like to know all that, though it's barely relevant to the project in question.


Paul

pinhodecarlos 2015-03-14 21:26

[QUOTE=xilman;397719]20 projects now added, all in the "QUEUED for SIEVING" state. Someone please check that I've not screwed up too badly.

Dinner was sirloin steak, chips (fries for our North American readers), fried onions and a green salad. And freshly made English mustard, something which appears to be largely unavailable outside the UK. I thought you might like to know all that, though it's barely relevant to the project in question.


Paul[/QUOTE]

From the outside it seems OK.
The only healthy food I see in there is the green salad. You British don't know how to eat....oh well...

EDIT: Not sure if we have enough 15e WUs....just trying to read the teams' strategies...

debrouxl 2015-03-15 12:28

I've noticed that a "GC_8_264b" entry appeared, but 264*8^264+1 was already sieved by NFS@Home, post-processed by Ben Meekins, and it's already in FactorDB.

frmky 2015-03-15 18:16

[QUOTE=debrouxl;397763]I've noticed that a "GC_8_264b" entry appeared, but 264*8^264+1 was already sieved by NFS@Home, post-processed by Ben Meekins, and it's already in FactorDB.[/QUOTE]

Good catch! Now gone.

wblipp 2015-03-15 19:39

14e Almost Empty
 
The 14e queue now has plenty of numbers in the "queued" category, but it is ALMOST EMPTY in the "Now Sieving" category. If I understand correctly, somebody needs to take manual action within the next hour or two to promote some of these from "queued" to "now sieving."

frmky 2015-03-15 20:20

Another issue... 5297_67_minus1 actually has the poly for 5347_67_minus1, so I'm canceling those as well. We don't want to factor 5347_67_minus1 twice. :smile:

Edit: Whoever relists it, please add a b on the end of the name. Otherwise, the code will pull the old polynomial.

xilman 2015-03-15 20:24

[QUOTE=wblipp;397796]The 14e queue now has plenty of numbers in the "queued" category, but it is ALMOST EMPTY in the "Now Sieving" category. If I understand correctly, somebody needs to take manual action within the next hour or two to promote some of these from "queued" to "now sieving."[/QUOTE]Just added a few more. Looks like plenty to me but what do I know?

pinhodecarlos 2015-03-15 20:28

[QUOTE=xilman;397803]Just added a few more. Looks like plenty to me but what do I know?[/QUOTE]

Don't panic....more cores to come...:shark:

frmky 2015-03-15 20:31

Also, what's up with W_811? It's showing 0% pushed but 5000 pending. What's the history there?

jyb 2015-03-26 22:32

Forgive me if this is not the appropriate way to go about this (in which case I would appreciate it if someone could acquaint me with the proper protocol). I have five composites to propose for the 14e queue. These are all Homogeneous Cunningham Numbers (HCN), with an SNFS difficulty of approximately 249. They have all had about 10000 ECM curves with B1 = 110e6.

The numbers are 7^293-2^293, 7^293-3^293, 7^293-6^293, 7^293+2^293 and 7^293+4^293. SNFS polynomials are given here:

[Code]
# 7^293-2^293
n: 230794479162374911687302426873820257732779444487576515134909693519341884488802484164788614233092736526644942326338845815313844026183214970808465053738686753637441995635614126555710139922867
skew: 1.232
c6: 2
c0: -7
Y1: -562949953421312
Y0: 256923577521058878088611477224235621321607



# 7^293-3^293
n: 17499580246118405874993172957411695279591166940037925712203477758854043580088036110459568010211040628766179691677104521101517020733336889431616766615294801446600593877558003253022255193871443471671368800364743471307111208191637302126634956597383
skew: 1.152
c6: 3
c0: -7
Y1: -239299329230617529590083
Y0: 256923577521058878088611477224235621321607



# 7^293-6^293
n: 11181916252973694378702587676957099344694241133064986763164582617460656925566371354102957133074436476641849917193888433318609673772327546259620783239703290182835875201186351190018004437
skew: 1.026
c6: 6
c0: -7
Y1: -134713546244127343440523266742756048896
Y0: 256923577521058878088611477224235621321607



# 7^293+2^293
n: 3037655597342145978693848383968702238029362433616790685423261087483365835487201678838065275089770928014751357465933388095402515340306355162806189391396131986686617675978548793427127152610078048943228565057122555233
# 7^293+2^293, difficulty: 248.76, skewness: 1.23, alpha: 0.00
# cost: 7.5527e+18, est. time: 3596.53 GHz days (not accurate yet!)
skew: 1.232
c6: 2
c0: 7
Y1: -562949953421312
Y0: 256923577521058878088611477224235621321607



# 7^293+4^293
n: 601093917276153811914098939276468094649006283190930551360274537612751749735322482173532584625324709159366517784933234531000069456442104185783512942257910069496918374716840734950624570191847592405857123534193329747236559639324139
# 7^293+4^293, difficulty: 249.06, skewness: 1.10, alpha: 0.00
# cost: 7.7288e+18, est. time: 3680.38 GHz days (not accurate yet!)
skew: 1.098
c6: 4
c0: 7
Y1: -316912650057057350374175801344
Y0: 256923577521058878088611477224235621321607
[/Code]
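
For anyone who wants to sanity-check those files: 293 = 6*49 - 1, so for 7^293-2^293 the rational side is x = 7^49/2^49 (hence Y1 = -2^49 and Y0 = 7^49), and homogenizing the sextic gives 2*(7^49)^6 - 7*(2^49)^6 = 2*7^294 - 7*2^294 = 14*(7^293 - 2^293), a harmless small multiple of the target. The other four follow the same pattern with the appropriate c6 and sign of c0.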

fivemack 2015-03-29 09:15

This is the right place to ask; however, there is something of a policy question here.

I thought the homogeneous Cunningham numbers were intended to be problems of a size that individuals could reasonably run, in which case it makes sense to leave the bigger ones until Moore's Law gets bigger computers into individuals' hands rather than to convert the high-hanging fruit of the HCN project into low-hanging fruit for the cloud.

Also, the queue is now long enough that, if your resources amount to one quad-core, you'd still get the first answer quicker if you started sieving now rather than adding it to the queue now. I sieved 11^239-5^239 locally eighteen months ago; it took 18000 thread-hours on a fairly slow machine.
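
(For scale: 18000 thread-hours is about six months of wall-clock time on a quad-core.)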

debrouxl 2015-03-30 13:44

I had pretty much reserved these numbers for NFS@Home's 14e before the March challenge, in case clients depleted our queue very quickly.
In the end, clients didn't make such a big dent in the queue, but the numbers kept being reserved, and a decent amount of ECM work was run on them, especially by jyb. Therefore, I think the least I can do now is queue most of them to NFS@Home's 14e (though I'm currently swamped by a combination of day job and free-time work related to what made me factor large integers in the first place, namely TI graphing calculators) :smile:

But you're of course right that the first WUs for the first Homogeneous Cunningham number are still a significant way off reaching clients, unless the numbers are queued out of order.

R.D. Silverman 2015-03-30 16:33

[QUOTE=fivemack;398875]This is the right place to ask; however, there is something of a policy question here.

I thought the homogeneous Cunningham numbers were intended to be problems of a size that individuals could reasonably run,
[/QUOTE]

The larger ones are certainly out-of-range for my resources....

jyb 2015-03-30 16:56

[QUOTE=fivemack;398875]This is the right place to ask; however, there is something of a policy question here.[/QUOTE]
Indeed, I am unfamiliar with the policy practiced by NFS@Home, so I'm happy to learn whatever you can tell me about it.

[QUOTE=fivemack;398875]
I thought the homogeneous Cunningham numbers were intended to be problems of a size that individuals could reasonably run, in which case it makes sense to leave the bigger ones until Moore's Law gets bigger computers into individuals' hands rather than to convert the high-hanging fruit of the HCN project into low-hanging fruit for the cloud.
[/QUOTE]

Interesting point, though it does leave me with more questions. The problem with the passive voice ("...were intended to be...") is it doesn't reveal the subject. That is, who exactly is intending that for these numbers? I do concede that these numbers make a fine choice for that purpose, but could the same not be said of the generalized Cullen and Woodall numbers, of which many are being factored by NFS@Home? Is there any particular reason to single out the homogeneous Cunningham numbers for this special treatment?

Note that I am not trying to argue a point here. My intent is not to persuade, only to understand the thinking behind whatever policy is practiced. I don't really have a dog in this particular fight, other than that I made a bet with, uh, myself I guess, that we could get the number of outstanding HCN composites down below 100 by the end of the year. (And of course I'm not too worried about having to pay off, one way or the other.)


[QUOTE=fivemack;398875]
Also the queue is now long enough that, if your resources amount to one quad-core, you'd still get the first answer quicker if you started sieving now rather than adding it to the queue now. I sieved 11^239-5^239 locally eighteen months ago, it took 18000 thread-hours on a fairly slow machine.[/QUOTE]

Point taken. But every bit helps, even if it's delayed for a while.

R.D. Silverman 2015-03-30 17:20

[QUOTE=jyb;398937]Indeed, I am unfamiliar with the policy practiced by NFS@Home, so I'm happy to learn whatever you can tell me about it.



Interesting point, though it does leave me with more questions. The problem with the passive voice ("...were intended to be...") is it doesn't reveal the subject. That is, who exactly is intending that for these numbers?

[/QUOTE]

It is the purpose that I stated originally for doing these numbers. They gave a set of numbers that did not require leading-edge resources to do. Thus, they were a test platform for people new to this subject area, as well as a source of numbers that I could use to tinker with my code.

[QUOTE]
I do concede that these numbers make a fine choice for that purpose, but could the same not be said of the generalized Cullen and Woodall numbers, of which many are being factored by NFS@Home?
[/QUOTE]

Yes. The same can be said. But the C&W numbers were "invented" later. They were also more strongly promoted.


[QUOTE]
Is there any particular reason to single out the homogeneous Cunningham numbers for this special treatment?
[/QUOTE]

I am unsure as to the antecedent of the word 'this' in the previous sentence.

jyb 2015-03-30 18:08

[QUOTE=R.D. Silverman;398940]It is the purpose that I stated originally for doing these numbers. They gave a set of numbers that did not require leading-edge resources to do. Thus, they were a test platform for people new to this subject area, as well as a source of numbers that I could use to tinker with my code.[/QUOTE]
Yes, I'm aware that that was your original purpose. That was a number of years ago; has The Community as a whole (or whatever subset of it is involved in making decisions for NFS@Home) decided that it continues to be a reason which should guide our practices? Basically I'm just wondering whether anyone is thinking about such things when a whole bunch of GCW numbers get queued for factoring. Is there any good reason to prefer these to HCN's? (Other than the reason you already mentioned, Bob.)

[QUOTE=R.D. Silverman;398940]I am unsure as to the antecedent of the word 'this' in the previous sentence.[/QUOTE]
"this special treatment" referred to reserving the HCN's for individual factoring efforts. I.e. basically this:
[QUOTE=fivemack;398875]
leave the bigger ones until Moore's Law gets bigger computers into individuals' hands rather than to convert the high-hanging fruit of the HCN project into low-hanging fruit for the cloud.
[/QUOTE]

pinhodecarlos 2015-03-30 18:50

I also don't understand why the HCN's are put aside. In the end, the BOINC sievers (aka clients) don't choose which number to sieve, only the application used (14e, 15e or 16e). The post-processors do choose what to run through msieve.

Tom was probably referring to an old discussion about stopping use of the 14e siever and sieving harder integers with the 15e siever, which are not achievable for some users in terms of time and hardware - not sure.

fivemack 2015-03-30 20:05

I think it makes sense for there to be some set of numbers which are intended to be mainly targets for individual efforts (I used the passive voice only because I couldn't remember whether it was Paul or Richard who had intended it); since the HCN have been filling this role for a while, I think they should continue. For small GNFS there are of course umpteen numbers from the aliquot tables, but HCN is pretty good if what you want is SNFS numbers.

xilman 2015-03-30 20:14

[QUOTE=R.D. Silverman;398940]Yes. The same can be said. But the C&W numbers were "invented" later. They were also more strongly promoted.[/QUOTE]FWIW, the (G)CW numbers are my stamp collection. There's no particularly good reason one way or the other why NFS@Home should consider them more important than any others. They are submitted into the NFS@Home queue because I was asked to provide some targets to keep the relevant queue full at a time of impending shortfall.

Anyone who wishes to investigate the GCW candidates more deeply will notice that I've kept all the sub-230 digit SNFS examples back for those who would like relatively easy targets. There are still a goodly number of relatively easy GNFS numbers too. Anyone who wishes to run ECM is encouraged to do so, and I try to make it easy (a) by running an ECMNET server and (b) by allocating work essentially in order of largest number of curves by smallest value of B1 which still "needs" doing.

Those capable of doing more work do so, by and large. Sam Wagstaff is targeting the 250-260 digit SNFS numbers and their pre-testing. Rob Hooft is running many thousands of curves at B1>=43M for future SNFS efforts. yoyo@home has been doing heavy ECM work for a variety of reasons, presently to reduce the number of algebraic factors where at least two such remain. They recently completed one of the two from 8,1000+, for example.

While I'm grateful for any effort applied to my collection I'm entirely happy if some other project gets a higher profile for a while.

jyb 2015-03-30 21:28

[QUOTE=fivemack;398955]For small GNFS there are of course umpteen numbers from the aliquot tables, but HCN is pretty good if what you want is SNFS numbers.[/QUOTE]

[QUOTE=xilman;398957]There are still a goodly number of relatively easy GNFS numbers [for GCW candidates] too.[/QUOTE]

Ah, these are some good points. The current range of HCN's has NO candidates at all for GNFS. So to the extent that it's desirable for group sieving (a la NFS@Home) to have both GNFS and SNFS candidates available, GCW does have an advantage there.



[QUOTE=xilman;398957]Anyone who wishes to run ECM is encouraged to do so, and I try to make it easy (a) by running an ECMNET server and (b) by allocating work essentially in order of largest number of curves by smallest value of B1 which still "needs" doing.

Those capable of doing more work do so, by and large. Sam Wagstaff is targetting the 250-260 digits SNFS and their pre-testing. Rob Hooft is running many thousands of curves at B1>=43M for future SNFS efforts. yoyo@home has been doing heavy ECM work for a variety of reasons, presently to reduce the number of algebraic factors where at least two such remain.[/QUOTE]

So it sounds like perhaps the biggest reason that GCW's are preferred for NFS@Home is that there's already some good infrastructure in place for getting these numbers prepared to go in the queue. That sounds like a pretty good reason to me. Of course such infrastructure could be set up for (or retargeted to) the HCN's, but it's questionable whether there's any particular reason to do so.

One thing perhaps worth pointing out: the GCW's probably also make for a good pool of candidates for NFS@Home because there are so many of them, so there's a wide variety of composites: as mentioned above, there are both GNFS and SNFS candidates, and there's also a wide array of difficulties. Right now that isn't true of the HCN's. They are all better done by SNFS, and the difficulties are in a fairly narrow band; the low-hanging fruit isn't all that low for individuals, and the hardest ones aren't particularly interesting for group projects. But if we get the number of composites down just a bit more, then Paul will extend the tables, and the most obvious extensions will give a much more interesting variety of composites. There will be many GNFS candidates and SNFS difficulties will range from 140 to 304. This is the main reason I've been throwing hardware at the HCN's recently. I think they will become much more interesting, once extended.

In any case, thanks all for the information.

debrouxl 2015-03-31 11:48

[quote]I also don't understand why HCN's are put aside.[/quote]
As I wrote above, mainly because a) the challenge didn't chew through the 14e queue as quickly as we feared it would, and b) I'm busy with other stuff :smile:

The 7 HCN numbers I chose are near the upper bound of the range (for both HCN and the 14e siever), and therefore least suitable for individuals.
They'll be queued to NFS@Home's 14e, all the more so since jyb posted the polynomials above - I don't even have to run phi myself any more, like I did for the first number :smile:

R.D. Silverman 2015-03-31 13:11

[QUOTE=debrouxl;399001] I don't even have to run phi myself anymore like I did for the first number :smile:[/QUOTE]

Excuse my ignorance. What is "phi" in this context?

fivemack 2015-03-31 13:15

'phi' is a tool for finding SNFS polynomials, written by wpolly and akruppa, which is aware of most of the standard algebraic factorisations. See [url]http://mersenneforum.org/showthread.php?t=8739[/url]

pinhodecarlos 2015-04-06 20:52

Please move some 14e jobs from queued to sieving. Also extend the 16e range / create more WUs. Thank you. Carlos.

fivemack 2015-04-07 07:16

At the moment we need more sieving like we need more possums braided into our hair: the linear algebra people are overwhelmed at 14e. I can get through about three 14e post-processing jobs each week on average, but that seems not to be enough to keep up.

pinhodecarlos 2015-04-07 08:34

From my side, each time I try to download the dat file from the server it takes more than 16 hours - very, very slow download speeds. Not sure what is happening here; it must be a Virgin ISP issue.

xilman 2015-04-07 08:44

[QUOTE=fivemack;399532]At the moment we need more sieving like we need more possums braided into our hair: the linear algebra people are overwhelmed at 14e. I can get through about three 14e post-processing jobs each week on average, but that seems not to be enough to keep up.[/QUOTE]I'll see if I can help out. Seems only fair, given that others are doing the sieving for me.

Now to work out the procedure for getting the necessary stuff over here.

Looks like GW_4_405 may be fully sieved. What do I do to get hold of it?

Paul
P.S. Tom's taken GW_405. Substitute W_816 when the last few stragglers turn up.

VictordeHolland 2015-04-07 09:08

[QUOTE=pinhodecarlos;399536]From my side each time I try to download the dat file from server I get more than 16 hours, very but very slow download speeds. Not sure what it is happening here, must be a Virgin ISP issue.[/QUOTE]
I used to have the same issue; after I switched ISPs (from KPN to Ziggo) it got much better. I usually start the downloads before I go to bed (00:00 UTC+2) and they are ready when I check in the morning. I've seen download speeds of up to 5 MByte/s, so I think the relation server is OK.

edit:
Paul, post-processing reservations and results now go in a separate thread: [URL="http://mersenneforum.org/showthread.php?t=20023"]Linear algebra reservations, progress and results[/URL]

wblipp 2015-04-07 21:35

[QUOTE=fivemack;399532]the linear algebra people are overwhelmed at 14e[/QUOTE]

BOINC sometimes changes the cost-benefit calculations. What happens if we try to partition the linear algebra and BOINCify it? We could, for example, use Gaussian Elimination chunked to work on blocks of size about 1 million. We can't parallelize the blocks, so we would get a lot of latency. We would use a lot of bandwidth - although much less than downloading relations because the distributed part doesn't need to know the corresponding primes. We would probably lose the efficiency of sparse matrix algorithms. Are there show-stopping disadvantages, or are the cumulative disadvantages too large a price, or is the problem the lack of an implementation?

xilman 2015-04-08 07:41

[QUOTE=wblipp;399573]...
We could, for example, use Gaussian Elimination chunked to work on blocks of size about 1 million.
...
We would probably lose the efficiency of sparse matrix algorithms. Are there show-stopping disadvantages, or are the cumulative disadvantages too large a price, or is the problem the lack of an implementation?[/QUOTE]The big problem with GE is that it causes fill-in. A 1M x 1M matrix is 1T bits, essentially all of which has to remain in memory at the same time. Do you have a system with at least 128GB? If so, how many others also have one?

The memory usage of GE was crippling back in the early days of MPQS. RSA-129 had a ~500K matrix and needed a MasPar to solve it. Clever versions of the algorithm, so-called Structured Gaussian Elimination, postponed fill-in but couldn't prevent it becoming important.

If you can implement parallel blocked GE please do so. It would be wise to start with, say, a 16K matrix split into 1K chunks and enforce a maximum memory usage of 10KB for each portion of the matrix. Use as much as you like for the code.
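
To make the fill-in concrete, here is a toy sketch (my own illustration, in Python; nothing to do with msieve or a real NFS matrix). Rows start at ~20 nonzeros each, roughly NFS-like, and the peak row weight heads toward n/2 as elimination proceeds:

[code]
# Toy GF(2) Gaussian elimination on a random sparse matrix, tracking fill-in.
# Rows are Python ints used as bitsets.
import random

def peak_weight_during_ge(rows, ncols):
    rows = rows[:]
    peak = max(bin(r).count("1") for r in rows)
    for col in range(ncols):
        mask = 1 << col
        pivot = next((r for r in rows if r & mask), 0)
        if not pivot:
            continue
        rows.remove(pivot)  # retire the pivot row
        # XOR the pivot into every remaining row with a 1 in this column;
        # each XOR can create new nonzeros -- this is the fill-in
        rows = [r ^ pivot if r & mask else r for r in rows]
        peak = max(peak, max((bin(r).count("1") for r in rows), default=0))
    return peak

random.seed(42)
n = 1000
rows = [sum(1 << c for c in random.sample(range(n), 20)) for _ in range(n)]
print("initial max row weight:", max(bin(r).count("1") for r in rows))
print("peak row weight during GE:", peak_weight_during_ge(rows, n))
[/code]

Scaled up to a 1M-row matrix, that peak is exactly what forces the terabit of storage.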


IMAO porting BW to a BOINC environment is likely to be more successful.

Paul

jasonp 2015-04-11 00:59

Block Wiedemann would be the only practical algorithm for splitting the LA into chunks separated by high-latency communication links. However, BW has three phases, and the memory use of the middle phase goes up linearly with the number of cooperating systems even if its runtime is short. Everyone also needs the entire matrix to solve.

RichD 2015-04-11 05:03

[QUOTE=jasonp;399826]Block Wiedemann ... Everyone also needs the entire matrix to solve.[/QUOTE]

Oh, I thought only the “mother box” needs to keep the entire matrix in memory at some point to solve.

Or, perhaps, the entire matrix needs to be accessed by the client, which in turn requires more bandwidth.

chris2be8 2015-04-11 15:59

Would the server have enough resources to build the matrix, then download the .mat file to a client who solves the matrix, then uploads the .dep file to the server which could then do the square roots? That should reduce the amount of data that has to be transferred, at the cost of CPU etc on the server.

The ultimate would be to have solving the smaller matrices a BOINC WU to hand out to users with a large enough system. But probably only to people who specifically ask for it and know what it involves.

Chris

VictordeHolland 2015-04-11 16:55

[QUOTE=chris2be8;399853]Would the server have enough resources to build the matrix, then download the .mat file to a client who solves the matrix, then uploads the .dep file to the server which could then do the square roots? That should reduce the amount of data that has to be transferred, at the cost of CPU etc on the server.

The ultimate would be to have solving the smaller matrices a BOINC WU to hand out to users with a large enough system. But probably only to people who specifically ask for it and know what it involves.

Chris[/QUOTE]
I like the idea, but I don't think it will save that much bandwidth. An average matrix is something like 7-9GB vs. 15-20GB of compressed relations. BOINC users still need to upload the 20GB of relations to the server in the first place.

VictordeHolland 2015-04-13 21:53

F1941 factors in factordb
 
Greg reserved F1941 for NFS@Home in January:
[code]
F1941 17769 @ 110M B.Kelly, G.Childers reserves for NFS@Home, 9 Jan
[/code]It shows as fully factored in the factordb: [URL]http://www.factordb.com/index.php?query=I%281941%29[/URL]
The P66 was added March 2nd.
I've informed Marin and asked him to add the factors to his tables.
Does anybody know who found the P66, and how? At the time it was reserved, only a t55 had been done, so it probably needed more ECM before SNFS. Or was it run with the 16e siever?

pinhodecarlos 2015-04-13 22:19

It was done with the 16e V5 siever.

RichD 2015-04-18 04:49

It would be nice if 8447_61_minus1 got a little touch-up for more rels.
I seem to recall that 224M +/- is a nice goal for post-processors, but that it also needs to be balanced against the additional sieving requirements.

VictordeHolland 2015-04-18 09:03

[QUOTE=RichD;400353]It would be nice if 8447_61_minus1 got a little touch-up for more rels.
I seem to recall that 224M +/- is a nice goal for post-processors, but that it also needs to be balanced against the additional sieving requirements.[/QUOTE]
You might want to try filtering it first and looking at the matrix size. I've done post-processing for numbers with around 210M raw relations and usually I could still filter them with target_density 100-110.

For instance C_2_785:
[url]http://mersenneforum.org/showpost.php?p=395979&postcount=77[/url]
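
A filtering-only run looks something like this (msieve flags from memory - check them against your build's usage text; the filenames are placeholders):

[code]
msieve -v -s 8447_61.dat -l 8447_61.log -nc1 "target_density=110" <the composite>
[/code]

If filtering succeeds you get the matrix size straight away; if it fails for lack of relations, the log tells you roughly how many more it wants.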

fivemack 2015-04-20 15:37

Upcoming 15e GNFS jobs
 
Heads-up: in about ten days I should have a polynomial for 2340.742 (193-digit GNFS), and about a month after that for 3270.698 (190-digit GNFS). These are probably on the edge of being practical with 15e - I'll see how long the 114!+1 linear algebra takes - but I will enqueue them for 15e because I don't have access to the 16e queue.

wblipp 2015-04-21 06:22

[QUOTE=wblipp;397195]I realized that this number has ECM to 2/9 of the SNFS size, but it should have more than this to compensate for the quartic penalty.[/QUOTE]

I've completed ECM to 2/9 of (30+SNFS size) to compensate for the extra difficulty required for a quartic on the P58^5-1.
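
(For the P58^5-1 that works out to (231 + 30)*2/9 = 58 digits of ECM, i.e. a t58.)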

RichD 2015-05-27 05:22

[QUOTE=debrouxl;397678]Alright, reserving
7487^61-1
7673^61-1
8111^61-1
8447^61-1
8543^61-1
for NFS@Home's 14e.[/QUOTE]

8543^61-1 appears to have been orphaned. :-)

wblipp 2015-05-27 14:56

[QUOTE=RichD;403047]8543^61-1 appears to have been orphaned. :-)[/QUOTE]

Also [URL="http://www.mersenneforum.org/showpost.php?p=397802&postcount=60"]5297^67-1[/URL]

debrouxl 2015-05-27 18:39

Alright, now both queued for sieving - thanks for pointing them out :smile:

