mersenneforum.org (https://www.mersenneforum.org/index.php)
-   Aliquot Sequences (https://www.mersenneforum.org/forumdisplay.php?f=90)
-   -   Reserved for MF - Sequence 4788 (https://www.mersenneforum.org/showthread.php?t=11615)

RichD 2014-03-02 07:15

C158
 
1107 @ 11e7 - nothing.

RichD 2014-03-28 20:12

I sent a note to [B]debrouxl[/B] a couple weeks ago but he must be away.

I went ahead and posted a team sieve [URL="http://mersenneforum.org/showthread.php?t=19227"]thread[/URL] to keep things moving.

Batalov 2014-04-13 03:15

[CODE]GMP-ECM 7.0-dev [configured with GMP 6.0.0, --enable-asm-redc, --enable-gpu, --enable-assert, --enable-openmp] [ECM]
Input number is 34287129936917587345356146795813041353257817538185445350525573912534386065900174396694157805853284101661601293564239230423921585957629908897256542967 (149 digits)
Using B1=3000000, B2=13134672, sigma=3:1234751606-3:1234752373 (768 curves)
Computing 768 Step 1 took 26679ms of CPU time / [COLOR="DarkRed"]970513ms of GPU time[/COLOR]
********** Factor found in step 1: 297800272640956791407736751931185667
Found prime factor of 36 digits: 297800272640956791407736751931185667
Composite cofactor 115134649249485077644761201247390716628944485597564745943699149297894890143441578090968579719123054326250327771901 has 114 digits
[/CODE]
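
For reference, a GPU stage-1 run of this shape could be launched along these lines (a sketch, assuming a GPU-enabled gmp-ecm build as in the banner above; the input file name is a placeholder, and the card picks the batch size - here 768 curves):

[CODE]# Sketch only: B1=3e6, batch starting at the first sigma shown in the log.
ecm -gpu -sigma 3:1234751606 3000000 < c149.txt[/CODE]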

RichD 2014-04-13 15:18

C177 @ i5216
 
2^3 * 3 * ... * C177

-pm1 @ 3e9 - no factor

RichD 2014-04-13 19:42

C177
 
1000 @ 11e6 - no factor.

Batalov 2014-04-14 18:45

Don't run small curves (smaller than B1=110e6) on this one.
8000 x 110e6 and 3000 x 260e6 are done.

RichD 2014-04-24 19:41

C177
 
1200 @ 260e6 - no factor.

Passing through 250 @ 850e6.

unconnected 2014-04-28 10:52

Someone has broken the c177 into p67*p100; now c182 at i5220. More good news - we got rid of the 2^3*3 driver.

LaurV 2014-04-28 11:31

Beautiful! Who did it?
This will go down now, if there's no 31 on the horizon.

c10ck3r 2014-04-28 14:08

[QUOTE=unconnected;372172]Someone has broken the c[B]177[/B] into p[B]67[/B]*p[B]100[/B]; now c182 at i5220. More good news - we got rid of the 2^3*3 driver.[/QUOTE]

That must have been quite a feat.

Batalov 2014-04-28 18:27

Could have been ryanp.
If this is the case, then most likely, c182 is already properly ECMd. I'll throw 3000 110e6 curves on it just in case.

RichD 2014-04-28 20:10

Correct. I asked Ryan if he had a little time to perform the GNFS on both A3408 and this one. I also said, in the tradition of the forum, he who performs the post-processing gets to run the first set of ECM curves.

He must have some down time so I am letting him play. :smile:

Batalov 2014-05-01 21:07

Ryan is on fire - he cracked the c182, too. ;-}

RichD 2014-05-23 01:45

C186 @ i5232
 
The 3 is gone at index 5232. GO RYAN!!

2^4 * 11 * ... * C186

Jayder 2014-06-28 11:20

We haven't received any updates in a while. Does anybody know what amount of ECM work has been done, if any?

RichD 2014-06-30 02:35

It is still being worked on.

You must realize the c186 is a monumental task.

Jayder 2014-06-30 03:10

Absolutely I do. Was just looking for an update. When you don't hear any news, it's hard to tell if it is still chugging along silently or if it has been abandoned.

RichD 2014-10-16 18:12

C186 @ i5232
 
There has been a miscommunication about who was working on this sequence and what was being done. As it stands now, it is available for work by forum members at i5232.

2^4 * 11 * ... * C186.

In the coming days I will start ECM work around t50 or t55, since I don't know what has already been done.

RichD 2014-10-18 15:24

C186
 
2000 @ 43e6 - no factor
-pm1 @ 2e10 - no factor

passing through 1300 @ 11e7

RichD 2014-10-22 03:05

C186
 
passing through 7400 @ 11e7.

RichD 2014-10-24 23:03

C186
 
8100 @ 11e7 - no factor.

passing through 2000 @ 26e7

Dubslow 2014-10-25 09:35

I'll run some curves -- maybe around 425e6? (t60 is 260e6, t65 is 850e6)

VBCurtis 2014-10-25 17:05

[QUOTE=Dubslow;386045]I'll run some curves -- maybe around 425e6? (t60 is 260e6, t65 is 850e6)[/QUOTE]

That B1 is close to the changeover from k=6 using less memory vs k=2 using more memory.
If B2 is 4.7e12, that's k=6; try running a curve with flag -k 2 to force the higher-memory condition, and see if expected time for a t60 improves. Or you could try B1=400M, which I'm sure is k=6, and compare to B1=450M which should be k=2, and report which is best expected time to complete a t60.

I think the best choice depends on hardware architecture rather than ECM itself.

Dubslow 2014-10-25 18:22

[QUOTE=VBCurtis;386072]That B1 is close to the changeover from k=6 using less memory vs k=2 using more memory.
If B2 is 4.7e12, that's k=6; try running a curve with flag -k 2 to force the higher-memory condition, and see if expected time for a t60 improves. Or you could try B1=400M, which I'm sure is k=6, and compare to B1=450M which should be k=2, and report which is best expected time to complete a t60.

I think the best choice depends on hardware architecture rather than ECM itself.[/QUOTE]

Errr... what? You're over my head here. :smile:

I just thought I'd try something different. I'm also using yafu to drive 6 threads (simply out of familiarity, and because I'm not certain I have pyecm anywhere on my system, though it's quite possible).

How does one check expected time to t<whatever> with `ecm`?

VBCurtis 2014-10-26 03:21

The -v flag will give you stats for whatever B1 you have chosen; specifically, expected curves to complete a level when you start the run, and expected time to complete the level when you finish the curve. You'll also see "k=2" or similar in the stats at the outset of a run while using -v. These stats allow you to see how many curves at 425M produce a t60.

So, my intent was for you to try "ecm -v 425e6 <inputfile.txt", then "ecm -k 2 -v 425e6 <inputfile.txt", and see which gives the shorter expected time to completion.

However, I just tried that B1, and it is just above the transition to a larger memory use, meaning it already uses k = 2 (that is, it takes two passes to complete stage 2 while using more memory, where B1=400M takes 6 passes using half the memory). So, never mind- you picked a good B1, where my suggestion is irrelevant and default settings are very likely to be fast.
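
If you do want to run that comparison, it would look something like this (a minimal sketch; the input file name is a placeholder, and -c 1 just runs a single curve so you can see the stats):

[CODE]# Default stage-2 blocking vs. forced -k 2; compare the expected-curves
# and expected-time figures that -v prints.
ecm -v -c 1 425e6 < c186.txt
ecm -v -k 2 -c 1 425e6 < c186.txt[/CODE]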

-Curtis

RichD 2014-10-29 13:38

C186
 
passing through 5600 @ 26e7.

yoyo 2014-10-29 19:54

I'm starting 10000 curves @ 26e7 now.

yoyo 2014-10-30 18:01

Nearly 70% done, so I'm scheduling another 10000 curves @ 26e7.
I assume we need 42000 curves.

prgamma10 2014-10-31 23:16

[QUOTE=yoyo;386487]Nearly 70% done, so I'm scheduling another 10000 curves @ 26e7.
I assume we need 42000 curves.[/QUOTE]
1/2*t60 should be enough.

Dubslow 2014-11-01 01:58

I did ~500 curves at 425e6. I'm not entirely sure of the count; it's probably accurate to no better than 10% (and likely errs on the high side). I've stopped since it's apparent my contribution is negligible.

RichD 2014-11-01 03:07

[QUOTE=prgamma10;386594]1/2*t60 should be enough.[/QUOTE]

But a full t55 hasn’t been performed. Does that make a difference?

I was thinking (guessing) a full t60 would be needed.

I haven’t worked with a number this large, so I don’t have a good feel.

Which brings up another set of comments.

If we progress into GGNFS, I think a good poly can be found with a couple GPUs in a few weeks.

Team sieving would take (at least) several months depending on the number of cores.

Is there a machine out there that can handle the LA phase in a reasonable amount of time? Something with 24 or more cores…

Someone with more experience care to comment?
(Help!)

passing through 6900 @ 26e7.
I did split off a core a while back and it has 200 @ 85e7.

VBCurtis 2014-11-01 04:54

Rich-
Half a t60 *is* 3t55. It doesn't matter what lower levels have or have not been done.
Or, the 260M & 425M curves done so far are more than a t55, so one has been done!
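
Spelled out as a worked equation (using the ~6x work ratio between consecutive t-levels that this implies):

[CODE]work(t60) ~= 6 * work(t55)   =>   (1/2) * t60 ~= 3 * t55[/CODE]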

RichD 2014-11-01 14:15

Thanks Curtis. I really wasn't questioning [B]prgamma10[/B]'s numbers, but rather asking whether we have the power to tackle the C186 with just forum members' PCs.

I'll be wrapping my work up later today.

swellman 2014-11-01 15:24

This is a big job, but I'd be willing to help with sieving if ECM strikes out.

I normally use Yafu, but I believe the .dat files are perfectly compatible with pure Msieve results.

Really, really hoping ECM hits.

wombatman 2014-11-01 16:26

NFS@Home might be willing to help with it.

VBCurtis 2014-11-01 16:30

I think a forum-organized NFS project is overdue, so I'm willing to help both with poly select and some sieving.

Batalov 2014-11-01 18:50

[QUOTE=VBCurtis;386650]I think a forum-organized NFS project is overdue, so I'm willing to help both with poly select and some sieving.[/QUOTE]
[URL]http://mersenneforum.org/showthread.php?t=19711[/URL] is already one forum-organized NFS project in progress.

Dubslow 2014-11-01 18:54

[QUOTE=swellman;386646]
I normally use Yafu, but I believe the .dat files are perfectly compatible with pure Msieve results.
[/QUOTE]

Neither Yafu nor Msieve is capable of sieving.*

They each delegate to external sievers, in the typical forum case the gnfs-lasieve collection. Since it's the same siever, the results format is of course the same.

As a bonus, Yafu isn't even capable of post-processing -- it just uses Msieve's code (which is why Msieve is a compile time dependency for NFS).**

[QUOTE=Batalov;386659][URL]http://mersenneforum.org/showthread.php?t=19711[/URL] is already one forum-organized NFS project in progress.[/QUOTE]

Not that it's getting anywhere :razz:



[SIZE="1"]
* Msieve technically has a line siever, but its use in place of an optimized lattice siever like gnfs-lasieve is morbidly inefficient and a waste of resources.

** Yafu's SIQS code has a copy-and-pasted older form of Msieve's post-processing as part of its own source, which is why Msieve isn't a dependency for without-NFS builds.[/SIZE]

VBCurtis 2014-11-01 18:55

Batalov-
Except that nobody has decided upon parameters for M991, nor begun sieving. I tried - I posted a baseline parameter set for feedback, but nobody with experience on that size posted improvements. I don't think any consensus was reached on whether 33-bit LPs were enough for M991, and thus whether the Win64 siever is sufficient.

Is there a 64-bit siever binary available with the 33-bit limit removed? I know NFS@Home uses one in a Boinc wrapper, but I have not found the freestanding binary.

Batalov 2014-11-01 19:09

Yes, I know. I am only trying to say that two concurrent projects will have a worse fate than one.

pinhodecarlos 2014-11-01 19:22

1 Attachment(s)
[QUOTE=VBCurtis;386661]

Is there a 64-bit siever binary available with the 33-bit limit removed? I know NFS@Home uses one in a Boinc wrapper, but I have not found the freestanding binary.[/QUOTE]

Do you want to try the file in the attachment? It is the 64-bit binary from NFS@Home. Please let me know if it works.

VBCurtis 2014-11-01 21:02

Carlos-
Thanks for trying. This binary produces a file "stderr" that has a series of error outputs, such as "can't open init data file - running in standalone mode",
followed by "boinc initialized work files resolved, now working",
followed by "Cannot open input_data for input of nfs polynomials: no such file or directory".

I called the binary by renaming it to my usual 16e siever and running the python script for 2,991-. The script did work, producing the usual .fb file and job files, etc.

So, it appears this binary is the one modified to work with NFS@Home's BOINC wrapper.

pinhodecarlos 2014-11-01 21:07

I'm going to send an email to Greg to see if he can share the original one.

Edit: Just checked my email and I have the linux binaries.
[url]https://cld.pt/dl/download/bc833ab0-e354-464e-83d0-a1305e5402dc/lasieve5.tar.gz[/url]

RichD 2014-11-02 03:36

Final Counts
 
7186 @ 26e7 - no factor
222 @ 85e7 - no factor

I see yoyo is passing through 21K @ 26e7.

I'll put this on hold until the M991 project completes.

debrouxl 2014-11-02 11:53

A C186 GNFS task is far beyond the reach of 14e, so Greg is the one to convince for queuing it on NFS@Home :smile:

RichD 2015-01-27 05:13

C186 @ i5232
 
I did a little preliminary work on poly selection and found the following:

I ran three intervals of leading coefficients, none of which produced the expected results (i.e., not worth posting the poly).

Baseline: expecting poly E from 3.55e-14 to > 4.09e-14

Leading coefficient & scores.
[CODE]1.2-1.3e6
skew 103308316.47, size 2.479e-18, alpha -7.147, combined = 2.977e-14 rroots = 5

2.7-2.8e6
skew 106652921.92, size 2.353e-18, alpha -8.691, combined = 2.825e-14 rroots = 3

3.6-3.8e6
skew 21384286.59, size 2.157e-18, alpha -7.678, combined = 2.677e-14 rroots = 3[/CODE]

It seems to get worse as the lead coefficient increases. I wonder if it would be better to search below 1M?

fivemack 2015-01-27 08:16

I think the anticipated-E in that range is a little high; the C187 we did on the forum four years ago (cofactor of 2^956+1) used an E=2.991e-14 polynomial successfully. Are you doing the polynomial selection on CPU or GPU, and do you happen to have the timings and the raw relation counts for the stage-1 pass on those ranges: which stage1_norm did you use?

RichD 2015-01-27 15:03

[QUOTE=fivemack;393695]I think the anticipated-E in that range is a little high; the C187 we did on the forum four years ago (cofactor of 2^956+1) used an E=2.991e-14 polynomial successfully. Are you doing the polynomial selection on CPU or GPU, and do you happen to have the timings and the raw relation counts for the stage-1 pass on those ranges: which stage1_norm did you use?[/QUOTE]

GPU

I did this a while back and recently retrieved the log file to my laptop. It appears I used the (default?) 1.2e28 for stage1_norm on the early run, but the later run was changed to 2.0e28.

fivemack 2015-01-27 16:00

That seems to me an extremely high stage1_norm value: I wonder whether you're ending up with an unreasonable number of things to filter down at stage two. I'm using stage1_norm=1e27 for my 114!+1 C187 at the moment, which gets a few million hits per range of a million C5 at a rate of a day or so GTX580-time per range.
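
Plugged into the msieve stage-1 invocation used for these searches (a sketch mirroring the command RichD posts later in the thread; the coefficient range here is illustrative):

[CODE]msieve -g 0 -t 3 -np1 "stage1_norm=1e27 1000000-2000000" -nps -v[/CODE]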

RichD 2015-01-27 16:51

OK, that makes sense. I remember when using a GPU the stage1_norm should be changed by an order of magnitude - but I couldn't remember which way.

I think the first range took nearly a day (on a GTX 460), the second range was quicker, and that's why I doubled the size of the last range - back to almost a day.

When I get a chance I'll run future ranges with 1e27. Thanks for your help.

RichD 2015-02-22 22:17

[QUOTE=RichD;393693]It seems to get worse as the lead coefficient increases. I wonder if it would be better to search below 1M?[/QUOTE]

Finally getting something I can work with. I searched in the 700-800K range and found this one.

[CODE]R0: -833190005691277922377915047229543598
R1: 178575398638879069
A0: -3818155834164528069112483611011796776579442825
A1: -46509162420627906168286164471544245931
A2: 259512104283233128379348283014
A3: -20240316266268900809140
A4: 436787942678052
A5: 789480
skew 111747742.36, size 3.373e-18, alpha -8.072, combined = 3.638e-14 rroots = 3[/CODE]

Next will be the 800s when I have another free time slot.

RichD 2015-02-27 05:12

Nothing better to report. The best in each range are listed.

[CODE]800-900K
skew 106635723.26, size 3.035e-18, alpha -7.789, combined = 3.424e-14 rroots = 3

600-700K
skew 261595249.51, size 2.554e-18, alpha -8.178, combined = 3.038e-14 rroots = 3[/CODE]

henryzz 2015-03-03 20:34

Is it worth posting this in the polynomial request thread? It would be nice to get this sieving at some point soon, as the M991 job is tailing off.

VBCurtis 2015-03-04 03:50

I'll do some poly searching on it, so really it's just wombatman of the regulars who'd be likely to see it there; of course, may as well post there anyway. I'll start at 3.6 million.

I'll be interested to explore sieve timings for 15e/33 bit vs 16e/32 bit vs 16e/33 bit for this one. I wonder if 15e/34 bit might be usable- how difficult would it be to apply the 16e patches that opened up 34/35 bit sieving to the 15e siever? I suppose that's folly for a shared project since it doubles the data uploads, but I'm curious.

RichD 2015-03-04 06:02

Nothing better is found.

[CODE]900-1000K
skew 75780745.63, size 2.645e-18, alpha -6.832, combined = 3.103e-14 rroots = 3

500-600K
skew 337815414.30, size 2.791e-18, alpha -8.832, combined = 3.216e-14 rroots = 3

400-500K
skew 126028353.96, size 3.167e-18, alpha -7.133, combined = 3.516e-14 rroots = 5[/CODE]

VBCurtis 2015-03-04 07:29

Rich-
You should not discard that 3.51 poly from the 400-500k range. Score is only accurate as a predictor of sieve speed within 5-7%, so any poly within 10% of the score of your best could actually sieve best. I'm doing some test-sieving now with the best scoring poly you've found so far, but you should post (or test) the 3.51 also.

henryzz 2015-03-04 20:19

[QUOTE=VBCurtis;396971]I'll do some poly searching on it, so really it's just wombatman of the regulars who'd be likely to see it there; of course, may as well post there anyway. I'll start at 3.6 million.

I'll be interested to explore sieve timings for 15e/33 bit vs 16e/32 bit vs 16e/33 bit for this one. I wonder if 15e/34 bit might be usable- how difficult would it be to apply the 16e patches that opened up 34/35 bit sieving to the 15e siever? I suppose that's folly for a shared project since it doubles the data uploads, but I'm curious.[/QUOTE]
Assuming the binaries were compiled from the same source, the 15/16 distinction doesn't matter for >33-bit sieving. All that is needed is a modification to the source commenting out the restriction.

Another thing I hope to try is doing some sieving at very low special q with the f variant of the siever. I want to test the duplication level. Relations can be found very quickly at small q as long as you can sieve below the factorbase bound.

VBCurtis 2015-03-04 22:16

A couple of core-hrs of test sieving shows that 16e/33 sieves about 10% slower than 15e/33, and 16e/34 is not faster than 16e/33 (about 70% more relations found per unit time, but 70% more needed and a larger matrix to solve).

15e/33 with a/rlim=314M (chosen by python script) and two large primes yields over 4, so 15e/32 and 16e/32 can be considered. I'll continue tinkering with test-sieving this evening.

The 16e/34 yield near 20 was a new experience for me.

The 16e yield is so high that a q-range of 70M or so would be enough. Does that mean alim/rlim of 300M is too big?

Thanks for the reply about 15e/34! I haven't compiled the sievers, but this may motivate me to try.

RichD 2015-03-05 03:35

[QUOTE=VBCurtis;396981]... you should post (or test) the 3.51 also.[/QUOTE]

OK
[CODE]R0: -943458323040788615580657516498371403
R1: 294791163397249211
A0: -3611166902472558851670819726384084615231028480
A1: 126541352673573186758634973864132850288
A2: 2209274649814415688972259393648
A3: -28394659605543653907266
A4: -169757533624755
A5: 424080
skew 126028353.96, size 3.167e-18, alpha -7.133, combined = 3.516e-14 rroots = 5[/CODE]

VBCurtis 2015-03-06 04:39

[QUOTE=RichD;397044]OK
[CODE]R0: -943458323040788615580657516498371403
R1: 294791163397249211
A0: -3611166902472558851670819726384084615231028480
A1: 126541352673573186758634973864132850288
A2: 2209274649814415688972259393648
A3: -28394659605543653907266
A4: -169757533624755
A5: 424080
skew 126028353.96, size 3.167e-18, alpha -7.133, combined = 3.516e-14 rroots = 5[/CODE][/QUOTE]

Using 15e/33bit, this sieves 4-6% faster than the higher-scoring poly you posted earlier, over an admittedly small sample of three 1k intervals. Please keep posting any 3.5 or better polys! I haven't found anything scoring that well yet.

henryzz 2015-03-06 19:11

2 Attachment(s)
[QUOTE=VBCurtis;397031]A couple of core-hrs of test sieving shows that 16e/33 sieves about 10% slower than 15e/33, and 16e/34 is not faster than 16e/33 (about 70% more relations found per unit time, but 70% more needed and a larger matrix to solve).

15e/33 with a/rlim=314M (chosen by python script) and two large primes yields over 4, so 15e/32 and 16e/32 can be considered. I'll continue tinkering with test-sieving this evening.

The 16e/34 yield near 20 was a new experience for me.

The 16e yield is so high that a q-range of 70M or so would be enough. Does that mean alim/rlim of 300M is too big?

Thanks for the reply about 15e/34! I haven't compiled the sievers, but this may motivate me to try.[/QUOTE]

Assuming you run on Windows then these binaries should work I think.

How hard will the matrix be for this one? I would like to fiddle around and work out the duplicate rate after some sieving with the f variant. Will 4 GB be enough for the filtering? My core 2 will likely be too slow for actually doing the matrix.

Dubslow 2015-03-06 20:10

I could probably do filtering/matrix.

fivemack 2015-03-07 00:25

I suspect the filtering and the matrix job would fit in 16GB but not in anything significantly smaller.

Dubslow 2015-03-07 07:41

[QUOTE=fivemack;397204]I suspect the filtering and the matrix job would fit in 16GB but not in anything significantly smaller.[/QUOTE]

I could indeed do it then -- but not too much larger than this. (16 GiB happens to be what I have.)

VBCurtis 2015-03-07 19:13

[QUOTE=henryzz;397183]Assuming you run on Windows then these binaries should work I think.

How hard will the matrix be for this one? I would like to fiddle around and work out the duplicate rate after some sieving with the f variant. Will 4 GB be enough for the filtering? My core 2 will likely be too slow for actually doing the matrix.[/QUOTE]

Your 15e siever works fine to try 34bit lp. Thanks! I haven't played with f yet, but I appreciate you posting those, too.

henryzz 2015-03-07 22:08

[QUOTE=VBCurtis;397238]Your 15e siever works fine to try 34bit lp. Thanks! I haven't played with f yet, but I appreciate you posting those, too.[/QUOTE]
The f variant allows sieving below the factorbase bound. It can also sieve composite special q. The maximum number of factors in a special q can be controlled with -d. Sieving composite special q seems to slow down relation finding slightly as far as I can see, so I use -d 1.
Sieving small q can provide very good yield. How much this lowers the secs/rel depends on the size of the number and the parameters chosen. The duplicate rate may be higher with the lower q.
I would suggest the f variant is probably a good idea once you are well below the factorbase bound.
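
As a concrete sketch of those flags (the binary and job-file names are placeholders; -f/-c give the starting special q and the count, as in the e sievers):

[CODE]# f variant at low special q, algebraic side; -d 1 limits special q
# to a single prime factor, as suggested above.
./gnfs-lasieve4I15f -a c186.job -f 5000000 -c 1000 -d 1 -o rels_5M.out[/CODE]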

I suspect that this number is a bit too big for me to experiment on in a useful way with my limited resources. I can only use 2 of my 4 cores for sieving, due to memory usage with the factorbase bounds you chose. Reducing the bound to 100M doesn't hurt speed much, and yield isn't an issue, as you noted earlier.
I noticed that hardly any time is spent on the quadratic-sieve factoring of large primes.

VBCurtis 2015-03-08 01:06

300M for alim/rlim does seem too big. I'll use 180M for my next tests.
I tried 3 large primes with 33 bit and found sec/rel improved almost 20%, but yield stayed roughly constant. Sieving something like 30M to 200M should be enough to build a matrix, but perhaps too big a matrix for Dubslow or me to solve on home equipment (I have a 6-core i7 with 16GB).

If 15f is more productive down really low, perhaps a sieve range like 5M to 160M would be faster.

RichD 2015-03-08 13:33

C186 @ i5232
 
Another one to play with/test from the 1.0-1.1M range.
[CODE]R0: -784171785411817668204933354637697383
R1: 226807616081850997
A0: 5659929256521862665448251658528575438240
A1: 8112093496713326197144222976345393744
A2: -2464554855231923962723017259338
A3: -93313161866957928016503
A4: 1133441776831664
A5: 1069068
skew 43803291.66, size 3.577e-18, alpha -7.815, combined = 3.690e-14 rroots = 5[/CODE]

VBCurtis 2015-03-11 03:58

Rich-
This new one, score 3.69, sieves just worse than the 3.64 find from last week. Both are 7-10% worse than the 3.51. 3 days on my GPU turned up a 3.05 and 3.01 as best, so I'll let you find the good ones and just keep test-sieving what you provide. One of the 3.6's "should" sieve 4-6% better than the 3.51 (if it sieves as well compared to its score as the 3.51 does). If we find such a thing, I think we could stop the poly search at that time.

I've now tested the f siever- the binary works! For 15e/33 bit, searching low Q values has a time per relation equal to 15e at its best q (right around half of alim, so 90M for my tests with the poly scored at 3.51e-14). I haven't done enough testing to find where the f and e have the same time per relation, but something like using f from 5M to 60M and e from 60M to 160M should provide enough relations. Yield is better with f, I guess due to the full factor base being used? I used the -d 1 flag as Henry suggested.

Sieve time estimate: 600M raw relations at 33bit @ 0.11 or 0.12 sec/rel -> 1.2 Megaminutes single-core -> ~7 months on quad-core desktop. I'll contribute up to a quad-core-month. If the 15f siever generates a higher duplicate rate at low Q, my estimates are low. But, we may find a 5% better poly still!
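
The arithmetic behind that estimate, checked in the shell:

[CODE]$ echo $(( 600000000 * 12 / 100 / 60 ))  # 600M rels at 0.12 sec/rel, in minutes
1200000
# 1.2M core-minutes / 4 cores / 1440 min per day ~= 208 days, i.e. ~7 months[/CODE]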

henryzz 2015-03-11 10:13

[QUOTE=VBCurtis;397413]Rich-
This new one, score 3.69, sieves just worse than the 3.64 find from last week. Both are 7-10% worse than the 3.51. 3 days on my GPU turned up a 3.05 and 3.01 as best, so I'll let you find the good ones and just keep test-sieving what you provide. One of the 3.6's "should" sieve 4-6% better than the 3.51 (if it sieves as well compared to its score as the 3.51 does). If we find such a thing, I think we could stop the poly search at that time.

I've now tested the f siever- the binary works! For 15e/33 bit, searching low Q values has a time per relation equal to 15e at its best q (right around half of alim, so 90M for my tests with the poly scored at 3.51e-14). I haven't done enough testing to find where the f and e have the same time per relation, but something like using f from 5M to 60M and e from 60M to 160M should provide enough relations. Yield is better with f, I guess due to the full factor base being used? I used the -d 1 flag as Henry suggested.

Sieve time estimate: 600M raw relations at 33bit @ 0.11 or 0.12 sec/rel -> 1.2 Megaminutes single-core -> ~7 months on quad-core desktop. I'll contribute up to a quad-core-month. If the 15f siever generates a higher duplicate rate at low Q, my estimates are low. But, we may find a 5% better poly still![/QUOTE]
I have been finding some numbers are better for the f siever than others. Different fb bounds might be better at very low q.

fivemack 2015-03-11 16:17

Since the sieving probably won't take more than 100 wall-clock days, it's not worth more than 5 more days of polynomial-searching to try to find an unlikely 5%-better polynomial.

(I will not have any spare cycles until the beginning of April, but at the beginning of April I will be able to put up to four modern quad-cores on the job)

ryanp 2015-03-30 14:14

I hope nobody has started sieving on this yet. I decided to tackle it:

[url]http://factordb.com/index.php?id=1100000000670257174[/url]

Mini-Geek 2015-03-30 14:38

[QUOTE=ryanp;398929]I hope nobody has started sieving on this yet. I decided to tackle it:

[url]http://factordb.com/index.php?id=1100000000670257174[/url][/QUOTE]

Check the link again, it is already factored. I assume someone submitted the factors between your post and now.

VictordeHolland 2015-03-30 15:41

[QUOTE=Mini-Geek;398930]it is already factored.[/QUOTE]
He meant it as a report that the number is now factored, not as a reservation ;).

rajula 2015-03-30 15:41

[QUOTE=Mini-Geek;398930]Check the link again, it is already factored. I assume someone submitted the factors between your post and now.[/QUOTE]

I suppose that someone was ryanp.

Mini-Geek 2015-03-30 16:59

[QUOTE=VictordeHolland;398932]He meant it as a report that the number is now factored, not as a reservation ;).[/QUOTE]

[QUOTE=rajula;398933]I suppose that someone was ryanp.[/QUOTE]

:redface: I read his post as saying just the opposite.

VBCurtis 2015-03-30 20:04

It picked up a 7, yuck. Thanks, Ryan!

Could you post the log, or at least the parameters you chose for this?

RichD 2015-03-30 22:28

C194 @ i5236
 
Yuck, yay!
It lost the 7.
The first decrease in years ??

Batalov 2015-03-31 00:51

It has ~t50 (here), and most likely Ryan already grilled it well, maybe a t55.
Best to run 3e8 curves on it now. Or 110e6 on smaller computers, if you wish.

Dubslow 2015-03-31 01:04

Was 5234 a lucky ECM hit or just a whole bunch of hardware to gnfs a C139 in only ~6 hours? I imagine that's well within Ryan's capabilities...?

Batalov 2015-03-31 01:10

[QUOTE=Dubslow;398981]Was 5234 a lucky ECM hit or just a whole bunch of hardware to gnfs a C139 in only ~6 hours? I imagine that's well within Ryan's capabilities...?[/QUOTE]
Well within.

EDIT: I've run about 7000 3e8 curves sometime earlier on the c194.

EDIT2: And 12,000 860e6 curves. No factor.

RichD 2015-04-16 20:30

c194 @ i5236
 
Some poly searching, but not very successful.
Searched leading coeff. 400K-1500K.
The best so far:

[CODE]# expecting poly E from 1.22e-14 to > 1.41e-14
R0: -28802679190347950640622961076905085992
R1: 407302386245641081
A0: 133724758664977289938633386515649724519728371415
A1: 3087354419180727365131065771008983083315
A2: 8198257518534189800659473133083
A3: -91339106683419659560699
A4: -1417256419326482
A5: 683400
skew 211980820.13, size 5.400e-19, alpha -7.736, combined = 1.169e-14 rroots = 3[/CODE]

Others were:
1.069e-14
1.058e-14
1.055e-14
1.055e-14 (two different ones)

I will look further as time permits.

RichD 2015-04-23 17:17

I expanded my range to 100-1800K and found only one score better than the previously mentioned one.
[CODE]R0: -41022789855819100446245236901416664699
R1: 1019636817060566507
A0: 384112992671117638711020950081661326068804342572
A1: 1976467057936727996025149999228597147176
A2: -78157658069982024722140360208107
A3: -295747804241547935485310
A4: 945198674876430
A5: 116604
skew 316580135.19, size 5.565e-19, alpha -7.880, combined = 1.188e-14 rroots = 5[/CODE]

More when time permits.

VBCurtis 2015-04-23 21:49

I'll test-sieve these two, as well as any others posted with a score higher than 1.15e-14. I'll try a GPU-day or so on poly search myself, mostly to get a sense of how rare/nice your finds are.

I assume 16e/33 is best, but I'll try 15e/34 and 16e/34 for posterity.

RichD 2015-04-27 15:27

C194
 
I've searched 0-2.2M and nothing better has surfaced. Is it worthwhile to do a little more, since we are not in the expected range?

I have no experience with numbers this big, so I wouldn't know what parameters would be best. I can assist with the polynomial selection and help with sieving. Someone else would have to coordinate this effort.

VBCurtis 2015-04-27 17:20

[QUOTE=RichD;401026]I've searched 0-2.2M and nothing better has surfaced. Is it worthwhile to do a little more, since we are not in the expected range?

I have no experience with numbers this big, so I wouldn't know what parameters would be best. I can assist with the polynomial selection and help with sieving. Someone else would have to coordinate this effort.[/QUOTE]

Yes. The big guns who have previously done numbers this big usually search with coefficient 50M or more, to reduce the skew a little while not changing the [expected] score. However, the best-scoring polys seem to have lower coeffs, so I don't think you're wasting effort searching the low coeff space.
Rule of thumb for poly select is to spend a minimum of 3% of the expected project length on poly searching; for a C150, that's around a day, so for C194 that's more days than you and I care to spend (say, a GPU-year). So, we keep searching. Perhaps if we find a nice-enough polynomial, Ryan will do the heavy lifting again for us.
I'll have a GPU free in a day or two, and I'll start searching at 40M. I haven't trial-sieved your first two polys yet.

VBCurtis 2015-04-29 06:53

[QUOTE=VBCurtis;400747]I'll test-sieve these two, as well as any others posted with a score higher than 1.15e-14. I'll try a GPU-day or so on poly search myself, mostly to get a sense of how rare/nice your finds are.

I assume 16e/33 is best, but I'll try 15e/34 and 16e/34 for posterity.[/QUOTE]

Posterity is useless, since I have no idea how many relations 34LP would need vs 33 for this job.

The first poly you posted, a5 = 683400, sieves 10% or so faster than the second poly a5 = 116604, after a single 1k test region at Q=50M. I'll test a few more regions to confirm. I picked alim=400M, without any good reason.

henryzz 2015-04-29 08:36

1.7-2x the number of relations depending on the size of number.

VBCurtis 2015-04-30 00:45

[QUOTE=henryzz;401219]1.7-2x the number of relations depending on the size of number.[/QUOTE]

Exactly. So, if test-sieving at 34LP finds relations 80% faster than at 33, and we need somewhere between 70% more and 90% more, we can conclude.... nothing. This is exactly the case for my small test - 1.82 sec/rel at 33 bit vs 1.02 at 34 bit (on an i7 laptop, about 1/3rd the speed of most desktops).

Well, I suppose there is one useful conclusion: Since it's not obvious that 34 is faster, the doubling of data to manage trumps the unknown improvement in sieve effort.
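
The worked comparison, for clarity (relations per unit time scale as the inverse of sec/rel):

[CODE]1.82 / 1.02 ~= 1.78x more relations per second at 34-bit
1.70x - 2.00x more relations needed at 34-bit
=> 1.78 falls inside the window: a dead heat[/CODE]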

VBCurtis 2015-05-05 22:06

4 GPU-days have turned up a 9.56e-15 and a 9.40e-15 poly, not competitive with Rich's finds. I'll keep looking.

I have covered 50 to 50.2M and 40 to 40.5M with stage 1 norm 2.1e28. This selection leaves 9 pieces for each coeff, meaning anyone else who has a go at the same ranges I chose has a 1/9 chance of duplicating work. Not that there's any incentive to search the same range...

RichD 2015-05-09 04:14

Nothing better from 0-3M.
Third best is 1.1.03e-14.
I'm using stage1_norm=1e28 at these low values.
Takes about 8 hours on GTX 460 per 100K leading coeff.
Continuing whenever I get a chance.

RichD 2015-05-27 04:00

Nothing better to 4.5M leading coefficient.
I should have more time on a GPU in the coming days.
I may reach 10M within a couple weeks.

RichD 2015-05-27 15:14

c194 @ i5236
 
Overnight, a new number two popped out.
Still not in the desired range.
[CODE]R0: -19590035041723157218547011054582449403
R1: 975334194934990309
A0: 24137781628053720561203589791084388125707564416
A1: 3729890152208266331360779006510859657032
A2: -64245986828706004956243043613430
A3: -130509958297049266684611
A4: 2223170352185530
A5: 4695300
skew 164421929.49, size 5.428e-19, alpha -7.477, combined = 1.176e-14 rroots = 5[/CODE]

RichD 2015-06-04 15:48

Passing through 7M (A5:) with nothing better than previously mentioned.

firejuggler 2015-06-04 17:05

Rich, remind me how to run msieve and I'll try to find something.
Since it's been a year and a half since I last ran a custom poly selection, I have forgotten almost everything.
But I didn't forget this thread:
[url]http://mersenneforum.org/showthread.php?t=18368&highlight=msieve&page=42[/url]

RichD 2015-06-06 14:02

Thanks. According to my notes I perform 4 steps. There is further optimization using stage2_norm but I haven't researched it yet.

[CODE]1. msieve -g 0 -t 3 -np1 "stage1_norm=2e28 20000000-21000000" -nps -v
2. sort -g -k 10 msieve.dat.ms | head -<num> > <tempfile>
3. mv <tempfile> msieve.dat.ms
4. msieve -g 0 -npr[/CODE]

Place the "n" number in the worktodo.ini file.
Step 1 has the norm picked for this number.
The X,Y is the range for the leading coefficient (20M-21M).
This step takes hours to run.
I can run a 100K interval on a GTX 460 in less than 8 hours.

In step 2, the -k 10 is for a degree-5 polynomial.
(-k 11 is used for degree 6.)
The <num> is the number of the day, i.e. how many records to keep.
I usually pick anything from 30 to 50.
This step takes a few seconds.

Step 3, the new <tempfile> replaces the old msieve.dat.ms for the last step.

Step 4 takes around 10 minutes depending on the number of records saved from the sort.

VBCurtis has a good intro in this [url="http://www.mersenneforum.org/showpost.php?p=403501&postcount=107"]post[/url].

firejuggler 2015-06-06 15:01

OK, I'll try to find something, quick and dirty. I'm going on a trip from Monday morning to the 21st, so I'll probably post something Monday and will resume the search after my return (if you are still looking for a good poly).

