mersenneforum.org (https://www.mersenneforum.org/index.php)
-   Aliquot Sequences (https://www.mersenneforum.org/forumdisplay.php?f=90)
-   -   Reserved for MF - Sequence 3408 (https://www.mersenneforum.org/showthread.php?t=18421)

EdH 2016-09-28 13:11

[QUOTE=schickel;443663]...
Probably best to announce any large jobs you run before you do them so you or someone else does not waste significant time/resources.[/QUOTE]
Thanks. I'll throw some ECM (probably via YAFU) at the c171, unless/until I see something else here.

VBCurtis 2016-09-28 16:34

Let us know when you reach t45; I'll help out with some big-ish-bound curves in a week or so.

unconnected 2016-09-28 17:17

I already did t45 on C171.

yoyo 2016-09-28 17:28

If wanted I can direct some Minions to do some curves.

EdH 2016-09-28 18:21

I guess one of my machines did crack it:
[code]
***factors found***

P47 = 26318176137777902384669157593427010473990374299
*****
P124 = 4031021402702531602755343421673240452207392583418773356798754475389384041495840400832779434985343339179665261394547711603629
*****
[/code]c181 now...

edit: For those interested in more info:

ECM via YAFU, driven by ali.pl, doing 2350 curves with B1=3M and B2 at the gmp-ecm default.
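
For reference, a minimal sketch (not from the thread) of a roughly equivalent run made directly with GMP-ECM instead of through YAFU/ali.pl; "c171.txt" is a hypothetical file holding the composite.
[code]
# 2350 curves at B1=3e6, leaving B2 at the gmp-ecm default
ecm -c 2350 3e6 < c171.txt
[/code]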

EdH 2016-09-28 19:30

Another one found:
[code]
prp19 = 7680393443200812877
[/code]now: c162

EdH 2016-09-30 02:09

Rough calculations across all my machines working on the c162 put me at about:
[code]
740 @ 11e3
2140 @ 5e4
4300 @ 25e4
9040 @ 1e6
23500 @ 3e6
6500 @ 11e6
800 @ 43e6
100 @ 11e7
[/code]Questions:

Were the grossly overdone (10x) smaller B1 values wasted?

Is there any reason to let the machines that are performing the YAFU ECM steps continue with the 11e6 work?

Is 7500 a good figure for 43e6?

Is 7000 a good figure for 11e7?

How do I tell what t value I'm currently at?

What t value should I strive for?

Thanks!

LaurV 2016-09-30 02:37

[QUOTE=EdH;443869]
Is there any reason to let the machines that are performing the YAFU ECM steps continue with the 11e6 work?
[/QUOTE]
Not really... But you may get lucky...

You can put "v=1" (no quotes) in yafu.ini to see info about expected t values and all the "verbose" stuff. It helps a lot, but it slows yafu a bit (below 1%).

For the future, if you have more computers doing the same job it is better to use "plan=custom" and supply a higher ecm percent for only one computer, but a lower one (or even none; you can use aliqueit with the -e switch, or -noecm for yafu, etc.) for the others. If you have different levels of ecm, some computers will finish much faster and will switch to poly selection. When the ECM finishes on the last one, you will already have one good poly. You can check which poly is best, copy it to the other machines, then resume with -e. It looks like a lot of work, but this way you save a lot of time on the poly selection, if you don't do that by GPU. If you do it by GPU, then it does not matter; that is much faster anyhow.
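
A minimal sketch of those two yafu.ini tweaks; "v=1" and "plan=custom" are quoted from this post, while "pretest_ratio" is only my assumption for the option that sets the custom ECM depth, so check your YAFU documentation before relying on it.
[code]
# append to yafu.ini ("pretest_ratio" is an assumed option name, not from the thread)
cat >> yafu.ini <<'EOF'
v=1
plan=custom
pretest_ratio=0.31
EOF
[/code]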

Also, if you have many old machines running 32-bit XP there, be aware that yafu's ecm is much slower on 32-bit machines, about half the speed. You should try to run ecm on a 64-bit OS.

EdH 2016-09-30 02:51

[QUOTE=LaurV;443875]Not really... But you may get lucky...

You can put "v=1" (no quotes) in yafu.ini to see info about expected t values and all the "verbose" stuff. It helps a lot, but it slows yafu a bit (below 1%).

For the future, if you have more computers doing the same job it is better to use "plan=custom" and supply a higher ecm percent for only one computer, but a lower one (or even none; you can use aliqueit with the -e switch, or -noecm for yafu, etc.) for the others. If you have different levels of ecm, some computers will finish much faster and will switch to poly selection. When the ECM finishes on the last one, you will already have one good poly. You can check which poly is best, copy it to the other machines, then resume with -e. It looks like a lot of work, but this way you save a lot of time on the poly selection, if you don't do that by GPU. If you do it by GPU, then it does not matter; that is much faster anyhow.

Also, if you have many old machines running 32-bit XP there, be aware that yafu's ecm is much slower on 32-bit machines, about half the speed. You should try to run ecm on a 64-bit OS.[/QUOTE]
Thanks! I actually used YAFU because it was quick to engage. I plan to use a script that will assign ecm.py tasks to all the available machines in smaller chunks and then automatically step to the next B1 after the predetermined number of runs have been assigned. That's how I used to run Aliqueit a couple years ago: one machine ran Aliqueit and I had scripts that caught the Aliqueit calls and spread the ECM and gnfs tasks out to the machines that were available. I'm in the process of resurrecting that method for ecm.py and other things.
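
Not Ed's actual script, just an illustrative pattern for the idea he describes, using plain gmp-ecm over ssh rather than ecm.py; host names, paths, and curve counts are all hypothetical.
[code]
#!/bin/bash
# hand each machine a fixed chunk of curves at one B1, then wait before moving on
B1=3e6
CHUNK=300                       # curves per machine at this B1
for host in node1 node2 node3; do
    ssh "$host" "cd ~/ecm-work && ecm -c $CHUNK $B1 < composite.txt > curves_${B1}.log" &
done
wait                            # all chunks assigned and finished; step up to the next B1
[/code]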

All my 32-bit machines are dormant. The current machines, although old, are at least 64-bit and multi-core. They are all running linux.

I do have two NVidia cards that worked way back when, but they are too ancient to run with anything current.

VBCurtis 2016-09-30 05:34

[QUOTE=EdH;443869]Were the grossly overdone (10x) smaller B1 values wasted?
How do I tell what t value I'm currently at?

What t value should I strive for?

Thanks![/QUOTE]

An old rule of thumb for GNFS jobs was to ECM to a t-value of 0.31 * input size. For this C162, that's 50.22. More recent bayesian estimates for ECM effort have shown this to be a bit too much effort, so something in the vicinity of a t50 will suffice.

Invoking ecm -v with a B1 bound will show you how many curves at that bound are needed for a variety of t-levels. For instance, ECM 7 indicates that 24,000 curves at 3e6 are equivalent to a t45 (the level usually run by curves at 11e6), and 240,000 are a t50. At 11e6, 39500 curves are a t50. So, your 3e6 curves are about 10% of a t50, while your 11e6 curves are 16% of a t50, and your 43e6 curves are worth 9% of a t50 (8700 curves, again using ECM 7.0). As for "wasted", overkill on smaller curves is an inefficient way to find largish factors, but there is definitely a chance to do so; so compared with a fastest-plan, the super-extra-3M curve count perhaps wasted 20 or 30% of the computrons spent beyond the usual t40 number of curves.
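
As a concrete sketch (not from the thread) of the arithmetic just described: the -v flag prints gmp-ecm's expected-curves table for whatever B1 you give it, and the fractional t50 sums the curves run divided by the t50 counts quoted above. "number.txt" is a hypothetical file holding the composite.
[code]
# show the expected-curves table for B1=3e6 while running a single curve
ecm -v -c 1 3e6 < number.txt

# fraction of a t50 completed: curves run / t50 curve count at each B1
# (240000 @ 3e6, 39500 @ 11e6, 8700 @ 43e6, per the figures above)
echo "scale=3; 23500/240000 + 6500/39500 + 800/8700" | bc   # roughly .35
[/code]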

Adding these up, you've done just over 1/3rd of a t50, so another 5000 or 6000 curves at 43e6 would be enough to justify proceeding to GNFS. Note that "enough" is both a rough and broad optimum; some folks feel strong regret if GNFS turns up a factor that ECM "could have" or even "should have" found; those people should likely do a bit more ECM to reduce incidence of that regret.

fivemack wrote a bayesian-analysis tool for ECM, taking already-run curves as input and outputting what curves should be run before GNFS. Alas, I can't find the thread presently.

EdH 2016-09-30 12:54

Thank you, VBCurtis. I think I now have some understanding, but let's see if I've caught one thing correctly. The "Adding these up" is referring to the percentages? In which case, the just over 1/3 is because 10% + 16% + 9% = 35%? And the efficiency is determined by how long it takes to run different B1 curves? Are you also saying that I should run a particular B1 to the t40 level and then move up to the next B1 for best efficiency?

I would like to see that analysis tool, if you happen to find it. (Or, if someone else knows where it is.)

The last questions (for now) would be to all:

How do we know if someone is running ECM on the current composite and what the total t-level may be across all work? Or, does it matter?

henryzz 2016-09-30 13:18

Something that came out of that Bayesian approach was that it is best to run very few of the smaller curves and then finish off with larger curves. For up to a t50, something like 5-10% of the curves at the t20-t50 levels, then finishing up the t50 with curves at the t55 level, would be better. I will find the thread next time I am on a PC rather than a tablet. The mainstream approach wastes a lot of time.

Edit:[url]http://mersenneforum.org/showthread.php?t=21420[/url]

VBCurtis 2016-09-30 15:42

[QUOTE=EdH;443901]Thank you, VBCurtis. I think I now have some understanding, but let's see if I've caught one thing correctly. The "Adding these up" is referring to the percentages? In which case, the just over 1/3 is because 10% + 16% + 9% = 35%? And the efficiency is determined by how long it takes to run different B1 curves? Are you also saying that I should run a particular B1 to the t40 level and then move up to the next B1 for best efficiency?

I would like to see that analysis tool, if you happen to find it. (Or, if someone else knows where it is.)

The last questions (for now) would be to all:

How do we know if someone is running ECM on the current composite and what the total t-level may be across all work? Or, does it matter?[/QUOTE]

Nobody tracks how much ECM has been done on a number; that's why we announce plans/curves completed on the forum for publicly-interesting numbers.

Yes to adding the percentages. Usually, so few curves are done at a small level that the contribution of, say, B1=3M curves to a t50 is so small as to be ignored. In your case, you did so many that the percentage was worth noting.

My experiments toward minimizing time to find a factor of unknown size led me to run half the number of curves for a t-level before moving up to the next B1 size; I might run 500 at 1M before moving to 3M, instead of 900. Henry's experience with the bayesian tool suggests even less than that; both my heuristic and the bayesian tool fly in the face of RDS' published papers from the old days, which were the source of the traditional method.

To be clear, the traditional method is what you summarized: Complete curves at B1=3M sufficient for a t40 (according to help documentation or the -v flag of ECM), then move to 11M and run a number of curves equal to t45, etc.
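
Written out as plain gmp-ecm commands, the traditional ladder looks roughly like the sketch below; the curve counts are the ballpark figures quoted elsewhere in this thread for the default B2, should be re-checked against the -v output of your own build, and "number.txt" is again a hypothetical input file.
[code]
ecm -c 2350 3e6  < number.txt   # roughly a t40 at B1=3M
ecm -c 4500 11e6 < number.txt   # then roughly a t45 at B1=11M
ecm -c 7500 43e6 < number.txt   # then roughly a t50 at B1=43M
[/code]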

I use "efficiency" to mean "best chance to find a factor per computron". I did a bunch of messing about with -v curve counts, k-values that determine B2 size, etc, and I think I gained a few percentage points of efficiency by using different B1 values than standard. The folks who actually know what they're doing like to remind us that the optimal settings are a very broad maximum, and it hardly matters what we choose so long as we don't use redundantly large numbers of curves with small B1.

Dubslow 2016-09-30 15:48

I wouldn't go so far as to call it "fly in the face" of the old paper. It's quite accurate. For the problem it states, it gives the optimal solution. The problem we're trying to solve isn't necessarily the same as the problem solved in the paper.

EdH 2016-09-30 16:01

Thanks for all the help. I'm studying the link, but must confess to not really understanding it yet.

My next dilemma, though, is that I have several instances of ali.pl running that have all completed the 2350 curves at 3e6. But since the perl-based scripts don't provide all the intermediate info from YAFU, I can't tell how many curves have been completed at 11e6 by any of those machines.

It seems like a bit of work, but would it seem reasonable that I could come up with a fair estimate by canceling YAFU, running ECM for one curve and then dividing the times? Or, is there an easier method to determine the count?

EdH 2016-09-30 16:40

[QUOTE=EdH;443926]...
It seems like a bit of work, but would it seem reasonable that I could come up with a fair estimate by canceling YAFU, running ECM for one curve and then dividing the times?
...
[/QUOTE]Putting this to a test on one of my machines did not yield encouraging results...

henryzz 2016-09-30 18:32

Now that I have the script in front of me I can say that you need to do a further 4000 at 110e6 and 300-1600 at 850e6 depending upon the ratio between ecm speed and nfs speed on your pcs. Anything below 110e6 is inefficient now according to the script. Even with nothing done it only recommends 10@3e6, 100@11e6 and 100@43e6 below 110e6.

EdH 2016-09-30 22:34

[QUOTE=henryzz;443945]Now that I have the script in front of me I can say that you need to do a further 4000 at 110e6 and 300-1600 at 850e6 depending upon the ratio between ecm speed and nfs speed on your pcs. Anything below 110e6 is inefficient now according to the script. Even with nothing done it only recommends 10@3e6, 100@11e6 and 100@43e6 below 110e6.[/QUOTE]
Thanks! I have moved all my machines to 110e6. I currently have a little over 1350 curves done.

Is "11e7" improper or less readable?

I need to learn how to run/read the script. Off to do that...

henryzz 2016-09-30 22:57

[QUOTE=EdH;443954]Thanks! I have moved all my machines to 110e6. I currently have a little over 1350 curves done.

Is "11e7" improper or less readable?

I need to learn how to run/read the script. Off to do that...[/QUOTE]

The script uses 110e6 because it only uses e3, e6, e9, etc. I was just being consistent.
The script still has issues, although it is possible to get answers that are fairly close to what is probably correct. Post any issues in that thread and I will try to help you get set up.

RichD 2016-10-02 02:46

In case it goes to GNFS, here is a poly to consider for the C162.
[CODE]N: 273160738474933720738173971888648839004631254638334337328524154446360567876622758785071571728319901973105785355719274809547476812666409616142295140388748295147911
# expecting poly E from 1.03e-12 to > 1.18e-12
R0: -12041698219452787828222430533215
R1: 3878686267217897
A0: -801873829760148253152787721272602612832
A1: 2224566967366623497028146775700034
A2: -80044704608077672201472505
A3: -160143293651048845430
A4: 424445477332
A5: 1078896
skew 6380639.92, size 9.540e-16, alpha -7.595, combined = 1.102e-12 rroots = 5[/CODE]

EdH 2016-10-02 14:34

I've now got 5248@11e7 and 161@85e7

It's probably near time to move on...

henryzz 2016-10-02 14:46

[QUOTE=EdH;444053]I've now got 5248@11e7 and 161@85e7

It's probably near time to move on...[/QUOTE]

Yes probably. How long do you estimate for nfs?

EdH 2016-10-02 16:16

[QUOTE=henryzz;444056]Yes probably. How long do you estimate for nfs?[/QUOTE]
Not sure if you mean wall time, but I think it would take me about two weeks to complete. I might need to use my cluster project to meet the RAM requirements for LA.

I could be off a bunch all around, though.

Is anyone else interested?

EdH 2016-10-03 02:02

I started moving my machines over to sieving with RichD's poly. I'll see what might turn up with the others as I swap along slowly.

I guess that means I'll run with this one, too...

henryzz 2016-10-03 09:44

I would have offered to do the postprocessing as I just upgraded to 8 GB. Unfortunately, my PC seems to have died (no, it isn't the memory).

pinhodecarlos 2016-10-03 10:10

I'll do the postprocessing. My machine will be free within 2 days (4 cores, 16GB). Just PM me with the relations file location.

EdH 2016-10-03 14:29

[QUOTE=pinhodecarlos;444122]I'll do the postprocessing. My machine will be free within 2 days (4 cores, 16GB). Just PM me with the relations file location.[/QUOTE]Thanks. I expect it to be a few more days before I have enough relations, but I will check back here. I'll probably also see what the LA looks like on my cluster. It might not be too bad.

I'll try to keep everyone updated.

EdH 2016-10-07 16:36

It's been a week and I'm a little less than halfway (~35M unique) to the expected necessary relations. This is actually slower than I had anticipated. If my estimate is also off, this will still be quite a while...

pinhodecarlos 2016-10-07 17:13

[QUOTE=EdH;444475]It's been a week and I'm a little less than halfway (~35M unique) to the expected necessary relations. This is actually slower than I had anticipated. If my estimate is also off, this will still be quite a while...[/QUOTE]

Don't worry, I'll still post-process it. Just don't forget to send me by PM the link to download the relations file.

EdH 2016-10-08 02:23

[QUOTE=pinhodecarlos;444481]Don't worry, I'll still post-process it. Just don't forget to send me by PM the link to download the relations file.[/QUOTE]
OK, I'll start testing for a matrix at about 75M and let you know. I'm still going to try to find out if my cluster could handle it and what ETA it would give me. I would like to see how it compares to yours. If it says it would work and it's not that far off, I'll let you decide which of us finishes it. Thanks.

EdH 2016-10-10 00:52

Well, I have enough relations for a matrix according to msieve, at a little under 50M. My trouble is that I can't get any file hosting service to accept anything of this size. I'm currently trying to get sendspace to accept them, but at 150KB/s, it'll be tomorrow before I can get "all the little pieces" online.:sad: And, yes, I compressed them.

I used split to make smaller parts - cat can sew them back together. I .zipped each one and am trying to upload them to sendspace. I tried some other file hosting places, but nothing else worked.
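
The split-and-rejoin step described above, written out as shell commands; the file names and the 290M piece size (chosen to stay under a picky upload limit) are hypothetical.
[code]
split -b 290M relations.uniq rels.part.            # -> rels.part.aa, rels.part.ab, ...
for f in rels.part.??; do zip "$f.zip" "$f"; done  # one .zip per piece for uploading
# on the receiving end:
for z in rels.part.??.zip; do unzip "$z"; done
cat rels.part.?? > relations.uniq                  # cat sews the pieces back together
[/code]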

On the bright side, if I let my cluster do it, it says ~59 hours. Might be sooner than it will take to transfer the relations and start another machine.

Thoughts?

After-thought: Maybe I should have been sending groups of raw relations all along, rather than waiting to compile a file of unique relations...

Sergei Chernykh 2016-10-10 07:29

[QUOTE=EdH;444659]Well, I have enough relations for a matrix according to msieve, at a little under 50M. My trouble is that I can't get any file hosting service to accept anything of this size. I'm currently trying to get sendspace to accept them, but at 150KB/s, it'll be tomorrow before I can get "all the little pieces" online.:sad: And, yes, I compressed them.[/QUOTE]
What's the size of your files? drive.google.com offers 15 GB of free space for example.

EdH 2016-10-10 13:05

[QUOTE=Sergei Chernykh;444675]What's the size of your files? drive.google.com offers 15 GB of free space for example.[/QUOTE]
My total file is about 2.7G compressed. I also tried breaking it into 2, 4, and 8 pieces. But all the sites that said they would accept it either wouldn't at all, or had some error page show up. I finally went with sendspace, which I have used in the past, but they insist on uploads of less than 300M.

I'll check out drive.google.com.

Thanks...

EdH 2016-10-10 13:09

[QUOTE=pinhodecarlos;444481]Don't worry, I'll still post-process it. Just don't forget to send me by PM the link to download the relations file.[/QUOTE]
PM sent...

pinhodecarlos 2016-10-10 15:12

[QUOTE=EdH;444685]PM sent...[/QUOTE]

Hi Ed,

I'm sorry, but there's some sort of daily download quota on sendspace and I can't download all the files today. I finished the first 4 but am blocked from downloading for the next 8 hours. At this download pace you'd better run the post-processing yourself.
This sendspace rule is so stupid.

Carlos

EDIT: Trying to download the files with different IPs on different machines.

EdH 2016-10-10 15:22

[QUOTE=pinhodecarlos;444698]Hi Ed,

I'm sorry, but there's some sort of daily download quota on sendspace and I can't download all the files today. I finished the first 4 but am blocked from downloading for the next 8 hours. At this download pace you'd better run the post-processing yourself.
This sendspace rule is so stupid.

Carlos[/QUOTE]
Thanks Carlos,

I should have the factors on Wednesday morning from my cluster. I'll do some research and come up with a better method for next time. Even if I found a good place to put them, my upload speed isn't all that great, so it would still be quite a while before they would be available.

Take Care,
Ed

henryzz 2016-10-10 15:27

I have used WeTransfer several times. There is a limit of 2GB but you can do multiple files. Dropbox, Google Drive, OneDrive, etc. are probably the most helpful options for this sort of thing.

pinhodecarlos 2016-10-10 15:31

Ed,

Are the fb and ini files in the zipped files?

EDIT: Do you know how many unique relations you have, so I can guess which target density to use in msieve?
EDIT2: Missing downloads should be completed within 1 hour. Then I guess in 1h30 I will have an ETA for the LA.

EdH 2016-10-10 15:51

[QUOTE=pinhodecarlos;444704]Ed,

Are the fb and ini files in the zipped files?[/QUOTE]
I did not include those. This is the fb file:
[code]
N 273160738474933720738173971888648839004631254638334337328524154446360567876622758785071571728319901973105785355719274809547476812666409616142295140388748295147911
R0 -12041698219452787828222430533215
R1 3878686267217897
A0 -801873829760148253152787721272602612832
A1 2224566967366623497028146775700034
A2 -80044704608077672201472505
A3 -160143293651048845430
A4 424445477332
A5 1078896
SKEW 6380639.92
[/code]This is the number file:
[code]
N 273160738474933720738173971888648839004631254638334337328524154446360567876622758785071571728319901973105785355719274809547476812666409616142295140388748295147911
[/code]I used the following command lines:
[code]
./msieve -i number -s totalRels -l test.log -nf TeamPoly.fb -nc1
[/code][code]
mpirun -n 3 --hostfile ./hostsfile ../msieve/msieve -t 2 -i number -s totalRels -l test.log -nf TeamPoly.fb -nc2 3,1
[/code]
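
For completeness, a hedged sketch of the one msieve stage the commands above stop short of: once the -nc2 linear algebra finishes, -nc3 is msieve's square-root phase, which turns the dependencies into factors. The file names simply mirror the hypothetical ones already shown.
[code]
./msieve -i number -s totalRels -l test.log -nf TeamPoly.fb -nc3
[/code]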

henryzz 2016-10-10 15:52

[url]https://en.wikipedia.org/wiki/Comparison_of_file_hosting_services[/url] compares a load of options.

EdH 2016-10-10 15:56

[QUOTE=henryzz;444702]I have used WeTransfer several times. There is a limit of 2GB but you can do multiple files. Dropbox, Google Drive, OneDrive, etc. are probably the most helpful options for this sort of thing.[/QUOTE]
I will check wetransfer and the others over and see if anything will work. I might already have google drive. I tried filedropper, which is supposed to work up to 5G, but it wouldn't work down as low as 300MB. It did work on a really tiny file. I'll try to be better prepared next time.

henryzz 2016-10-10 15:57

[QUOTE=EdH;444708]I will check wetransfer and the others over and see if anything will work. I might already have google drive. I tried filedropper, which is supposed to work up to 5G, but it wouldn't work down as low as 300MB. It did work on a really tiny file. I'll try to be better prepared next time.[/QUOTE]

Let us know what you discover.

EdH 2016-10-10 15:57

[QUOTE=henryzz;444707][URL]https://en.wikipedia.org/wiki/Comparison_of_file_hosting_services[/URL] compares a load of options.[/QUOTE]
Thanks. This will be quite helpful. I was using a CNet page for info.

pinhodecarlos 2016-10-10 16:54

Ed,

Had a failed download on the c162relsag.zip file, so I'm trying once again.

Carlos

EdH 2016-10-10 18:12

[QUOTE=pinhodecarlos;444712]Ed,

Had a failed download on the c162relsag.zip file, so I'm trying once again.

Carlos[/QUOTE]
Carlos,

I just d/led it with no trouble. I can put it up again, but it will take about 30 minutes to upload...

I started it anyway.

Ed

pinhodecarlos 2016-10-10 18:12

Ed,

LA underway, see log file.

[CODE]
Mon Oct 10 18:26:13 2016
Mon Oct 10 18:26:13 2016
Mon Oct 10 18:26:13 2016 Msieve v. 1.53 (SVN unknown)
Mon Oct 10 18:26:13 2016 random seeds: f6e94dd0 b5f2c9db
Mon Oct 10 18:26:13 2016 factoring 273160738474933720738173971888648839004631254638334337328524154446360567876622758785071571728319901973105785355719274809547476812666409616142295140388748295147911 (162 digits)
Mon Oct 10 18:26:14 2016 searching for 15-digit factors
Mon Oct 10 18:26:14 2016 commencing number field sieve (162-digit input)
Mon Oct 10 18:26:14 2016 R0: -12041698219452787828222430533215
Mon Oct 10 18:26:14 2016 R1: 3878686267217897
Mon Oct 10 18:26:14 2016 A0: -801873829760148253152787721272602612832
Mon Oct 10 18:26:14 2016 A1: 2224566967366623497028146775700034
Mon Oct 10 18:26:14 2016 A2: -80044704608077672201472505
Mon Oct 10 18:26:14 2016 A3: -160143293651048845430
Mon Oct 10 18:26:14 2016 A4: 424445477332
Mon Oct 10 18:26:14 2016 A5: 1078896
Mon Oct 10 18:26:14 2016 skew 1.00, size 9.540e-16, alpha -7.595, combined = 7.338e-15 rroots = 5
Mon Oct 10 18:26:14 2016
Mon Oct 10 18:26:14 2016 commencing relation filtering
Mon Oct 10 18:26:14 2016 setting target matrix density to 120.0
Mon Oct 10 18:26:14 2016 estimated available RAM is 16335.3 MB
Mon Oct 10 18:26:14 2016 commencing duplicate removal, pass 1
Mon Oct 10 18:37:11 2016 found 2088132 hash collisions in 48751631 relations
Mon Oct 10 18:37:58 2016 added 121684 free relations
Mon Oct 10 18:37:58 2016 commencing duplicate removal, pass 2
Mon Oct 10 18:38:14 2016 found 0 duplicates and 48873315 unique relations
Mon Oct 10 18:38:14 2016 memory use: 130.6 MB
Mon Oct 10 18:38:14 2016 reading ideals above 720000
Mon Oct 10 18:38:14 2016 commencing singleton removal, initial pass
Mon Oct 10 18:47:27 2016 memory use: 1378.0 MB
Mon Oct 10 18:47:27 2016 reading all ideals from disk
Mon Oct 10 18:47:28 2016 memory use: 1822.2 MB
Mon Oct 10 18:47:32 2016 keeping 47655605 ideals with weight <= 200, target excess is 260510
Mon Oct 10 18:47:36 2016 commencing in-memory singleton removal
Mon Oct 10 18:47:40 2016 begin with 48873315 relations and 47655605 unique ideals
Mon Oct 10 18:48:15 2016 reduce to 31186925 relations and 28172761 ideals in 13 passes
Mon Oct 10 18:48:15 2016 max relations containing the same ideal: 156
Mon Oct 10 18:48:27 2016 removing 2545192 relations and 2145192 ideals in 400000 cliques
Mon Oct 10 18:48:28 2016 commencing in-memory singleton removal
Mon Oct 10 18:48:30 2016 begin with 28641733 relations and 28172761 unique ideals
Mon Oct 10 18:48:48 2016 reduce to 28489826 relations and 25873166 ideals in 8 passes
Mon Oct 10 18:48:48 2016 max relations containing the same ideal: 146
Mon Oct 10 18:48:59 2016 removing 1935358 relations and 1535358 ideals in 400000 cliques
Mon Oct 10 18:49:00 2016 commencing in-memory singleton removal
Mon Oct 10 18:49:02 2016 begin with 26554468 relations and 25873166 unique ideals
Mon Oct 10 18:49:18 2016 reduce to 26455734 relations and 24237744 ideals in 8 passes
Mon Oct 10 18:49:18 2016 max relations containing the same ideal: 140
Mon Oct 10 18:49:29 2016 removing 1747211 relations and 1347211 ideals in 400000 cliques
Mon Oct 10 18:49:29 2016 commencing in-memory singleton removal
Mon Oct 10 18:49:31 2016 begin with 24708523 relations and 24237744 unique ideals
Mon Oct 10 18:49:46 2016 reduce to 24621400 relations and 22802190 ideals in 8 passes
Mon Oct 10 18:49:46 2016 max relations containing the same ideal: 133
Mon Oct 10 18:49:56 2016 removing 1648731 relations and 1248731 ideals in 400000 cliques
Mon Oct 10 18:49:56 2016 commencing in-memory singleton removal
Mon Oct 10 18:49:58 2016 begin with 22972669 relations and 22802190 unique ideals
Mon Oct 10 18:50:10 2016 reduce to 22889764 relations and 21469396 ideals in 7 passes
Mon Oct 10 18:50:10 2016 max relations containing the same ideal: 130
Mon Oct 10 18:50:19 2016 removing 1587476 relations and 1187476 ideals in 400000 cliques
Mon Oct 10 18:50:20 2016 commencing in-memory singleton removal
Mon Oct 10 18:50:21 2016 begin with 21302288 relations and 21469396 unique ideals
Mon Oct 10 18:50:31 2016 reduce to 21220100 relations and 20198462 ideals in 6 passes
Mon Oct 10 18:50:31 2016 max relations containing the same ideal: 122
Mon Oct 10 18:50:39 2016 removing 1547413 relations and 1147413 ideals in 400000 cliques
Mon Oct 10 18:50:40 2016 commencing in-memory singleton removal
Mon Oct 10 18:50:41 2016 begin with 19672687 relations and 20198462 unique ideals
Mon Oct 10 18:50:50 2016 reduce to 19588091 relations and 18965124 ideals in 6 passes
Mon Oct 10 18:50:50 2016 max relations containing the same ideal: 119
Mon Oct 10 18:50:58 2016 removing 1270445 relations and 949670 ideals in 320775 cliques
Mon Oct 10 18:50:58 2016 commencing in-memory singleton removal
Mon Oct 10 18:50:59 2016 begin with 18317646 relations and 18965124 unique ideals
Mon Oct 10 18:51:08 2016 reduce to 18257629 relations and 17954569 ideals in 6 passes
Mon Oct 10 18:51:08 2016 max relations containing the same ideal: 110
Mon Oct 10 18:51:18 2016 relations with 0 large ideals: 877
Mon Oct 10 18:51:18 2016 relations with 1 large ideals: 1646
Mon Oct 10 18:51:18 2016 relations with 2 large ideals: 26567
Mon Oct 10 18:51:18 2016 relations with 3 large ideals: 217043
Mon Oct 10 18:51:18 2016 relations with 4 large ideals: 971365
Mon Oct 10 18:51:18 2016 relations with 5 large ideals: 2625533
Mon Oct 10 18:51:18 2016 relations with 6 large ideals: 4509870
Mon Oct 10 18:51:18 2016 relations with 7+ large ideals: 9904728
Mon Oct 10 18:51:18 2016 commencing 2-way merge
Mon Oct 10 18:51:28 2016 reduce to 12487624 relation sets and 12184564 unique ideals
Mon Oct 10 18:51:28 2016 commencing full merge
Mon Oct 10 18:56:05 2016 memory use: 1541.7 MB
Mon Oct 10 18:56:07 2016 found 5546445 cycles, need 5540764
Mon Oct 10 18:56:07 2016 weight of 5540764 cycles is about 665556044 (120.12/cycle)
Mon Oct 10 18:56:07 2016 distribution of cycle lengths:
Mon Oct 10 18:56:07 2016 1 relations: 212491
Mon Oct 10 18:56:07 2016 2 relations: 361486
Mon Oct 10 18:56:07 2016 3 relations: 427322
Mon Oct 10 18:56:07 2016 4 relations: 433162
Mon Oct 10 18:56:07 2016 5 relations: 435079
Mon Oct 10 18:56:07 2016 6 relations: 415495
Mon Oct 10 18:56:07 2016 7 relations: 398937
Mon Oct 10 18:56:07 2016 8 relations: 372738
Mon Oct 10 18:56:07 2016 9 relations: 346611
Mon Oct 10 18:56:07 2016 10+ relations: 2137443
Mon Oct 10 18:56:07 2016 heaviest cycle: 28 relations
Mon Oct 10 18:56:08 2016 commencing cycle optimization
Mon Oct 10 18:56:17 2016 start with 48476129 relations
Mon Oct 10 18:58:23 2016 pruned 2348166 relations
Mon Oct 10 18:58:23 2016 memory use: 1260.5 MB
Mon Oct 10 18:58:23 2016 distribution of cycle lengths:
Mon Oct 10 18:58:23 2016 1 relations: 212491
Mon Oct 10 18:58:23 2016 2 relations: 372349
Mon Oct 10 18:58:23 2016 3 relations: 449388
Mon Oct 10 18:58:23 2016 4 relations: 453865
Mon Oct 10 18:58:23 2016 5 relations: 458658
Mon Oct 10 18:58:23 2016 6 relations: 435859
Mon Oct 10 18:58:23 2016 7 relations: 418818
Mon Oct 10 18:58:23 2016 8 relations: 387873
Mon Oct 10 18:58:23 2016 9 relations: 360227
Mon Oct 10 18:58:23 2016 10+ relations: 1991236
Mon Oct 10 18:58:23 2016 heaviest cycle: 28 relations
Mon Oct 10 18:58:30 2016 RelProcTime: 1936
Mon Oct 10 18:58:30 2016
Mon Oct 10 18:58:30 2016 commencing linear algebra
Mon Oct 10 18:58:30 2016 read 5540764 cycles
Mon Oct 10 18:58:42 2016 cycles contain 18087427 unique relations
Mon Oct 10 19:02:16 2016 read 18087427 relations
Mon Oct 10 19:02:58 2016 using 20 quadratic characters above 4294917295
Mon Oct 10 19:04:14 2016 building initial matrix
Mon Oct 10 19:08:41 2016 memory use: 2562.0 MB
Mon Oct 10 19:08:44 2016 read 5540764 cycles
Mon Oct 10 19:08:47 2016 matrix is 5540586 x 5540764 (2610.4 MB) with weight 797510289 (143.94/col)
Mon Oct 10 19:08:47 2016 sparse part has weight 617814500 (111.50/col)
Mon Oct 10 19:10:01 2016 filtering completed in 2 passes
Mon Oct 10 19:10:03 2016 matrix is 5540551 x 5540729 (2610.4 MB) with weight 797508771 (143.94/col)
Mon Oct 10 19:10:03 2016 sparse part has weight 617814212 (111.50/col)
Mon Oct 10 19:10:24 2016 matrix starts at (0, 0)
Mon Oct 10 19:10:26 2016 matrix is 5540551 x 5540729 (2610.4 MB) with weight 797508771 (143.94/col)
Mon Oct 10 19:10:26 2016 sparse part has weight 617814212 (111.50/col)
Mon Oct 10 19:10:26 2016 saving the first 48 matrix rows for later
Mon Oct 10 19:10:27 2016 matrix includes 64 packed rows
Mon Oct 10 19:10:29 2016 matrix is 5540503 x 5540729 (2535.5 MB) with weight 680102432 (122.75/col)
Mon Oct 10 19:10:29 2016 sparse part has weight 609249244 (109.96/col)
Mon Oct 10 19:10:29 2016 using block size 8192 and superblock size 589824 for processor cache size 6144 kB
Mon Oct 10 19:11:08 2016 commencing Lanczos iteration (4 threads)
Mon Oct 10 19:11:08 2016 memory use: 2099.2 MB
Mon Oct 10 19:11:32 2016 linear algebra at 0.0%, ETA [B]23h20m[/B]
Mon Oct 10 19:11:40 2016 checkpointing every 240000 dimensions

[/CODE]

Carlos

EdH 2016-10-10 18:20

[QUOTE=pinhodecarlos;444721]Ed,

LA underway, see log file.

...

Carlos[/QUOTE]
Excellent! Carlos,

I stopped the duplicate upload.

Thanks,
Ed

Dubslow 2016-10-11 08:34

On the off-topic of large file transfer, has anyone ever experimented with torrent? It suffers from the primary non-third-party disadvantage that both parties involved have to be online at the same time, but what may-or-may-not (testing required!) make it better is that, after setup, it could be run in the background with trivial pause and resume capabilities so that either party may go offline at any time for any reason and when both are online again, the download will silently resume wherever it was. Twould be effectively fire and forget à la FTP, with no servers or secondary interaction required.

Tis so far only a thought experiment of mine (in relation to sending my brother ~150 GB of data I've had backed up for him for years, for which he only recently acquired the space to keep it himself -- [URL="https://what-if.xkcd.com/31/"]snail mail[/URL] is the only other alternative I see for now, and that's expensive), but it seems promising to me.

pinhodecarlos 2016-10-11 08:45

[QUOTE=Dubslow;444771]On the off-topic of large file transfer, has anyone ever experimented with torrent? [/QUOTE]

Yes, I have: more than 500 GB, split into files of 7 GB.
eMule can also be used for this purpose.

pinhodecarlos 2016-10-11 18:21

1 Attachment(s)
Ed,

Done.

[CODE]
Mon Oct 10 18:26:13 2016
Mon Oct 10 18:26:13 2016
Mon Oct 10 18:26:13 2016 Msieve v. 1.53 (SVN unknown)
Mon Oct 10 18:26:13 2016 random seeds: f6e94dd0 b5f2c9db
Mon Oct 10 18:26:13 2016 factoring 273160738474933720738173971888648839004631254638334337328524154446360567876622758785071571728319901973105785355719274809547476812666409616142295140388748295147911 (162 digits)
Mon Oct 10 18:26:14 2016 searching for 15-digit factors
Mon Oct 10 18:26:14 2016 commencing number field sieve (162-digit input)
Mon Oct 10 18:26:14 2016 R0: -12041698219452787828222430533215
Mon Oct 10 18:26:14 2016 R1: 3878686267217897
Mon Oct 10 18:26:14 2016 A0: -801873829760148253152787721272602612832
Mon Oct 10 18:26:14 2016 A1: 2224566967366623497028146775700034
Mon Oct 10 18:26:14 2016 A2: -80044704608077672201472505
Mon Oct 10 18:26:14 2016 A3: -160143293651048845430
Mon Oct 10 18:26:14 2016 A4: 424445477332
Mon Oct 10 18:26:14 2016 A5: 1078896
Mon Oct 10 18:26:14 2016 skew 1.00, size 9.540e-16, alpha -7.595, combined = 7.338e-15 rroots = 5
Mon Oct 10 18:26:14 2016
Mon Oct 10 18:26:14 2016 commencing relation filtering
Mon Oct 10 18:26:14 2016 setting target matrix density to 120.0
Mon Oct 10 18:26:14 2016 estimated available RAM is 16335.3 MB
Mon Oct 10 18:26:14 2016 commencing duplicate removal, pass 1
Mon Oct 10 18:37:11 2016 found 2088132 hash collisions in 48751631 relations
Mon Oct 10 18:37:58 2016 added 121684 free relations
Mon Oct 10 18:37:58 2016 commencing duplicate removal, pass 2
Mon Oct 10 18:38:14 2016 found 0 duplicates and 48873315 unique relations
Mon Oct 10 18:38:14 2016 memory use: 130.6 MB
Mon Oct 10 18:38:14 2016 reading ideals above 720000
Mon Oct 10 18:38:14 2016 commencing singleton removal, initial pass
Mon Oct 10 18:47:27 2016 memory use: 1378.0 MB
Mon Oct 10 18:47:27 2016 reading all ideals from disk
Mon Oct 10 18:47:28 2016 memory use: 1822.2 MB
Mon Oct 10 18:47:32 2016 keeping 47655605 ideals with weight <= 200, target excess is 260510
Mon Oct 10 18:47:36 2016 commencing in-memory singleton removal
Mon Oct 10 18:47:40 2016 begin with 48873315 relations and 47655605 unique ideals
Mon Oct 10 18:48:15 2016 reduce to 31186925 relations and 28172761 ideals in 13 passes
Mon Oct 10 18:48:15 2016 max relations containing the same ideal: 156
Mon Oct 10 18:48:27 2016 removing 2545192 relations and 2145192 ideals in 400000 cliques
Mon Oct 10 18:48:28 2016 commencing in-memory singleton removal
Mon Oct 10 18:48:30 2016 begin with 28641733 relations and 28172761 unique ideals
Mon Oct 10 18:48:48 2016 reduce to 28489826 relations and 25873166 ideals in 8 passes
Mon Oct 10 18:48:48 2016 max relations containing the same ideal: 146
Mon Oct 10 18:48:59 2016 removing 1935358 relations and 1535358 ideals in 400000 cliques
Mon Oct 10 18:49:00 2016 commencing in-memory singleton removal
Mon Oct 10 18:49:02 2016 begin with 26554468 relations and 25873166 unique ideals
Mon Oct 10 18:49:18 2016 reduce to 26455734 relations and 24237744 ideals in 8 passes
Mon Oct 10 18:49:18 2016 max relations containing the same ideal: 140
Mon Oct 10 18:49:29 2016 removing 1747211 relations and 1347211 ideals in 400000 cliques
Mon Oct 10 18:49:29 2016 commencing in-memory singleton removal
Mon Oct 10 18:49:31 2016 begin with 24708523 relations and 24237744 unique ideals
Mon Oct 10 18:49:46 2016 reduce to 24621400 relations and 22802190 ideals in 8 passes
Mon Oct 10 18:49:46 2016 max relations containing the same ideal: 133
Mon Oct 10 18:49:56 2016 removing 1648731 relations and 1248731 ideals in 400000 cliques
Mon Oct 10 18:49:56 2016 commencing in-memory singleton removal
Mon Oct 10 18:49:58 2016 begin with 22972669 relations and 22802190 unique ideals
Mon Oct 10 18:50:10 2016 reduce to 22889764 relations and 21469396 ideals in 7 passes
Mon Oct 10 18:50:10 2016 max relations containing the same ideal: 130
Mon Oct 10 18:50:19 2016 removing 1587476 relations and 1187476 ideals in 400000 cliques
Mon Oct 10 18:50:20 2016 commencing in-memory singleton removal
Mon Oct 10 18:50:21 2016 begin with 21302288 relations and 21469396 unique ideals
Mon Oct 10 18:50:31 2016 reduce to 21220100 relations and 20198462 ideals in 6 passes
Mon Oct 10 18:50:31 2016 max relations containing the same ideal: 122
Mon Oct 10 18:50:39 2016 removing 1547413 relations and 1147413 ideals in 400000 cliques
Mon Oct 10 18:50:40 2016 commencing in-memory singleton removal
Mon Oct 10 18:50:41 2016 begin with 19672687 relations and 20198462 unique ideals
Mon Oct 10 18:50:50 2016 reduce to 19588091 relations and 18965124 ideals in 6 passes
Mon Oct 10 18:50:50 2016 max relations containing the same ideal: 119
Mon Oct 10 18:50:58 2016 removing 1270445 relations and 949670 ideals in 320775 cliques
Mon Oct 10 18:50:58 2016 commencing in-memory singleton removal
Mon Oct 10 18:50:59 2016 begin with 18317646 relations and 18965124 unique ideals
Mon Oct 10 18:51:08 2016 reduce to 18257629 relations and 17954569 ideals in 6 passes
Mon Oct 10 18:51:08 2016 max relations containing the same ideal: 110
Mon Oct 10 18:51:18 2016 relations with 0 large ideals: 877
Mon Oct 10 18:51:18 2016 relations with 1 large ideals: 1646
Mon Oct 10 18:51:18 2016 relations with 2 large ideals: 26567
Mon Oct 10 18:51:18 2016 relations with 3 large ideals: 217043
Mon Oct 10 18:51:18 2016 relations with 4 large ideals: 971365
Mon Oct 10 18:51:18 2016 relations with 5 large ideals: 2625533
Mon Oct 10 18:51:18 2016 relations with 6 large ideals: 4509870
Mon Oct 10 18:51:18 2016 relations with 7+ large ideals: 9904728
Mon Oct 10 18:51:18 2016 commencing 2-way merge
Mon Oct 10 18:51:28 2016 reduce to 12487624 relation sets and 12184564 unique ideals
Mon Oct 10 18:51:28 2016 commencing full merge
Mon Oct 10 18:56:05 2016 memory use: 1541.7 MB
Mon Oct 10 18:56:07 2016 found 5546445 cycles, need 5540764
Mon Oct 10 18:56:07 2016 weight of 5540764 cycles is about 665556044 (120.12/cycle)
Mon Oct 10 18:56:07 2016 distribution of cycle lengths:
Mon Oct 10 18:56:07 2016 1 relations: 212491
Mon Oct 10 18:56:07 2016 2 relations: 361486
Mon Oct 10 18:56:07 2016 3 relations: 427322
Mon Oct 10 18:56:07 2016 4 relations: 433162
Mon Oct 10 18:56:07 2016 5 relations: 435079
Mon Oct 10 18:56:07 2016 6 relations: 415495
Mon Oct 10 18:56:07 2016 7 relations: 398937
Mon Oct 10 18:56:07 2016 8 relations: 372738
Mon Oct 10 18:56:07 2016 9 relations: 346611
Mon Oct 10 18:56:07 2016 10+ relations: 2137443
Mon Oct 10 18:56:07 2016 heaviest cycle: 28 relations
Mon Oct 10 18:56:08 2016 commencing cycle optimization
Mon Oct 10 18:56:17 2016 start with 48476129 relations
Mon Oct 10 18:58:23 2016 pruned 2348166 relations
Mon Oct 10 18:58:23 2016 memory use: 1260.5 MB
Mon Oct 10 18:58:23 2016 distribution of cycle lengths:
Mon Oct 10 18:58:23 2016 1 relations: 212491
Mon Oct 10 18:58:23 2016 2 relations: 372349
Mon Oct 10 18:58:23 2016 3 relations: 449388
Mon Oct 10 18:58:23 2016 4 relations: 453865
Mon Oct 10 18:58:23 2016 5 relations: 458658
Mon Oct 10 18:58:23 2016 6 relations: 435859
Mon Oct 10 18:58:23 2016 7 relations: 418818
Mon Oct 10 18:58:23 2016 8 relations: 387873
Mon Oct 10 18:58:23 2016 9 relations: 360227
Mon Oct 10 18:58:23 2016 10+ relations: 1991236
Mon Oct 10 18:58:23 2016 heaviest cycle: 28 relations
Mon Oct 10 18:58:30 2016 RelProcTime: 1936
Mon Oct 10 18:58:30 2016
Mon Oct 10 18:58:30 2016 commencing linear algebra
Mon Oct 10 18:58:30 2016 read 5540764 cycles
Mon Oct 10 18:58:42 2016 cycles contain 18087427 unique relations
Mon Oct 10 19:02:16 2016 read 18087427 relations
Mon Oct 10 19:02:58 2016 using 20 quadratic characters above 4294917295
Mon Oct 10 19:04:14 2016 building initial matrix
Mon Oct 10 19:08:41 2016 memory use: 2562.0 MB
Mon Oct 10 19:08:44 2016 read 5540764 cycles
Mon Oct 10 19:08:47 2016 matrix is 5540586 x 5540764 (2610.4 MB) with weight 797510289 (143.94/col)
Mon Oct 10 19:08:47 2016 sparse part has weight 617814500 (111.50/col)
Mon Oct 10 19:10:01 2016 filtering completed in 2 passes
Mon Oct 10 19:10:03 2016 matrix is 5540551 x 5540729 (2610.4 MB) with weight 797508771 (143.94/col)
Mon Oct 10 19:10:03 2016 sparse part has weight 617814212 (111.50/col)
Mon Oct 10 19:10:24 2016 matrix starts at (0, 0)
Mon Oct 10 19:10:26 2016 matrix is 5540551 x 5540729 (2610.4 MB) with weight 797508771 (143.94/col)
Mon Oct 10 19:10:26 2016 sparse part has weight 617814212 (111.50/col)
Mon Oct 10 19:10:26 2016 saving the first 48 matrix rows for later
Mon Oct 10 19:10:27 2016 matrix includes 64 packed rows
Mon Oct 10 19:10:29 2016 matrix is 5540503 x 5540729 (2535.5 MB) with weight 680102432 (122.75/col)
Mon Oct 10 19:10:29 2016 sparse part has weight 609249244 (109.96/col)
Mon Oct 10 19:10:29 2016 using block size 8192 and superblock size 589824 for processor cache size 6144 kB
Mon Oct 10 19:11:08 2016 commencing Lanczos iteration (4 threads)
Mon Oct 10 19:11:08 2016 memory use: 2099.2 MB
Mon Oct 10 19:11:32 2016 linear algebra at 0.0%, ETA 23h20m
Mon Oct 10 19:11:40 2016 checkpointing every 240000 dimensions
Tue Oct 11 18:55:35 2016 lanczos halted after 87617 iterations (dim = 5540501)
Tue Oct 11 18:55:42 2016 recovered 28 nontrivial dependencies
Tue Oct 11 18:55:45 2016 BLanczosTime: 86235
Tue Oct 11 18:55:45 2016
Tue Oct 11 18:55:45 2016 commencing square root phase
Tue Oct 11 18:55:45 2016 handling dependencies 1 to 64
Tue Oct 11 18:55:45 2016 reading relations for dependency 1
Tue Oct 11 18:55:46 2016 read 2773700 cycles
Tue Oct 11 18:55:52 2016 cycles contain 9050750 unique relations
Tue Oct 11 18:57:27 2016 read 9050750 relations
Tue Oct 11 18:58:11 2016 multiplying 9050750 relations
Tue Oct 11 19:07:12 2016 multiply complete, coefficients have about 507.29 million bits
Tue Oct 11 19:07:16 2016 initial square root is modulo 1265621771
Tue Oct 11 19:19:20 2016 sqrtTime: 1415
Tue Oct 11 19:19:20 2016 p77 factor: 14813859948176794675110534707275955189670373720855185070424541653252680580081
Tue Oct 11 19:19:20 2016 p86 factor: 18439538339806755445113621376882157325866635164300870539162919067697594327417049693431
Tue Oct 11 19:19:20 2016 elapsed time 24:53:07

[/CODE]

henryzz 2016-10-11 22:18

Based upon the log it looks like that may have fit in 4 GB. I think I am now set up with a new PC able to do a 16 GB job. It isn't on 24/7 though, so I would prefer to do a job that isn't more than about half a week. This would have been an ideal test.

pinhodecarlos 2016-10-11 22:51

[QUOTE=henryzz;444815]Based upon the log it looks like that may have fit in 4 GB. I think I am now set up with a new PC able to do a 16 GB job. It isn't on 24/7 though, so I would prefer to do a job that isn't more than about half a week. This would have been an ideal test.[/QUOTE]

The real value of memory used was 3.8 GB instead of the one presented in the log file.

EdH 2016-10-12 01:04

Excellent, Carlos,

That's a good split. I'm glad it didn't turn out to have a small factor my ECMing should have found.

Now it looks like another c162 has shown up, but it could be worse...

Ed

Note: I have two 4 GB Core 2 Quad machines that both said they could solve the LA. I could look up the values, but they also both said about 78 hours, I think. The odd part is that I set them up to use MPI and they came back at ~275 hours to solve. My three 4 GB dual-core machines were 30-something hours. The quad cores are, unfortunately, on a 10/100 switch, while the three duals are currently on a Gigabit switch.

I'll turn several machines loose on the new composite in a little bit...

henryzz 2016-10-12 01:18

[QUOTE=pinhodecarlos;444817]The real value of memory used was 3.8 GB instead of the one presented in the log file.[/QUOTE]

Quite an underestimate then.

EdH, I just thought I would point out that you would save money by replacing your Core 2 systems with more modern systems. You get much more efficiency with recent CPUs. This would allow lower power consumption, which would save you money in a year or two.
That is, of course, for the same performance.
Another benefit would be more memory.

VBCurtis 2016-10-12 02:47

[QUOTE=EdH;444824]Excellent, Carlos,

That's a good split. I'm glad it didn't turn out to have a small factor my ECMing should have found.

Now it looks like another c162 has shown up, but it could be worse...

Ed.[/QUOTE]

Someone posted a P48 for that C162, on line 1658 with a C187 presently. If no progress is made, I'll throw some t50-level curves at it tomorrow.

EdH 2016-10-12 03:30

[QUOTE=VBCurtis;444829]Someone posted a P48 for that C162, on line 1658 with a C187 presently. If no progress is made, I'll throw some t50-level curves at it tomorrow.[/QUOTE]
One of my machines did that while I wasn't looking. I have a few working on the c187, but I won't be watching over them during the night. I don't know if they will get anywhere. Thanks.

EdH 2016-10-12 03:33

[QUOTE=henryzz;444825]Quite an underestimate then.

EdH, I just thought I would point out that you would save money by replacing your Core 2 systems with more modern systems. You get much more efficiency with recent CPUs. This would allow lower power consumption, which would save you money in a year or two.
That is, of course, for the same performance.
Another benefit would be more memory.[/QUOTE]
Yeah, and I could probably afford the up-front investment. But, in reality, I would get the new machine and then still run all these old ones, too. Must be an addiction... (I actually have several more old machines waiting for a chance to help.)

Edit: It's also an opportunity to play. One of the machines is running from an 8GB micro SDHC card. I also have a Raspberry Pi assigning some of the work.

EdH 2016-10-12 15:59

How odd! I let about a dozen machines work on the c187 overnight and they returned nothing. This morning I set up a machine to run a distributed ECM effort and it immediately gave me this:
[code]
-> ___________________________________________________________________
-> | Running ecm.py, a Python driver for distributing GMP-ECM work |
-> | on a single machine. It is Copyright, 2012, David Cleaver and |
-> | is a conversion of factmsieve.py that is Copyright, 2010, Brian |
-> | Gladman. Version 0.10 (Python 2.6 or later) 30th Sep 2012. |
-> |_________________________________________________________________|

-> Number(s) to factor:
-> 2097073091591237404218687836513685895877817960290253421046297596274623117183515885512940927331281174454391017370699311839113500789090430626739805618613433715612809988590674625990004910081 (187 digits)
->=============================================================================
-> Working on number: 209707309159123740...674625990004910081 (187 digits)
-> Currently working on: job6427.txt
-> Starting 4 instances of GMP-ECM...
-> ./ecm -c 8 2000 < job6427.txt > job6427_t00.txt
-> ./ecm -c 8 2000 < job6427.txt > job6427_t01.txt
-> ./ecm -c 7 2000 < job6427.txt > job6427_t02.txt
-> ./ecm -c 7 2000 < job6427.txt > job6427_t03.txt

GMP-ECM 7.0.3 [configured with GMP 6.1.1, --enable-asm-redc] [ECM]
Using B1=2000, B2=147396, polynomial x^1, 4 threads
Done 29/30; avg s/curve: stg1 0.010s, stg2 0.012s; runtime: 1s

Run 29 out of 30:
Using B1=2000, B2=147396, polynomial x^1, sigma=1:2423212582
Step 1 took 8ms
Step 2 took 12ms
********** Factor found in step 1: [B]111817691327273[/B]
Found prime factor of 15 digits: 111817691327273
Composite cofactor 18754394467449971433169133636162492600298914413396595751836384412408176818652556808225119789256080647070289939738286246593531841108089813728776853478001495732073616829270297 has 173 digits

waiting...
-> ___________________________________________________________________
-> | Running ecm.py, a Python driver for distributing GMP-ECM work |
-> | on a single machine. It is Copyright, 2012, David Cleaver and |
-> | is a conversion of factmsieve.py that is Copyright, 2010, Brian |
-> | Gladman. Version 0.10 (Python 2.6 or later) 30th Sep 2012. |
-> |_________________________________________________________________|

-> Number(s) to factor:
-> 2097073091591237404218687836513685895877817960290253421046297596274623117183515885512940927331281174454391017370699311839113500789090430626739805618613433715612809988590674625990004910081 (187 digits)
->=============================================================================
-> Working on number: 209707309159123740...674625990004910081 (187 digits)
-> Currently working on: job1932.txt
-> Starting 4 instances of GMP-ECM...
-> ./ecm -c 19 11000 < job1932.txt > job1932_t00.txt
-> ./ecm -c 19 11000 < job1932.txt > job1932_t01.txt
-> ./ecm -c 18 11000 < job1932.txt > job1932_t02.txt
-> ./ecm -c 18 11000 < job1932.txt > job1932_t03.txt

GMP-ECM 7.0.3 [configured with GMP 6.1.1, --enable-asm-redc] [ECM]
Using B1=11000, B2=1873422, polynomial x^1, 4 threads
Done 6/74; avg s/curve: stg1 0.043s, stg2 0.046s; runtime: 1s

Run 6 out of 74:
Using B1=11000, B2=1873422, polynomial x^1, sigma=1:4277626488
Step 1 took 36ms
********** Factor found in step 2: [B]1095649678256579[/B]
Found prime factor of 16 digits: 1095649678256579
Composite cofactor 131954905347588391934699304738827948133275634558635457624548533060467418907362122329509131668522067078677069743086471484822031174847391546400311 has 144 digits

waiting...
-> ___________________________________________________________________
-> | Running ecm.py, a Python driver for distributing GMP-ECM work |
-> | on a single machine. It is Copyright, 2012, David Cleaver and |
-> | is a conversion of factmsieve.py that is Copyright, 2010, Brian |
-> | Gladman. Version 0.10 (Python 2.6 or later) 30th Sep 2012. |
-> |_________________________________________________________________|

-> Number(s) to factor:
-> 2097073091591237404218687836513685895877817960290253421046297596274623117183515885512940927331281174454391017370699311839113500789090430626739805618613433715612809988590674625990004910081 (187 digits)
->=============================================================================
-> Working on number: 209707309159123740...674625990004910081 (187 digits)
-> Currently working on: job9509.txt
-> Starting 4 instances of GMP-ECM...
-> ./ecm -c 27 50000 < job9509.txt > job9509_t00.txt
-> ./ecm -c 27 50000 < job9509.txt > job9509_t01.txt
-> ./ecm -c 27 50000 < job9509.txt > job9509_t02.txt
-> ./ecm -c 26 50000 < job9509.txt > job9509_t03.txt

GMP-ECM 7.0.3 [configured with GMP 6.1.1, --enable-asm-redc] [ECM]
Using B1=50000, B2=12746592, polynomial x^2, 4 threads
Done 2/107; avg s/curve: stg1 0.219s, stg2 0.190s; runtime: 1s

Run 2 out of 107:
Using B1=50000, B2=12746592, polynomial x^2, sigma=1:3797496244
Step 1 took 196ms
********** Factor found in step 2: [B]129719656920613[/B]
Found prime factor of 15 digits: 129719656920613
Composite cofactor 16166193631506619062699986973017125397615941076923986434223011454064246619826586744066970275619259028143877483718271885648417579265271700432201498195640573216055668220689837 has 173 digits
[/code]:smile:

henryzz 2016-10-12 16:08

I would double check what you did overnight.

EdH 2016-10-12 20:42

[QUOTE=henryzz;444860]I would double check what you did overnight.[/QUOTE]
I am going to check more closely, because what I did was to set ali.pl loose on 3408 on nearly all of the 24/7 machines. They were busy running instances of YAFU against the c187 and were all on 2350 curves at 3e6 this morning, when I swapped them over to my distributed ECM scripts. My scripts allow me to choose a few fewer curves than (# of machines) * (YAFU's suggested curves). My scripts don't auto-send the factors to the db, though. That's why I used ali.pl overnight.

I'm going to reconstruct the exact composite and play with YAFU and ali.pl separately and see if I find something odd about them.

EdH 2016-10-12 22:11

[QUOTE=EdH;444879]...
I'm going to reconstruct the exact composite and play with YAFU and ali.pl separately and see if I find something odd about them.[/QUOTE]
It was my misreading of the outputs! I really must take better looks at things.

Basically, YAFU was asked by ali.pl to factor a c187 and YAFU hadn't factored it [B]totally[/B], so it didn't report anything. Had I looked at more than just the left side of the output, showing the curves completed, I would have seen the cSIZE had changed, which is the only indication that a factor has been found. At that point it doesn't say that a factor has been found, or print the new factor; it might print those details if more verbosity is enabled.

EdH 2016-10-13 01:29

Let's see if I learned anything or if I'm way off track...

If I try to figure out how much ECM to do on the c144: 0.31*144 = ~45. This means I should go for a t45 threshold.

All the little B1 values equate to very tiny percentages, so if I start at:
2390@3e6
3860@11e6
2000@43e6

I get just over 220% of t45, or just about 1/3 of t50.

Have I gotten any of this correct?

Running ecm-toy.py with the 2000@43e6, I am told to run 300 curves at 110e6.

I currently am passing 830@110e6, which, if I have things correct, adds 23% to the t50 value, taking it over 50% of t50.

Does this signal that enough ECM has been done?

Am I correct at all in my calculations?

If the above is correct, should I move into the gnfs work, or would someone else like to run this one?

VBCurtis 2016-10-13 02:56

I think your calcs are fine, and half a t50 is about 3*t45, so you're about triple the usual amount of ECM for a number this size. Head to GNFS!

Take it yourself, IMO; you've surely done enough work on the sequence recently to have the honor (?) of advancing it a couple more steps.

EdH 2016-10-13 03:29

Thanks!

I'll have one of my machines run a poly overnight while the rest keep running curves.

I'll look to see if anyone else chimes in by tomorrow morning and run with it, if not.

henryzz 2016-10-13 09:04

After something like a t35 is done I would then ask the script what it thinks. Maybe more of the curves should have been at 110e6 for this number.

pinhodecarlos 2016-10-13 09:37

Ed, please let me know if you need support from me for the next LA. Currently I am on a job which will finish in 43 hours and have already queued up the next job, but between these two I can squeeze in an LA run to help you out. PM me if you are interested. Take care.

EdH 2016-10-13 13:09

[QUOTE=henryzz;444943]After something like a t35 is done I would then ask the script what it thinks. Maybe more of the curves should have been at 110e6 for this number.[/QUOTE]My scripts from long ago have a fixed pattern, but I don't know where I got that pattern. I suppose to be most efficient I should make a specific pattern for each job. I have the easy capability to set the curves and values in the main script. I just don't know how to make up the table. Would entering a low value into the ecm-toy script tell me the best amounts for each in any rough manner? I don't have the true data for the ecm-toy gnfs table, but would that matter very much at the digit levels I can run?

[QUOTE=pinhodecarlos;444946]Ed, please let me know if you need support from me for the next LA. Currently I am on a job which will finish in 43 hours and have already queued up the next job, but between these two I can squeeze in an LA run to help you out. PM me if you are interested. Take care.[/QUOTE]Thanks Carlos, I can do this one, if everyone is patient enough to wait for my ancient hardware.:smile:-Ed

Overnight my machines finished 2000@110e6. They were supposed to start 2000@430e6, but my script had a bug that prevented the 430e6 runs from being assigned. My calculations come up to 90% of a t50 now. I have a poly and will be swapping everything over to gnfs. How is sieving range normally determined? Is it by testing or is there a calculated start point? I was guessing for the others but my guess here would be, maybe 15e6.

Thanks for all the "learnin'" everyone.

VBCurtis 2016-10-13 15:01

For GNFS jobs, I start sieving at 1/3rd of the factor base bound. The original factmsieve script started at 1/2 the bound, so somewhere in that range is reasonable.

If your tools don't supply a default factor base (factmsieve.py does), it doesn't matter very much (within a factor of two of "best", say) what factor-base bounds you pick; perhaps 30M for the bounds with sieving starting at 12M should work fine. 14e is probably best; if you want to play with a little test-sieving, try using 14e the regular way and also invoking it with "-J 12" in the flag list. The latter may be a bit faster, at the expense of less yield (and thus a larger range of Q to be sieved).
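
For a concrete picture of what those bounds look like in practice, here is a sketch of a GGNFS-style job file for a number this size. The keyword layout is the usual one written by factmsieve.py; the bounds, large-prime sizes, and lambdas shown are illustrative rather than tuned values, and the snippet is not meant to be fed to a siever verbatim.
[code]
n: <the 144-digit composite>
# polynomial lines (Y0, Y1, c0..c5, skew) from the poly selection go here
rlim: 30000000     # rational factor-base bound
alim: 30000000     # algebraic factor-base bound
lpbr: 29           # 29-bit large primes, rational side
lpba: 29           # 29-bit large primes, algebraic side
mfbr: 58
mfba: 58
rlambda: 2.6
alambda: 2.6
# sieve with gnfs-lasieve4I14e, starting special-q near alim/3,
# e.g. q0 = 12000000, and step upward until enough relations accumulate
[/code]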

henryzz 2016-10-13 15:30

[QUOTE=EdH;444955]My scripts from long ago have a fixed pattern, but I don't know where I got that pattern. I suppose to be most efficient I should make a specific pattern for each job. I have the easy capability to set the curves and values in the main script. I just don't know how to make up the table. Would entering a low value into the ecm-toy script tell me the best amounts for each in any rough manner? I don't have the true data for the ecm-toy gnfs table, but would that matter very much at the digit levels I can run?

Thanks Carlos, I can do this one, if everyone is patient enough to wait for my ancient hardware.:smile:-Ed

Overnight my machines finished 2000@110e6. They were supposed to start 2000@430e6, but my script had a bug that prevented the 430e6 runs from being assigned. My calculations come up to 90% of a t50 now. I have a poly and will be swapping everything over to gnfs. How is sieving range normally determined? Is it by testing or is there a calculated start point? I was guessing for the others but my guess here would be, maybe 15e6.

Thanks for all the "learnin'" everyone.[/QUOTE]

It is difficult without the GNFS data matching the ECM data. It probably isn't too bad as long as you are within a factor of 2 of the best amount of ECM; these things usually have a fairly flat curve. There was an argument a while back about whether it was better to tell aliqueit to do ECM to 0.25 of the digits or 0.33 of the digits. It didn't make that much difference.
You can run the script telling it you have run 0 curves (0@3e6 or something like that).

EdH 2016-10-14 03:08

[QUOTE=VBCurtis;444961]For GNFS jobs, I start sieving at 1/3rd of the factor base bound. The original factmsieve script started at 1/2 the bound, so somewhere in that range is reasonable.

If your tools don't supply a default factor base (factmsieve.py does), it doesn't matter very much (within a factor of two of "best", say) what factor-base bounds you pick; perhaps 30M for the bounds with sieving starting at 12M should work fine. 14e is probably best; if you want to play with a little test-sieving, try using 14e the regular way and also invoking it with "-J 12" in the flag list. The latter may be a bit faster, at the expense of less yield (and thus a larger range of Q to be sieved).[/QUOTE]
Unfortunately, I don't understand factor bases. I am doing everything manually, although I have some scripts I wrote to distribute sieving and ECMing among my machines. I have used factmsieve.py in the past and even modified a version to work with the cluster I had running then. I should probably resurrect that script...

I was out all day, so I didn't try the "-J 12" switch this time - maybe next.

[QUOTE=henryzz;444963]...
You can run the script telling it you have run 0 curves (0@3e6 or something like that).[/QUOTE]
Excellent! This works great. I will do this and use the returned values to seed my ECM script from now on.

On the good side, I was able to build a matrix with a little over 36M unique relations and I have LA running on my cluster. I should have the factors in the morning:
[code]
Thu Oct 13 22:32:44 2016 Msieve v. 1.53 (SVN 993)
Thu Oct 13 22:32:44 2016 random seeds: 78202805 bdd4a0ec
Thu Oct 13 22:32:44 2016 MPI process 0 of 3
Thu Oct 13 22:32:44 2016 factoring 131954905347588391934699304738827948133275634558635457624548533060467418907362122329509131668522067078677069743086471484822031174847391546400311 (144 digits)
Thu Oct 13 22:32:46 2016 searching for 15-digit factors
Thu Oct 13 22:32:47 2016 commencing number field sieve (144-digit input)
Thu Oct 13 22:32:47 2016 R0: -10853864674370357346890831166
Thu Oct 13 22:32:47 2016 R1: 2070175205206213
Thu Oct 13 22:32:47 2016 A0: 1567149971784611966940117631701000925
Thu Oct 13 22:32:47 2016 A1: 57830699450094690463301740533
Thu Oct 13 22:32:47 2016 A2: -399896828755837210648218
Thu Oct 13 22:32:47 2016 A3: -58194781578947499
Thu Oct 13 22:32:47 2016 A4: -6125301397
Thu Oct 13 22:32:47 2016 A5: 876
Thu Oct 13 22:32:47 2016 skew 5656895.57, size 6.878e-14, alpha -6.666, combined = 1.347e-11 rroots = 3
Thu Oct 13 22:32:47 2016
Thu Oct 13 22:32:47 2016 commencing linear algebra
Thu Oct 13 22:32:47 2016 initialized process (0,0) of 3 x 1 grid
Thu Oct 13 22:32:48 2016 read 2789376 cycles
Thu Oct 13 22:32:57 2016 cycles contain 8822764 unique relations
Thu Oct 13 22:34:58 2016 read 8822764 relations
Thu Oct 13 22:35:10 2016 using 20 quadratic characters above 4294917295
Thu Oct 13 22:35:51 2016 building initial matrix
Thu Oct 13 22:37:43 2016 memory use: 1208.9 MB
Thu Oct 13 22:37:46 2016 read 2789376 cycles
Thu Oct 13 22:37:46 2016 matrix is 2789288 x 2789376 (849.1 MB) with weight 269812589 (96.73/col)
Thu Oct 13 22:37:46 2016 sparse part has weight 189118354 (67.80/col)
Thu Oct 13 22:38:18 2016 filtering completed in 2 passes
Thu Oct 13 22:38:19 2016 matrix is 2786778 x 2786925 (848.9 MB) with weight 269715636 (96.78/col)
Thu Oct 13 22:38:19 2016 sparse part has weight 189092868 (67.85/col)
Thu Oct 13 22:38:41 2016 matrix starts at (0, 0)
Thu Oct 13 22:38:41 2016 matrix is 929002 x 2786925 (372.8 MB) with weight 144909846 (52.00/col)
Thu Oct 13 22:38:41 2016 sparse part has weight 64287078 (23.07/col)
Thu Oct 13 22:38:41 2016 saving the first 48 matrix rows for later
Thu Oct 13 22:38:42 2016 matrix includes 64 packed rows
Thu Oct 13 22:38:42 2016 matrix is 928954 x 2786925 (346.7 MB) with weight 89762927 (32.21/col)
Thu Oct 13 22:38:42 2016 sparse part has weight 63022146 (22.61/col)
Thu Oct 13 22:38:42 2016 using block size 8192 and superblock size 294912 for processor cache size 3072 kB
Thu Oct 13 22:38:46 2016 commencing Lanczos iteration (2 threads)
Thu Oct 13 22:38:46 2016 memory use: 245.8 MB
Thu Oct 13 22:39:05 2016 linear algebra at 0.1%, ETA 9h 9m
Thu Oct 13 22:39:11 2016 checkpointing every 310000 dimensions
[/code]

EdH 2016-10-14 13:01

Posted:
[code]
Fri Oct 14 08:43:00 2016 commencing square root phase
Fri Oct 14 08:43:00 2016 reading relations for dependency 1
Fri Oct 14 08:43:01 2016 read 1392884 cycles
Fri Oct 14 08:43:03 2016 cycles contain 4409660 unique relations
Fri Oct 14 08:43:50 2016 read 4409660 relations
Fri Oct 14 08:44:15 2016 multiplying 4409660 relations
Fri Oct 14 08:49:34 2016 multiply complete, coefficients have about 199.19 million bits
Fri Oct 14 08:49:35 2016 initial square root is modulo 14071643
Fri Oct 14 08:56:03 2016 sqrtTime: 783
Fri Oct 14 08:56:03 2016 p65 factor: 10034588955103289779660710855255598470263096964861235086632824637
Fri Oct 14 08:56:03 2016 p80 factor: 13150006037913501232049838541645294463485178998835505659359474149104392799427203
Fri Oct 14 08:56:03 2016 elapsed time 00:13:04
[/code]

unconnected 2016-10-14 13:09

200 digits! :w00t::shock::cmd:

LaurV 2016-10-15 05:19

Yeah, and the graph looks like it is taking off...
:cmd:

EdH 2016-10-15 14:37

And, this c193 doesn't look like anything I could factor easily, either.:sad:

I have gotten this far:
[code]
100 curves with B1 = 3.0e6
100 curves with B1 = 11.0e6
100 curves with B1 = 43.0e6
2950 curves with B1 = 110.0e6
[/code]

EdH 2016-10-17 01:52

I was doing some rewriting of some of my scripts because I noticed the output of the current ecm.py was different from the previous version I was using. Look what I missed (for over a day) until now:
[code]
Sat 2016/10/15 21:19:39 UTC Input number is 1078463040366178656513252069998372090413256075931238262978888504124259070814620248149829075144468079639889184988709479197684315697845414761896628106730638913843128410641236087479218230479393449 (193 digits)
Sat 2016/10/15 21:19:39 UTC Run 8 out of 20:
Sat 2016/10/15 21:19:39 UTC Using B1=110000000, B2=776278396540, polynomial Dickson(30), sigma=1:395501120
Sat 2016/10/15 21:19:39 UTC Step 1 took 424103ms
Sat 2016/10/15 21:19:39 UTC Step 2 took 166986ms
Sat 2016/10/15 21:19:39 UTC ********** Factor found in step 2: 32527652508015991688135353490624137200476491466810293
Sat 2016/10/15 21:19:39 UTC Found prime factor of 53 digits: 32527652508015991688135353490624137200476491466810293
Sat 2016/10/15 21:19:39 UTC Prime cofactor 33155268124572048438133784781669020585809197372591704083430677812985365877497549554655661184724402184424723745171751999724006991866553283493 has 140 digits
[/code]

schickel 2016-10-17 04:03

Ooh, nice catch!

unconnected 2016-10-22 20:40

Next blocker is c191, I've completed ECM up to t45 and 1000@43e6 on it.

EdH 2016-10-22 20:45

[QUOTE=unconnected;445552]Next blocker is c191, I've completed ECM up to t45 and 1000@43e6 on it.[/QUOTE]
I've currently got this much on it, as well:
[code]
240 @ 2e3
370 @ 11e3
749 @ 5e4
214 @ 25e4
120 @ 1e6
200 @ 3e6
100 @ 11e6
100 @ 43e6
4620 @ 11e7
1680 @ 85e7
[/code]

EdH 2016-10-23 00:52

With unconnected's curves and mine, are we at about t55?

Are we looking at needing t60?

If so, are we close enough to send it out for poly selection yet?

VBCurtis 2016-10-23 01:18

Yes, I believe you're at roughly a t55. By the old rule of thumb, 0.31*size gives t58; a t58 is about 50% of a t60, or about 3 * t55, and that is enough for GNFS 191, but the Bayesian tool should ask for quite a bit less.

Perhaps another 1000-1500 at 85e7 and call it good?

I can start poly select Tuesday, but I only have a few GPU-days free. A c191 deserves something like a GPU-month, feasible among the 3-5 likely forum folks who do poly select.

If you do a poly select run, please post your ranges/parameters searched; I reduce stage 1 norm far enough that msieve does NOT search just a slice, so I am likely to overlap work others do. Since a few folks like to search tiny A1, I'll start at 5M to reduce the chance of overlap.

EdH 2016-10-23 03:03

[QUOTE=VBCurtis;445568]Yes, I believe you're at roughly a t55. By the old rule of thumb, 0.31*size gives t58; a t58 is about 50% of a t60, or about 3 * t55, and that is enough for GNFS 191, but the Bayesian tool should ask for quite a bit less.

Perhaps another 1000-1500 at 85e7 and call it good?[/QUOTE]Actually, it wants:
[code]
4200 curves with B1 = 850.0e6
4500 curves with B1 = 2.9e9
[/code]That will take a while to run, but I didn't know how it compared to your estimate for starting a poly, which is why I asked.

[QUOTE=VBCurtis;445568]I can start poly select Tuesday, but I only have a few GPU-days free. A c191 deserves something like a GPU-month, feasible among the 3-5 likely forum folks who do poly select.

If you do a poly select run, please post your ranges/parameters searched; I reduce stage 1 norm far enough that msieve does NOT search just a slice, so I am likely to overlap work others do. Since a few folks like to search tiny A1, I'll start at 5M to reduce the chance of overlap.[/QUOTE]This one looks like a true team effort. We might be over my head considerably and I'm not that short...

VBCurtis 2016-10-23 04:48

I'll contribute 500 @ 85e7 this week, and I'll wait until you're doing the 3e9 curves before starting poly select. You're not yet halfway through ECM, by either the old rule of thumb or the new tool (interesting the new tool asks for so much ECM!).

EdH 2016-10-23 12:58

[QUOTE=VBCurtis;445574]I'll contribute 500 @ 85e7 this week, and I'll wait until you're doing the 3e9 curves before starting poly select. You're not yet halfway through ECM, by either the old rule of thumb or the new tool (interesting the new tool asks for so much ECM!).[/QUOTE]
Thanks! It doesn't suggest much in the low end though:
[code]
100 curves with B1 = 3.0e6
100 curves with B1 = 11.0e6
100 curves with B1 = 43.0e6
4500 curves with B1 = 110.0e6
4200 curves with B1 = 850.0e6
4500 curves with B1 = 2.9e9
[/code]

henryzz 2016-10-23 17:02

The Bayesian tool will often ask for more ECM. The higher bounds mean a higher chance of finding a factor overall. There may sometimes be a higher chance of missing smaller factors, but fewer large factors will be missed. Since the ECM becomes more efficient at finding factors, it is often worthwhile to run more of it.

EdH 2016-10-26 14:32

[QUOTE=VBCurtis;445574]I'll contribute 500 @ 85e7 this week,
...
[/QUOTE]
In case you're running these, I'm shifting over to 29e8. I've got just over 3800 reported at 85e7 as of this morning, with many still assigned that haven't returned. I suppose you could stop at 300-400 and we'd still meet the 4200 suggested. I'll swap the machines still working on 85e7 over and manually get a final count of all of their progress. It might be another hundred.

Edit: I guess I had more stragglers than I thought. I've got about 3960 @ 85e7 for a final count. I've moved everything up. Sorry for the late notice if you've done a bunch of the 500...

VBCurtis 2016-10-26 15:08

OK, I'll get to 300 and then add a few at 3e9.

EdH 2016-10-26 17:09

[QUOTE=VBCurtis;445798]OK, I'll get to 300 and then add a few at 3e9.[/QUOTE]
That sounds good. All my machines are now running 29e8, but they will most assuredly be quite some time doing so. I may retask one or two against other interests. I've some maintenance to perform, as well...

EdH 2016-10-26 22:26

How disappointing:
[code]
-> ___________________________________________________________________
-> | Running ecm.py, a Python driver for distributing GMP-ECM work |
-> | on a single machine. It is copyright, 2011-2016, David Cleaver |
-> | and is a conversion of factmsieve.py that is Copyright, 2010, |
-> | Brian Gladman. Version 0.41 (Python 2.6 or later) 3rd Sep 2016 |
-> |_________________________________________________________________|

-> Number(s) to factor:
-> 29460893303338144751360360097976743017149046981259832053501450854362438285630845458294329588136961317820466603439061784124252469966169391148524869909406500896547611862071404959591325864761463 (191 digits)
->=============================================================================
-> Working on number: 294608933033381447...959591325864761463 (191 digits)
-> Found previous job file job8854.txt, will resume work...
-> *** Already completed 0 curves on this number...
-> *** Will run 20 more curves.
-> Currently working on: job8854.txt
-> Starting 4 instances of GMP-ECM...
-> ecm -c 5 -maxmem 250 2900000000 < job8854.txt > job8854_t00.txt
-> ecm -c 5 -maxmem 250 2900000000 < job8854.txt > job8854_t01.txt
-> ecm -c 5 -maxmem 250 2900000000 < job8854.txt > job8854_t02.txt
-> ecm -c 5 -maxmem 250 2900000000 < job8854.txt > job8854_t03.txt

GMP-ECM 7.0.3 [configured with GMP 6.1.1, --enable-asm-redc] [ECM]
GNU MP: Cannot allocate memory (size=67239952)
GNU MP: Cannot allocate memory (size=125894672)
GNU MP: Cannot allocate memory (size=537395216)

-> *** Error: unexpected return value: -1
[/code]It did keep trying many times.

So far, only this one is complaining, though...

VBCurtis 2016-10-26 22:49

Please post what B2 value (and k-value) ECM picks for 29e8 with maxmem of 250. That's quite a combination!

I'll post B2, k, and timings for unlimited memory once I have the data. Just fired up 100 3e9 curves.

EdH 2016-10-27 03:23

[QUOTE=VBCurtis;445832]Please post what B2 value (and k-value) ECM picks for 29e8 with maxmem of 250. That's quite a combination!

I'll post B2, k, and timings for unlimited memory once I have the data. Just fired up 100 3e9 curves.[/QUOTE]
I'm running ECM via ecm.py and can't find any k-value shown anywhere. All of my machines are using B2=80921447825410.

A lot of them (even with 1400 maxmem) are showing the "GNU MP: Cannot allocate memory (size=########)" message, but they aren't crashing. I tried to downsize to 2 threads @ 500 maxmem each on the aforementioned machine, but it's still "thinking" about it, so I don't have a result yet. I have others running with 450 per thread.

VBCurtis 2016-10-27 05:33

You'd have to invoke the -v flag for ECM to spit out the k value; that is, the number of pieces it splits B2 into. Not important, merely my curiosity; I've never probed how large k can get.

Default at 3e9 is k=2, B2=105e12, and peak memory use of 11.3GB. My machine reports 13400 sec for stage 1, 1900 sec for stage 2 (ECM 7.0.1).

Setting k=8 should result in nearly the same B2 with half the memory usage.

Perhaps you're running into a side effect of ECM estimating its memory use at quite a bit less than its actual peak. Invoking -v tells me ECM expects to use 8.93GB, but peak use is actually reported by ECM as 11.3GB.
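
For reference, the knob being discussed is GMP-ECM's -k option (the number of blocks B2 is split into). Something along these lines trades extra stage-2 passes for a smaller memory footprint; the exact B2 and memory figures will differ by version and input size, and "composite.txt" stands in for whatever input file is in use, so treat it as a sketch:
[code]
# default stage 2 at this B1 (k=2 here): large B2, ~11GB peak per the post above
ecm -v 3e9 < composite.txt
# same B1, stage 2 split into 8 blocks: nearly the same B2, smaller footprint
ecm -v -k 8 3e9 < composite.txt
[/code]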

EdH 2016-10-27 12:59

[QUOTE=VBCurtis;445840]You'd have to invoke the -v flag for ECM to spit out the k value; that is, the number of pieces it splits B2 into. Not important, merely my curiosity; I've never probed how large k can get.

Default at 3e9 is k=2, B2=105e12, and peak memory use of 11.3GB. My machine reports 13400 sec for stage 1, 1900 sec for stage 2 (ECM 7.0.1).

Setting k=8 should result in nearly the same B2 with half the memory usage.

Perhaps you're running into a side effect of ECM estimating its memory use at quite a bit less than its actual peak. Invoking -v tells me ECM expects to use 8.93GB, but peak use is actually reported by ECM as 11.3GB.[/QUOTE]
This number may be out of my reasonable reach. The most memory for any of my machines is 6GB total, running 4 threads. Here's an output from one of the others:
[code]
->=============================================================================
-> Working on number: 294608933033381447...959591325864761463 (191 digits)
-> Currently working on: job3602.txt
-> Starting 2 instances of GMP-ECM...
-> ecm -c 10 -maxmem 1400 2900000000 < job3602.txt > job3602_t00.txt
-> ecm -c 10 -maxmem 1400 2900000000 < job3602.txt > job3602_t01.txt

GMP-ECM 7.0.3 [configured with GMP 6.1.1, --enable-asm-redc] [ECM]
GNU MP: Cannot allocate memory (size=537395216)
Using B1=2900000000, B2=81051862041166, polynomial Dickson(30), 2 threads
____________________________________________________________________________
Curves Complete | Average seconds/curve | Runtime | ETA
-----------------|---------------------------|---------------|--------------
4 of 20 | Stg1 11479s | Stg2 6386s | 0d 21:48:26 | 1d 15:41:33
[/code]I suppose I could knock them all down to a single thread. The one machine crashing out with errors didn't work with two threads at 500 maxmem each.

Most of my machines are running headless and if I increase maxmem much more they stop talking to me.

I don't have time today to play with them much. But, maybe later I'll figure something more out... or, I'll move the really weak machines to something they can work with better. I might just not be up to the size of this composite.

VBCurtis 2016-10-27 14:06

I have a few machines with big-memory footprints. How about you run stage 1 and I run stage 2? The text files aren't large, can be emailed easily.

You'd invoke ecm with -save residues.txt and bounds 29e8 29e8. The second bound is B2; when it is set equal to B1, ECM won't do any stage 2.

This allows you to make full use of your small-RAM machines, while I can do stage 2 in 1/3rd the time of your machine.

Also, note we're doing a t60 using big-bounds because the Bayesian tool says so; you can simply choose to do a t60 the old-fashioned way, say with B1 = 3e8.
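
For completeness, a sketch of how the two halves of such a split fit together; the file names are placeholders, and the -resume behaviour described is the standard GMP-ECM one (a saved curve whose B1 already meets the B1 given on the command line skips straight to stage 2):
[code]
# stage 1 only, on the small-RAM machines (B2 set equal to B1 suppresses stage 2)
ecm -save residues.txt 29e8 29e8 < composite.txt
# later, on a big-memory machine: stage 2 only, read from the saved residues
ecm -resume residues.txt 29e8
[/code]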

EdH 2016-10-28 03:36

Let's try your suggestion. I got home a bit late, but am attempting to shift all my machines over to see where they are in the morning. I will be issuing the following to each:
[code]
python ecm.py -c 20 -save residues${USER}.txt 2900000000 2900000000 <ecmIn
[/code]This will result in a single residue file written to by all threads on a particular machine. Will this be OK, or does each thread need its own residue file? If this is OK, can I also combine residue files from multiple machines when I send them to you?

Thanks for helping me learn new parts of this.

VBCurtis 2016-10-28 04:08

I've no idea how residue files interact with the python script, but one file is just fine; the residues are simple text and contain info about the bound, so collecting the outputs from multiple machines however you wish will be fine.

I'll still do my 300 at 85e7 and 100 at 29e8, and then as many stage 2 runs as you send me. I can do 40+ a day on my home machine; the rest of my big-memory machines are on campus, less convenient. If you could send a first batch of 100 or 200, I can get started while you generate the rest.

p.s. my email is my forum name at gmail.

EdH 2016-10-28 15:03

I have sent the first set (only 31) to see if I have this right.

Thanks.

VBCurtis 2016-10-28 22:06

Stage 2 is running fine. An i5 Haswell at 3.1GHz reports 1280 sec at default B2, 1670 sec when B2 is increased by 50%. Three curves ran at default B2 = 105e12; the rest are at B2 = 157e12. Peak memory usage is reported at 11GB, though Windows' task manager reported 9600MB on this 12GB system.

