The Bayesian tool will often ask for more ECM than the standard tables. The higher bounds mean a higher overall chance of finding a factor: small factors are slightly more likely to be missed, but fewer large factors will be. Since each curve finds factors more efficiently, running more ECM is often worthwhile.
[QUOTE=VBCurtis;445574]I'll contribute 500 @ 85e7 this week,
... [/QUOTE] In case you're running these, I'm shifting over to 29e8. As of this morning I have just over 3800 curves reported at 85e7, with many still assigned that haven't been returned. I suppose you could stop at 300-400 and we'd still meet the suggested 4200. I'll swap the machines still working on 85e7 over and manually get a final count of their progress; it might be another hundred. Edit: I guess I had more stragglers than I thought. I've got about 3960 @ 85e7 for a final count. I've moved everything up. Sorry for the late notice if you've already done a bunch of the 500...
OK, I'll get to 300 and then add a few at 3e9.
[QUOTE=VBCurtis;445798]OK, I'll get to 300 and then add a few at 3e9.[/QUOTE]
That sounds good. All my machines are now running 29e8, but they will most assuredly take quite some time doing so. I may retask one or two toward other interests. I've some maintenance to perform, as well...
How disappointing:
[code] -> ___________________________________________________________________
 -> | Running ecm.py, a Python driver for distributing GMP-ECM work   |
 -> | on a single machine. It is copyright, 2011-2016, David Cleaver  |
 -> | and is a conversion of factmsieve.py that is Copyright, 2010,   |
 -> | Brian Gladman. Version 0.41 (Python 2.6 or later) 3rd Sep 2016  |
 -> |_________________________________________________________________|
 -> Number(s) to factor:
 -> 29460893303338144751360360097976743017149046981259832053501450854362438285630845458294329588136961317820466603439061784124252469966169391148524869909406500896547611862071404959591325864761463 (191 digits)
 ->=============================================================================
 -> Working on number: 294608933033381447...959591325864761463 (191 digits)
 -> Found previous job file job8854.txt, will resume work...
 -> *** Already completed 0 curves on this number...
 -> *** Will run 20 more curves.
 -> Currently working on: job8854.txt
 -> Starting 4 instances of GMP-ECM...
 -> ecm -c 5 -maxmem 250 2900000000 < job8854.txt > job8854_t00.txt
 -> ecm -c 5 -maxmem 250 2900000000 < job8854.txt > job8854_t01.txt
 -> ecm -c 5 -maxmem 250 2900000000 < job8854.txt > job8854_t02.txt
 -> ecm -c 5 -maxmem 250 2900000000 < job8854.txt > job8854_t03.txt
GMP-ECM 7.0.3 [configured with GMP 6.1.1, --enable-asm-redc] [ECM]
GNU MP: Cannot allocate memory (size=67239952)
GNU MP: Cannot allocate memory (size=125894672)
GNU MP: Cannot allocate memory (size=537395216)
 -> *** Error: unexpected return value: -1
[/code]It did keep trying many times. So far, only this one is complaining, though...
Please post what B2 value (and k-value) ECM picks for 29e8 with maxmem of 250. That's quite a combination!
I'll post B2, k, and timings for unlimited memory once I have the data. Just fired up 100 3e9 curves.
[QUOTE=VBCurtis;445832]Please post what B2 value (and k-value) ECM picks for 29e8 with maxmem of 250. That's quite a combination!
I'll post B2, k, and timings for unlimited memory once I have the data. Just fired up 100 3e9 curves.[/QUOTE] I'm running ECM via ecm.py and can't find a k-value shown anywhere. All of my machines are using B2=80921447825410. A lot of them (even with 1400 maxmem) are showing the "GNU MP: Cannot allocate memory (size=########)" message, but they aren't crashing. I tried downsizing to 2 threads @ 500 maxmem each on the aforementioned machine, but it's still "thinking" about it, so I don't have a result yet. I have others running with 450 per thread.
You'd have to invoke the -v flag for ECM to spit out the k value; that is, the number of pieces it splits B2 into. Not important, merely my curiosity; I've never probed how large k can get.
Default at 3e9 is k=2, B2=105e12, and peak memory use of 11.3GB. My machine reports 13400 sec for stage 1, 1900 sec for stage 2 (ECM 7.0.1). Setting k=8 should result in nearly the same B2 with half the memory usage. Perhaps you're running into a side effect: ECM estimates its memory use quite a bit lower than the actual peak. Invoking -v tells me ECM expects to use 8.93GB, but peak use is actually reported by ECM as 11.3GB.
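For anyone following along, here's a sketch of the invocations being discussed (the composite and bounds are placeholders, and this assumes the GMP-ECM binary is on your PATH). The -v flag makes ECM print the B2 it chose, the k value (number of stage-2 blocks), and its estimated memory use; -k forces the block count, trading a little stage-2 time for memory:

```shell
# Verbose run: GMP-ECM reports dF, k, the chosen B2, and estimated
# stage-2 memory before the curve starts.
echo "2^1123-1" | ecm -v 3000000000

# Force k=8 stage-2 blocks: nearly the same B2 at roughly half the
# peak memory, which may play better with a -maxmem cap.
echo "2^1123-1" | ecm -v -k 8 -maxmem 4000 3000000000
```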
[QUOTE=VBCurtis;445840]You'd have to invoke the -v flag for ECM to spit out the k value; that is, the number of pieces it splits B2 into. Not important, merely my curiosity; I've never probed how large k can get.
Default at 3e9 is k=2, B2=105e12, and peak memory use of 11.3GB. My machine reports 13400 sec for stage 1, 1900 sec for stage 2 (ECM 7.0.1). Setting k=8 should result in nearly the same B2 with half the memory usage. Perhaps you're running into a side effect: ECM estimates its memory use quite a bit lower than the actual peak. Invoking -v tells me ECM expects to use 8.93GB, but peak use is actually reported by ECM as 11.3GB.[/QUOTE] This number may be out of my reasonable reach. The most memory any of my machines has is 6GB total, running 4 threads. Here's the output from one of the others: [code]
 ->=============================================================================
 -> Working on number: 294608933033381447...959591325864761463 (191 digits)
 -> Currently working on: job3602.txt
 -> Starting 2 instances of GMP-ECM...
 -> ecm -c 10 -maxmem 1400 2900000000 < job3602.txt > job3602_t00.txt
 -> ecm -c 10 -maxmem 1400 2900000000 < job3602.txt > job3602_t01.txt
GMP-ECM 7.0.3 [configured with GMP 6.1.1, --enable-asm-redc] [ECM]
GNU MP: Cannot allocate memory (size=537395216)
Using B1=2900000000, B2=81051862041166, polynomial Dickson(30), 2 threads
____________________________________________________________________________
 Curves Complete |   Average seconds/curve   |    Runtime    |      ETA
-----------------|---------------------------|---------------|--------------
     4 of 20     | Stg1 11479s | Stg2  6386s |  0d 21:48:26  |  1d 15:41:33
[/code]I suppose I could knock them all down to a single thread. The one machine crashing out with errors didn't work even with two threads at 500 maxmem each. Most of my machines run headless, and if I increase maxmem much more they stop talking to me. I don't have time today to play with them much, but maybe later I'll figure something out... or I'll move the really weak machines to something they can handle better. I might just not be up to the size of this composite.
I have a few machines with big memory footprints. How about you run stage 1 and I run stage 2? The text files aren't large and can be emailed easily.
You'd invoke ecm with -save residues.txt and bounds 29e8 29e8. The second bound is B2; set it equal to B1 and ECM won't run any stage 2. This lets you make full use of your small-RAM machines, while I can do stage 2 in a third of the time your machine would take. Also, note we're doing a t60 with big bounds because the Bayesian tool says so; you could instead do a t60 the old-fashioned way, say with B1 = 3e8.
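A sketch of the split workflow being proposed (filenames are placeholders, and this assumes GMP-ECM is installed on both ends): stage 1 runs with B2 set equal to B1 so no stage 2 is attempted and -save writes the residues; the big-memory machine later resumes those residues and runs only stage 2:

```shell
# Stage 1 only, on the small-RAM machine: B2 = B1 suppresses stage 2,
# and -save appends one residue line per completed curve.
ecm -c 20 -save residues.txt 2900000000 2900000000 < number.txt

# Stage 2 only, on the big-memory machine: -resume reads the saved
# residues, and since stage 1 is already done to B1, ECM proceeds
# straight to stage 2 with its default (large) B2.
ecm -resume residues.txt 2900000000
```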
Let's try your suggestion. I got home a bit late, but am attempting to shift all my machines over to see where they are in the morning. I will be issuing the following to each:
[code] python ecm.py -c 20 -save residues${USER}.txt 2900000000 2900000000 <ecmIn [/code]This will result in a single residue file written to by all threads on a particular machine. Will this be OK, or does each thread need its own residue file? If this is OK, can I also combine residue files from multiple machines when I send them to you? Thanks for helping me learn new parts of this.
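For what it's worth, a GMP-ECM save file holds one self-contained text line per completed curve, so merging files should amount to simple concatenation. A minimal sketch, with placeholder filenames and dummy contents standing in for real residue lines:

```shell
# Simulate two per-machine residue files (dummy contents; real GMP-ECM
# save lines carry METHOD/B1/N/X fields, one curve per line).
printf 'METHOD=ECM; B1=2900000000; N=123; X=0x1;\n' > residues_hostA.txt
printf 'METHOD=ECM; B1=2900000000; N=123; X=0x2;\n' > residues_hostB.txt

# Because each curve's residue is one independent line, merging the
# files from different threads or machines is just concatenation.
cat residues_hostA.txt residues_hostB.txt > residues_all.txt
wc -l < residues_all.txt
```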