[QUOTE=PageFault;373824]I want to go extreme and have decided on 64 GB of ram. Who knows, perhaps I can find a monster factor in about a year or so..[/QUOTE]
An i7 4770K doesn't support that much RAM! Max memory size for Haswell is 32GB according to Intel: [URL]http://ark.intel.com/products/75123[/URL] Ivy Bridge-E (for instance an i7 4930K) does support up to 64GB: [URL]http://ark.intel.com/products/77780[/URL]
[QUOTE=Mini-Geek;373706]An ECM with the following bounds:
ECM2=1,2,9195881,-1,50000,5000000,150
Takes up to 10177 MB of memory in stage 2 (I allowed 12000 MB; that's all it took). Stage 1 took about 50 minutes of CPU time, and stage 2 took around 25-30 minutes. If you have 8 ECM processes running, you'll have approximately 3 in stage 2 at once (at this speed), using up to ~30GB. So having 32 GB of RAM for ECM/P-1 doesn't sound completely wasteful. Note that I did this test with just one core doing ECM, so it's probably not taxing memory bandwidth, another big factor. For ECM/P-1, I think I'd rather have 8GB of DDR3 2400 memory than 32GB of DDR3 1600 memory. (I could be wrong about that, but that's my guess)[/QUOTE]

Mini-Geek: ECM's bandwidth demand is very gentle. I use GMP-ECM as hyper-thread tasks to fill the i7 when LLR or NFS matrix-solving is nearly filling the CPU and memory bandwidth on the first four threads, and I get "free" work because the balance of low-bandwidth ECM and high-bandwidth LLR works out. Memory speed shouldn't matter here, provided one chooses 1600 or up.

PF: Having less memory leads ECM (not P-1!) to use slightly slower settings, rather than overlooking factors. Stage 2 can be broken into pieces to fit into whatever memory is available (within reason). This causes a small hit to stage 2 time only; stage 1 is not memory-intensive, and the two stages do not affect each other's times. So you don't need 64GB (or even 32!) to chase massive factors, though extra memory may get you a small speedup: say, 30% of stage 2 time for a doubling of memory when using a large B2, when stage 2 takes less time than stage 1.

You should get a copy of GMP-ECM and read its readme file for information on what B1 and B2 bounds are usually used to find factors of a given size. Note that Prime95 does not use the GMP-ECM default B2, due in part to memory requirements. Also notice that the readme discusses factors by digit length, where Mersenne folks often refer to bit length. A 30-digit factor is close to 100 bits!
GMP-ECM is not the most useful program for finding Mersenne factors; I am not recommending the program, just the readme. So, if power use isn't an issue, you're better off getting 16GB instead of 64 and building a cheap i5/8GB system with the $600 savings. You can always add a second pair of 8GB sticks in a year when they're half the price.
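The digit-vs-bit conversion in the post above is just a factor of log2(10) ≈ 3.32 bits per decimal digit. A quick Python sketch (illustrative, not part of any GIMPS tool):

```python
import math

BITS_PER_DIGIT = math.log2(10)  # ~3.3219 bits per decimal digit

def digits_to_bits(d):
    """Approximate bit length of a d-digit number."""
    return d * BITS_PER_DIGIT

def bits_to_digits(b):
    """Approximate decimal digit count of a b-bit number."""
    return b / BITS_PER_DIGIT

# A 30-digit factor is close to 100 bits, as the post says:
print(round(digits_to_bits(30)))   # 100
print(round(bits_to_digits(100)))  # 30
```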
[QUOTE=PageFault;373824]
ok I get part of it - choose low exponents and up the B1 / B2 bounds, in proportion to what I am seeing, and then register the exponents. What about the last argument? Will the server automatically adjust this at registration, or do I have to do it (i.e. how)? I want to go extreme and have decided on 64 GB of ram. Who knows, perhaps I can find a monster factor in about a year or so. [/QUOTE]

To expand my previous answer a bit and be more precise: if you want to test the low exponents I mentioned, you get them manually (Manual Testing -> Assignments), choose ECM as the work type, and enter the exponent you have previously chosen in the "Optional exponent range". In my experience, the program will hand you the exponent with the B1/B2 already set, according to how far the exponent has been tried before, so don't fiddle with whatever numbers are there. For these small exponents, the number of curves prescribed by the program usually defaults to 150. If you want to run more (or fewer) curves, just edit that part of the worktodo.txt line.

As for memory, note that the FFT size used varies greatly with the size of the exponent, and so does the memory needed. For exponents under 500K the memory used, even in stage 2, is not significantly large (I think under 1 GB). The running time depends on both the FFT size and the bounds used. As an example, I have done a large number of curves on M5503 with B1=44000000, and each curve took roughly the same time as on M2423 with B1=110000000: the smaller-FFT effect was nearly "cancelled" by the higher value of B1.

To see what exponent suits you best, have a look at the ECM Progress page and pick one, based on the above considerations and your particular preference. From that page you will be able to tell how "far" the exponents have been tried. Note in the column headers that the corresponding factor size is indicated in digits (not bits!).
So the algorithm progresses from the smaller to the larger factors.

One last thing: you may reserve several instances of the same exponent and run each one on a different core. The chance of finding a factor is closely tied to the number of curves run, so ECM tests are ideal candidates for parallelization. Once one of the workers finds a factor, you may stop them all: job done, move somewhere else.

Finding very large factors with ECM, with the current state of the available hardware, only seems possible for very small exponents, where small FFT sizes allow us to search through higher B1 bounds within a reasonable time limit. Even a 9M exponent would take a huge amount of time for a complete search up to B1=1000000, as the number of curves required is substantial. You may try this for yourself. That's why I insisted on small exponents.
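The point above, that the chance of a factor is tied to the number of curves run and that curves parallelize trivially, can be sketched numerically. Assuming each curve independently succeeds with probability 1/T, where T is the expected curve count for the target factor size (the T=430 figure below is illustrative only; consult the GMP-ECM readme for real values):

```python
def ecm_success_probability(curves_run, expected_curves):
    """Chance of at least one hit when each curve succeeds
    independently with probability 1/expected_curves."""
    return 1.0 - (1.0 - 1.0 / expected_curves) ** curves_run

# Illustrative numbers only: suppose ~430 curves is the expected
# count for a 30-digit factor at B1=250000 (see the GMP-ECM readme
# for actual figures). Four workers each running 150 curves on the
# same exponent give roughly a 3-in-4 chance of a hit at that level:
total_curves = 4 * 150
print(f"{ecm_success_probability(total_curves, 430):.1%}")
```

This is why stopping all workers once one finds a factor wastes little: each additional curve contributes the same independent sliver of probability regardless of which core runs it.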
Thanks guys.
I shall try to digest all of this over the weekend - my brain is a bit dizzy from trying to open two mines simultaneously - it resembles the task of Sisyphus.

Crunch on,
PF
OK - that's helpful.
Is there a difference between the two, apart from the newer one supporting less RAM (!!!??? WTF)? What is the ideal stick configuration - I notice the boards all have 8 slots? 8 x 8GB?

Cheers,
PF

[QUOTE=VictordeHolland;373828]A i7 4770k doesn't support that much ram! Max Memory Size for Haswell is 32GB according to Intel: [URL]http://ark.intel.com/products/75123[/URL] IvyBridge-E (for instance an i7 4930k) does support up to 64GB: [URL]http://ark.intel.com/products/77780[/URL][/QUOTE]
ECM2=F8FC338FC24CE5D099185B29AD7218DD,1,2,501013,-1,250000,25000000,150
OK - now what do I modify? I got this manually - I don't think I can stick it in worktodo without changing the setting to manual? Otherwise I have to clear out all existing assignments. Test=N/A before the ECM2?
No change, just stick it in worktodo. Make a copy of the file in case something goes wrong and P95 deletes it, but it looks like a perfectly valid assignment to me. The "Test=" part is for LL tests only; you don't need it here.
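As an aside on the memory figures discussed earlier in the thread: Prime95 takes its stage-2 memory allowance from local.txt rather than worktodo.txt. A minimal illustrative fragment (the Memory= keyword and MB units are Prime95's; the value is only an example, matching the 12000 MB Mini-Geek allowed):

```
Memory=12000
```

The same cap can also be set through the program's options dialog rather than by editing the file.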
[QUOTE=PageFault;374166]ECM2=xx,1,2,501013,-1,250000,25000000,150[/QUOTE]
It's not too important for ECM reservations, but FYI: you shouldn't post your assignment IDs publicly (that's the long hex number at the start of worktodo items that PrimeNet gives you). This can let people steal your assignments. With ECM, I could probably get an identical assignment (i.e. same exponent, same bounds) if I wanted to, but the same is not true of LLs.
[QUOTE=PageFault;374166]ECM2=F8FC338FC24CE5D099185B29AD7218DD,1,2,501013,-1,250000,25000000,150
[/QUOTE] If you don't mind satisfying my curiosity, can you report chip GHz, time per curve, and memory used during stage 2?

250,000 is the B1 value, which means these settings are intended to search for 30-digit (nearly 100-bit!) factors. The 150 at the end is the number of curves it will try on this number. That's fewer than the number needed before stepping up to a higher B1 and digit range makes sense, which is why someone else could get the identical reservation without wasting effort. Stepping up to B1 = 1M should use about double the memory in stage 2; that will determine whether you do a lengthy effort at 250,000 or take a few exponents into the 1M tests that are best for finding 35-digit factors.
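For reference, the ECM2= fields being read off above are positional. Here is a minimal, hypothetical Python parser (not part of Prime95) for the two worktodo forms seen in this thread, with and without the assignment ID:

```python
def parse_ecm2(line):
    """Parse a worktodo ECM2= line into named fields.

    Two forms appear in this thread:
      ECM2=<32-hex assignment id>,k,b,n,c,B1,B2,curves
      ECM2=k,b,n,c,B1,B2,curves
    For a Mersenne number 2^n - 1: k=1, b=2, c=-1.
    """
    if not line.startswith("ECM2="):
        raise ValueError("not an ECM2 worktodo line")
    fields = line[len("ECM2="):].split(",")
    # An assignment ID makes 8 fields; without one there are 7.
    aid = fields.pop(0) if len(fields) == 8 else None
    k, b, n, c, b1, b2, curves = (int(f) for f in fields)
    return {"assignment_id": aid, "k": k, "b": b, "n": n, "c": c,
            "B1": b1, "B2": b2, "curves": curves}

work = parse_ecm2(
    "ECM2=F8FC338FC24CE5D099185B29AD7218DD,1,2,501013,-1,250000,25000000,150")
print(work["n"], work["B1"], work["curves"])  # 501013 250000 150
```

So bumping the curve count or B1 means editing the seventh-from-last and third-from-last fields respectively, leaving the rest of the line alone.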
I forgot about the assignment key - I was on the sidelines for a couple of years, unable to replace my obsolete junk - an antique copper-plated calculator from Taiwan, or electronic abacus. I'll slightly change the parameters so that nobody can poach it (why anyone would is beyond me; it's not as if this kind of work is in short supply).
vbcurtis: I'll likely build my personal machine and then run the small M tests. I have the 500000M above and also a 50000M. The CPU will be an i7 with HT, and it will do these ECM tests exclusively. I'll get the DDR3 2400 - what is better for this custom build, 8 x 4GB or 2 x 16GB? Once the OC is settled (it will do 20 triple-checks per core, small M, and they must [B]all[/B] match the database) I will post the stage 2 details for you.
[QUOTE=PageFault;374184]vbcurtis:
I'll likely build my personal machine and then run the small M tests. I have the 500000M above and also a 50000M. CPU will be an i7 with HT and it will do these ECM tests exclusively. I'll get the DDR3 2400, what is better for this custom build - 8 x 4 GB or 2 x 16? Once the OC is settled (It will do 20 triplechecks per core, small M and they must [B]all[/B] match database) I will post the stage 2 details for you.[/QUOTE] Nobody would poach this work - he was ensuring you don't do that for a future LLR reservation. I was asking about your current machine because I have an old Athlon X2 with 2GB on which I might be interested in trying ECM work like this.