Just to brag a bit: I actually had a successful factoring session with Colab using the GPU branch of GMP-ECM for stage 1 and a local machine for stage 2.
It was only a 146-digit number and it took quite a while, but still, it worked! Colab connected me to a T4, which gave me 2560 CUDA cores on which I ran stage 1, with the [C]-save[/C] option. The local machine "watched" for the residue file, using the tunneling setup by chalsall described elsewhere, and used ecm.py by WraithX to run stage 2. A minor session, but it proved the concept. :smile:
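In case anyone wants to replicate the setup, the two sides boiled down to roughly the following. The file names, host, B1 value and the number variable are placeholders, and the ecm.py options are from memory, so check its help output first:
[CODE]# --- Colab / GPU side (GPU branch of GMP-ECM) ---
# Stage 1 only on the GPU (B2 = 0 so no CPU stage 2 is run here),
# saving one residue line per curve.
echo "$N" | ./ecm -gpu -save residues_146.txt 850000000 0

# Push the residue file to where the local machine can see it
# (here via the reverse tunnel chalsall describes).
scp residues_146.txt tunnel-host:/path/watched/

# --- Local / stage 2 side ---
# Wait for the residue file to show up, then hand it to ecm.py
# (WraithX's wrapper) for stage 2 on the local cores.
while [ ! -s /path/watched/residues_146.txt ]; do sleep 60; done
python ecm.py -threads 8 -resume /path/watched/residues_146.txt 850000000[/CODE]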
GMP-ECM has the option [C]-one[/C] to tell ECM to stop after the first factor is found. But when running on a GPU, stage 2 is performed on all the residues from stage 1 instead of stopping when a factor is found. Since GMP-ECM still seems to be single-threaded*, stage 2 over residues from that many GPU cores takes a lot longer than it needs to. I can use separate external programs, such as ecm.py, but that makes my scripts even more complicated.
Any help?

*I had thought at some point that GMP-ECM introduced multi-threading, but I can't find anything about it. Memory fluctuations?
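The direction I've been leaning is to split the GPU residue file and resume it in small chunks, checking GMP-ECM's exit status after each one. An untested sketch, where the B1 and the exit-status test are my reading of the README:
[CODE]# One residue (curve) per chunk; B1 should match the GPU run so
# ecm goes straight to stage 2.
split -l 1 residues_146.txt chunk_
for f in chunk_*; do
    ./ecm -resume "$f" 850000000
    rc=$?
    # Bit 1 (value 2) of the exit status means a factor was found,
    # if I'm reading the README correctly.
    if [ $(( rc & 2 )) -ne 0 ]; then
        echo "Factor found while resuming $f"
        break
    fi
done[/CODE]
But that's yet more script on top of what's already there, hence the question.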
For P-1 stage 2, GMP-ECM can be configured to use OpenMP. Everything else is single threaded.
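If I remember correctly, that is the [C]--enable-openmp[/C] switch at build time, roughly:
[CODE]# Build GMP-ECM with OpenMP support (switch name from memory;
# check ./configure --help).
./configure --enable-openmp
make && make check

# The thread count for the OpenMP part (P-1 stage 2) then comes
# from the usual environment variable.
OMP_NUM_THREADS=8 ./ecm -pm1 1e6 1e9 < number.txt[/CODE]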
[QUOTE=kruoli;601259]For P-1 stage 2, GMP-ECM can be configured to use OpenMP. Everything else is single threaded.[/QUOTE]
Thanks, Oliver! Maybe that's what I had seen and my memory is a bit foggy.
There's also some code under [URL="https://gitlab.inria.fr/zimmerma/ecm/-/blob/master/multiecm.c"]multiecm.c[/URL] in GMP-ECM. I've never used it (I prefer WraithX's ecm.py), but the header is
[CODE]/* multiecm.c - ECM with many curves with many torsion and/or in parallel

   Author: F. Morain
*/[/CODE]
It doesn't look like it's been worked on in 9 years, though.
[QUOTE=SethTro;601284]There's also some code under [URL="https://gitlab.inria.fr/zimmerma/ecm/-/blob/master/multiecm.c"]multiecm.c[/URL] in GMP-ECM. I've never used it (I prefer WraithX's ecm.py), but the header is
[CODE]/* multiecm.c - ECM with many curves with many torsion and/or in parallel

   Author: F. Morain
*/[/CODE]
It doesn't look like it's been worked on in 9 years, though.[/QUOTE]Thanks! I also use ecm.py for external stage 2 work, but it gets pretty complicated in my scripts.
[QUOTE=EdH;601201]
*I had thought at some point that GMP-ECM introduced multi-threading, but I can't find anything about it. Memory fluctuations?[/QUOTE]From NEWS:
[quote]Changes between GMP-ECM 6.4.4 and GMP-ECM 7.0:
* GMP-ECM is now thread-safe. In particular the "ecmfactor" binary can be called with say -t 17 to use 17 threads.[/quote]
I think I looked at it once but didn't find it useful. I don't think it can be used for doing stage 2 after a GPU has done stage 1.
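For reference, ecmfactor is only built in the source tree, not installed, and I believe the call looks roughly like this; the argument order is my guess, so check ecmfactor.c before relying on it:
[CODE]# <number> is a placeholder; -t is the thread count mentioned in NEWS.
./ecmfactor -t 8 <number> 1000000[/CODE]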
That seems familiar! I'm sure that's what I was thinking of. Thanks for finding it!
My next issue is the one you reference. I'm currently sending residues to a second machine while tasking the GPU machine with the next B1 level. But if stage 2 is successful on the second machine, I still have to wait for the GPU to finish its current B1 run. I've tried [C]pkill ecm[/C], but it doesn't seem to do anything when called.
[QUOTE=EdH;601331]I've tried [C]pkill ecm[/C], but it doesn't seem to do anything at the call.[/QUOTE]
Try [C]pkill -1 ecm[/C].
[QUOTE=Gimarel;601334]Try [C]pkill -1 ecm[/C].[/QUOTE]I believe that is working. Thanks!
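For the record, the watcher side now ends up doing roughly this once a stage 2 chunk reports a factor (the host name is a placeholder and the exit-status test is the same assumption as in my earlier sketch):
[CODE]./ecm -resume chunk_aa 850000000
if [ $(( $? & 2 )) -ne 0 ]; then      # bit 1 of the exit status: factor found
    ssh gpu-host 'pkill -1 ecm'       # -1 sends SIGHUP, which is what worked
fi                                    # here where plain pkill did not
[/CODE]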
[QUOTE=EdH;601201]GMP-ECM has the option [C]-one[/C] to tell ECM to stop after the first factor is found. But when running on a GPU, stage 2 is performed on all the residues from stage 1 instead of stopping when a factor is found. Since GMP-ECM still seems to be single-threaded*, stage 2 over residues from that many GPU cores takes a lot longer than it needs to. I can use separate external programs, such as ecm.py, but that makes my scripts even more complicated.
Any help?

*I had thought at some point that GMP-ECM introduced multi-threading, but I can't find anything about it. Memory fluctuations?[/QUOTE] I'm happy to look at this as a bug. I vaguely remember that I wasn't sure if I should always stop, or only if the cofactor is composite.