An HP Z600 workstation with dual 6-core Xeons is around $400 shipped. That's enough hardware to crack GNFS-160 in ~10 days. There are better solutions for similar money, I'm sure; I just happen to have purchased a couple of these over the years.
[QUOTE=swellman;526570]FWIW, I plan on working the list [url=http://www.chiark.greenend.org.uk/ucgi/~twomack/homcun.pl?sortby=gnfs&show=all]here[/url], focusing on all GNFS < 150 and SNFS < 200. I hope others will join me in the effort.
I figure if I don’t have hardware to shift boulders, at least I can shovel some gravel.[/QUOTE]

All help is welcome, whether of the boulder-shifting or gravel-shoveling variety. If your preferred method of doing the latter is to run NFS on the smaller problems, then that's great. I'll just mention that you can also help prepare the boulders by shoveling gravel in a different way. As I've said, there are lots of harder composites which need more ECM before they move to NFS. I'll be keeping an ECMnet server filled with those, for anybody who wants to use it. Details in [URL="https://mersenneforum.org/showthread.php?p=526452#post526452"]this post[/URL].
[QUOTE=R.D. Silverman;526585]Where does one get 128 cores for just £2000? !!! I have priced dual mobo Xeon
based PC's and they cost a lot more than £2000. And they only had 48 cores (plus hyperthreading). [24 cores per Xeon]. Of course to keep the cores busy one would need >=2G/core of DRAM. I am running on a single i7 with 6 cores (plus hyperthreading). Of course, the h/w is 6 years old. Currently when I run NFS it runs a separate process on each core. [The code was written when multi-core CPU's were not yet available]. When I rewrite my siever I plan to do a pthread implementation so the threads share a lot of common data and thus cut down on memory requirements.[/QUOTE]

128 *threads*; four boxes, each with two sockets, each with eight cores, second-hand from [url]https://www.bargainhardware.co.uk/foxconn-1u-lga2011-cloud-server-configure-to-order[/url]. They're a lot cheaper now than when I bought them: £250 with 64GB RAM or £600 with 256GB. To be fair, I also had to buy a 10Gbit network switch, which was about £600, since the only network connection they have is 10Gbit SFP. Or you could just use USB-to-Ethernet converters (those are definitely fast enough if you just want to transfer relations as you sieve, and 10Gbit Ethernet isn't fast enough to make linear algebra over MPI attractive).
[QUOTE=swellman;526570]FWIW, I plan on working the list [URL="http://www.chiark.greenend.org.uk/ucgi/~twomack/homcun.pl?sortby=gnfs&show=all"]here[/URL], focusing on all GNFS < 150 and SNFS < 200. I hope others will join me in the effort.
I figure if I don’t have hardware to shift boulders, at least I can shovel some gravel.[/QUOTE]

I will look at the list starting with GNFS difficulty >150 for a little while and reserve there as I "play." Have these been ECMed, and are therefore ready for GNFS, or should I perform some ECM? I saw on Paul Leyland's "Homogeneous Cunningham numbers" page that factors should be submitted to "Jon" via email, in addition to reporting them on the Reservation page. Does neither forward factors to the other? If I include submitting them to factordb, then I should report them to three separate places?
[QUOTE=EdH;526649]I will look at the list starting with GNFS difficulty >150 for a little while and reserve there as I "play." Have these been ECMed and therefore ready for GNFS or should I perform some ECM?
I saw on Paul Leyland's "Homogeneous Cunningham numbers" page that factors should be submitted to "Jon" via email, in addition to reporting them on the Reservation page. Neither submit factors to the other? If I include submitting them to factordb, then I should report them to three separate places?[/QUOTE]

The best way to see how much ECM a number has had is to consult the ECMnet server at ecm.unshlump.com:8194. It is reasonably up to date, with the exception of some work done by Bob that we're still trying to nail down. You can access it via HTTP, but some browsers will complain because the ECMnet server software uses an old version of HTTP; curl or wget work fine, though. Alternatively, you can always ask here or by email about any particular number. Right now all composites in the tables have received at least 3950 curves with a B1 of 43e6 (i.e. about half a t50). Many have received significantly more than this.

Paul no longer runs the project, and his page is out of date. Consult the first post in this thread for the correct location. My suggestion is to report your factor on the reservation page, then send email to me (Jon) or post in this thread. I will handle updating the tables and factorDB. In fact, you don't even have to report on the reservation page if you don't want to; if I receive a report, I will handle that as well.
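As a rough sanity check on that "about half a t50" figure: treating each ECM curve as an independent trial, the expected coverage after N curves is 1 - exp(-N/C), where C is the number of curves needed for a full t-level. The sketch below assumes C ≈ 7550 curves at B1=43e6 for a t50, a commonly quoted GMP-ECM figure; verify against your own build's `ecm -v` output before relying on it.

```python
import math

# Assumed figure: ~7550 curves at B1=43e6 for a full t50
# (commonly quoted GMP-ECM expected-curve count; check `ecm -v`).
CURVES_PER_T50 = 7550

def t50_coverage(curves_run: int) -> float:
    """Probability a 50-digit factor would already have been found,
    modeling each curve as an independent trial."""
    return 1.0 - math.exp(-curves_run / CURVES_PER_T50)

print(f"{t50_coverage(3950):.0%}")  # roughly 40%, i.e. about half a t50
```

So 3950 curves gives roughly 40% of a t50's detection probability, which matches the "about half" characterization above.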
[QUOTE=jyb;526653]The best way to see how much ECM a number has had is to consult the ECMnet server at ecm.unshlump.com:8194. It is reasonably up to date, with the exception of some work done by Bob that we're still trying to nail down. You can access it via HTTP, but some browsers will complain because the ECMnet server software uses an old version of HTTP. You can use curl or wget, though. Alternatively, you can always ask here or by email about any particular number. Right now all composites in the tables have received at least 3950 curves with a B1 of 43e6 (i.e. about half a t50). Many have received significantly more than this.
Paul no longer runs the project, and his page is out of date. Consult the first post in this thread for the correct location. My suggestion is to report your factor on the reservation page, then send email to me (Jon) or post in this thread. I will handle updating the tables and factorDB. In fact, you don't even have to report on the reservation page if you don't want; if I receive a report, I will handle that as well.[/QUOTE]

Thanks! I will try to get something running in the next couple of days or so...
Comparison of criteria
[CODE]* SNFS polynomials should have leading coefficients < 10^5
* SNFS tasks with degree 6 polynomials (preferred) should have 225 <= difficulty <= 250, ECM to 2/9 of SNFS difficulty
* SNFS degree 5 tasks should have 210 <= difficulty <= 235, ECM to 2/9 of (15+SNFS difficulty)
* SNFS degree 4 tasks should have 195 <= difficulty <= 220, ECM to 2/9 of (30+SNFS difficulty)
* GNFS tasks should have 155 <= difficulty <= 170, ECM to 2/7 of GNFS difficulty (a bit more than that for difficulty close to 170).[/CODE]

[QUOTE=jyb;526510]Yes, this is a good question. I have been operating on the following guidelines:
- Lower limit for SNFS quintics/sextics is 230.
- Lower limit for SNFS quartics is about 205.
- Lower limit for GNFS is 150.

For 14e:
- Upper limit for SNFS quintics/sextics is low 250's.
- Upper limit for SNFS quartics is about 220 (14e can do higher, but 15e is likely to be more efficient).
- Upper limit for GNFS is around 175.

I don't have enough experience with 15e to have come up with good upper limits for it, but that doesn't really concern us here. There are plenty of HCN composites which are clearly appropriate for 15e. But the question is how much ECM these should have prior to being added to the 15e queue. Any number which is within the 14e limits I describe above can probably be queued after a t55. (Maybe the largest of the GNFS jobs should have a little more ECM first.) However, I believe that anything above those limits (i.e. pretty much anything which is more appropriate for 15e) should get more before starting NFS. If you disagree, please give a specific amount of ECM which you think is appropriate for given digit counts/difficulties. Tom gave [URL="https://mersenneforum.org/showthread.php?p=525617#post525617"]this metric[/URL] recently, which seems pretty good, but of course it requires empirical data on runtimes. Do you have a better one?[/QUOTE]

While I love the idea of a revised 14e GNFS difficulty range of 150-175, I would suggest an ECM test level equal to 2/7 of the difficulty for 150-165, and 0.31 for 165-175.
This accounts for the extra time required for poly search and test sieving on the harder numbers. It's a pretty simple model; others may have better or more sophisticated suggestions.

For the quartics, debroulx suggested an SNFS difficulty range of 195-220 (with ECM to 2/9*[SNFS diff+30]), while jyb suggests 205-220. Personally I like a lower limit of SNFS difficulty 200 for quartics, with the same ECM requirement suggested by debroulx, but that’s just an individual opinion. Quartics are such a strange beast!

Octics are strange and inefficient, but sometimes necessary. Moving octics to 15e to avoid inefficiency at lower SNFS difficulty levels seems to work, but do we have enough data points to quantify that?

Reconciling degree 5 and 6 SNFS is a subject for another post.
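The guidelines above can be collected into one small lookup. The thresholds and ratios below are exactly the ones quoted in this discussion (the 2/9-with-offset rules for SNFS by degree, and the 2/7 vs 0.31 split for GNFS proposed above); the function itself and its name are just an illustrative sketch, not project tooling.

```python
def ecm_level(method: str, difficulty: float, degree: int = 6) -> float:
    """Suggested ECM test level (in digits) before queueing a job,
    per the guidelines discussed in this thread (illustrative sketch)."""
    if method == "gnfs":
        # Proposed split: 2/7 of difficulty up to 165, 0.31 above that.
        ratio = 2 / 7 if difficulty <= 165 else 0.31
        return ratio * difficulty
    if method == "snfs":
        # Degree-dependent offsets from the summary list:
        # sextics +0, quintics +15, quartics +30, all at ratio 2/9.
        offset = {6: 0, 5: 15, 4: 30}[degree]
        return (difficulty + offset) * 2 / 9
    raise ValueError(f"unknown method: {method}")

print(f"{ecm_level('gnfs', 160):.1f}")          # 2/7 of 160
print(f"{ecm_level('snfs', 210, degree=4):.1f}")  # 2/9 of (210+30)
```

For example, a GNFS-160 job comes out at about t46, while a degree-4 SNFS job of difficulty 210 comes out at about t53.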
12+11,245 - c152:
[code]32913735069923522070869387824816670850568285689991245293371
325481543596415495855962630160859078757398716237496419004674671180663909569530868809156229311[/code]
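A quick sanity check before reporting a split anywhere is to confirm the two factors recombine to a number of the stated size; here the two factors above should multiply back to the c152. A minimal sketch (Python's arbitrary-precision integers make this trivial):

```python
# The two reported factors of the c152 of 12+11,245.
p = 32913735069923522070869387824816670850568285689991245293371
q = 325481543596415495855962630160859078757398716237496419004674671180663909569530868809156229311

n = p * q
# A p59 * p93 split should recombine into the 152-digit composite.
assert len(str(n)) == 152
print(f"p: {len(str(p))} digits, q: {len(str(q))} digits, "
      f"product: {len(str(n))} digits")
```

(This only checks sizes and the product; primality of the cofactor still needs its own test, e.g. with gmpy2 or PARI/GP.)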
5+2,426 - c152:
[code]77545065690259418551794755538578757271582748923001124226412994303689257072536841
933252850906480205624466779193439030761865955207552445999137887160802721[/code]My scripts are now automatically updating factordb.
11+7,288 - c153:
[code]62810859081165879408050227175713686797900754129217
14496198764206105948214556620269595824785190594447578608707582028587996695968320038000452676843664060609[/code]Found by ECM with B1=11e7. :smile:
[QUOTE=EdH;526987]5+2,426 - c152:
[code]77545065690259418551794755538578757271582748923001124226412994303689257072536841
933252850906480205624466779193439030761865955207552445999137887160802721[/code]My scripts are now automatically updating factordb.[/QUOTE]

Thanks. Do your scripts handle Aurifeuillian factors?
Powered by vBulletin® Version 3.8.11
Copyright ©2000 - 2021, Jelsoft Enterprises Ltd.