mersenneforum.org Reservations

 2019-05-24, 21:36 #452 Dylan14     "Dylan" Mar 2017 3×173 Posts Reserving MMFactor=31,176e15,178e15.
 2019-05-29, 12:12 #453 Dylan14     "Dylan" Mar 2017 51910 Posts Reserving MMFactor=31,178e15,182e15.
 2019-06-30, 13:59 #454 Dylan14     "Dylan" Mar 2017 3×173 Posts Reserving MMFactor=89,110e14,115e14.
 2019-07-04, 12:23 #455 Dylan14     "Dylan" Mar 2017 3·173 Posts Reserving MMFactor=89,115e14,125e14.
 2019-07-12, 11:06 #456 Dylan14     "Dylan" Mar 2017 3·173 Posts Reserving MMFactor=89,125e14,135e14.
 2019-07-20, 00:03 #457 Dylan14     "Dylan" Mar 2017 3×173 Posts Reserving MMFactor=31,182e15,188e15.
 2019-08-29, 12:40 #458 Dylan14     "Dylan" Mar 2017 3×173 Posts Reserving MMFactor=31,188e15,200e15.
 2019-10-10, 06:57 #459 wombatman I moo ablest echo power!     May 2013 3×577 Posts Reserving MMFactor=31,200e15,205e15 for testing on Google Colab. So far it works, but it is somewhat slow on a Tesla K80 (CC 3.7). Still, it's a free resource and is pretty neat to try out.
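As an aside on what these reservations cover: factors of MMp = 2^Mp − 1 necessarily have the form f = 2·k·Mp + 1 (a standard fact about factors of Mersenne numbers, since Mp is prime), so a reserved k range corresponds directly to a factor bit level. A small Python sketch of the mapping, nothing mmff-specific:

```python
# Map a reserved k range for MM31 to the bit size of the candidate
# factors, using the factor form f = 2*k*M31 + 1 (M31 = 2^31 - 1).
M31 = 2**31 - 1

def factor_bits(k):
    """Bit length of the candidate factor 2*k*M31 + 1."""
    return (2 * k * M31 + 1).bit_length()

# The range reserved above, MMFactor=31,200e15,205e15:
print(factor_bits(200 * 10**15), factor_bits(205 * 10**15))  # 90-bit candidates
```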
 2019-10-11, 10:35 #460 Fan Ming   Oct 2019 5F16 Posts
Hello everyone, I'm new here.
I would like to reserve MM127, k range 113500T to 114000T.
Unfortunately, mmff cannot run on my Nvidia GTX 1660 card under Windows (it hits a compute-class problem), and I have no Linux installation (I've never used Linux). If that problem is ever solved, a range like this should take only about 3 days to complete.
Recently I heard that Google Colab provides free, powerful GPU instances, so I decided to give it a try. That turned into a slightly long story...

When I uploaded the CUDA 8.0 Linux mmff executable to Colab, it failed with an error that the CUDA 8.0 libcudart library could not be found; a CUDA 10.0 program, by contrast, found its library and ran fine. Knowing nothing about Linux, I couldn't tell whether the Colab image had the CUDA 8.0 runtime somewhere, so I decided to compile a CUDA 10.0 build of mmff in the Colab notebook itself. Not knowing how to compile on Linux either, I learned from Google that it comes down to typing 'make', and I changed the CUDA path in the makefile to Colab's path, following the notes in the makefile.

The build then failed with "error 'compute_20' not supported", so I changed the generate-code part of NVCCFLAGS to "NVCCFLAGS += --generate-code arch=compute_37,code=sm_37 --generate-code arch=compute_60,code=sm_60 --generate-code arch=compute_75,code=sm_75". (I can't really read the makefile; this change was just an attempt. I removed the compute_20 entries because I guessed the CUDA version no longer supported them, and I picked those compute capabilities because they match the GPUs Colab offers. I'm not sure the change was strictly necessary; perhaps the error only appeared because I had forgotten to enable the GPU accelerator at the time. In any case, after the change, and after turning the GPU setting on, the error was gone.)

Next, the link stage failed with "undefined reference to `__gxx_personality_v0'". Googling said the fix was to add "-libstdc++" to the makefile, but not where, so I added it in many places at random and failed many times. Finally I appended it to the "MMFFLIB = -lcudart -lm" line (mfaktc's makefile appends that string to its -lcudart -lm line, so I tried the same in mmff's), and this time the compilation and link succeeded with no further errors! The compiled program then seemed to work: it reported the correct CUDA runtime version 10.0, produced no runtime errors, and fetched an assignment successfully. Attached are the makefile I used (my only changes are the ones described in bold above) and the Linux CUDA 10.0 executable I compiled.
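For anyone attempting the same build, the two makefile edits described above would look roughly like this. This is only a sketch against mmff 0.28's makefile as I understand it; variable names may differ in your copy, and note that the conventional spelling of the C++ runtime link flag is -lstdc++ (i.e. -l stdc++):

```make
# Target the compute capabilities of GPUs Colab actually hands out
# (K80 = 3.7, P100 = 6.0, T4 = 7.5); CUDA 10 no longer accepts compute_20.
NVCCFLAGS += --generate-code arch=compute_37,code=sm_37 \
             --generate-code arch=compute_60,code=sm_60 \
             --generate-code arch=compute_75,code=sm_75

# Link the C++ runtime to resolve "undefined reference to __gxx_personality_v0"
MMFFLIB = -lcudart -lm -lstdc++
```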
However, I don't know whether the computed results are also correct. I tried a few cases:
Code:
MMFactor=31,64,65
MMFactor=61,549e9,550e9
MMFactor=31,56e9,57e9
MMFactor=31,54e9,55e9
MMFactor=31,414.5e11,415e11
MMFactor=31,414e11,415e11
MMFactor=31,416e11,417e11
And the results were as follows:
Code:
no factor for MM31 in k range: 4294967298 to 8589934595 (65-bit factors) [mmff 0.28 mfaktc_barrett89_M31gs]
no factor for MM61 in k range: 549000000000 to 549755813887 (101-bit factors) [mmff 0.28 mfaktc_barrett108_M61gs]
no factor for MM61 in k range: 549755813888 to 550000000000 (102-bit factors) [mmff 0.28 mfaktc_barrett108_M61gs]
MM31 has a factor: 242557615644693265201 [TF:67:68*:mmff 0.28 mfaktc_barrett89_M31gs]
found 1 factor for MM31 in k range: 56G to 57G (68-bit factors) (partially tested) [mmff 0.28 mfaktc_barrett89_M31gs]
no factor for MM31 in k range: 54G to 55G (68-bit factors) [mmff 0.28 mfaktc_barrett89_M31gs]
no factor for MM31 in k range: 41450G to 41500G (78-bit factors) [mmff 0.28 mfaktc_barrett89_M31gs]
MM31 has a factor: 178021379228511215367151 [TF:77:78*:mmff 0.28 mfaktc_barrett89_M31gs]
found 1 factor for MM31 in k range: 41400G to 41500G (78-bit factors) (partially tested) [mmff 0.28 mfaktc_barrett89_M31gs]
no factor for MM31 in k range: 41600G to 41700G (78-bit factors) [mmff 0.28 mfaktc_barrett89_M31gs]
It seems the results are correct: no factors were missed, and there were no false positives. More testing may still be needed, though, and since I'm not familiar with mmff's internals I have no idea how to test it further (can anyone help? Thanks!). I haven't tested Fermat numbers either.
I'm now running this compiled executable on Google Colab. If it passes further testing and is confirmed reliable, that would be great, and this range could be completed in a few months: I got a Tesla K80 instance, and it can be used for dozens of hours per week.
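One further check that needs nothing but Python: any factor f of MMp = 2^Mp − 1 must satisfy 2^Mp ≡ 1 (mod f), and (since Mp is prime) must have the form f = 2·k·Mp + 1. Both conditions are cheap to verify with three-argument pow(); a minimal sketch against the two factors reported in the output above:

```python
# Independent check of the MM31 factors reported above, without mmff.
# Any factor f of MM31 = 2^M31 - 1 (with M31 = 2^31 - 1 prime) must
# satisfy 2^M31 == 1 (mod f) and have the form f = 2*k*M31 + 1.
M31 = 2**31 - 1

def check_mm31_factor(f):
    """Verify f divides MM31 and return the k of its 2*k*M31 + 1 form."""
    assert pow(2, M31, f) == 1          # only ~31 modular squarings, instant
    assert (f - 1) % (2 * M31) == 0     # required factor shape
    return (f - 1) // (2 * M31)

# The two factors found in the runs above, with their reported k ranges:
k1 = check_mm31_factor(242557615644693265201)      # range 56G-57G
k2 = check_mm31_factor(178021379228511215367151)   # range 41400G-41500G
assert 56 * 10**9 <= k1 <= 57 * 10**9
assert 41400 * 10**9 <= k2 <= 41500 * 10**9
print("both factors verified; k =", k1, k2)
```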
(BTW:
1. I tried to e-mail Luigi about a week ago but got no reply. Did something go wrong? Was my e-mail treated as spam?
2. Sorry for my poor English...)
Attached Files
 mmff-0.28_CUDA10_.zip (1.10 MB, 36 views)

 2019-10-11, 13:05 #461 Fan Ming   Oct 2019 1378 Posts I'm sorry for the duplicate reply... The first time I posted, the success message appeared only very briefly, before I could see that my reply was being held for moderation. I checked the FAQ, which has no information about this, so I thought my reply was lost and posted it again. Only the second time did I see that my reply needed to be pre-checked by a moderator. I'm sorry.
 2019-10-14, 21:34 #462 ET_ Banned     "Luigi" Aug 2002 Team Italia 25×149 Posts I had some issues to solve (my bad, sorry), but I finally got in touch with Fan Ming, and we exchanged opinions. Please feel free to join the thread if you have anything more to add.
