[QUOTE=kriesel;545756]I compute the odds of 6 factors per bit level and exponent to be rather small[/QUOTE]While there are quite a few examples of two factors in the same bitlevel+class, I have found zero examples of even 3 with matching class/bitlevel, so the chance of 10 is not, I think, something we need to worry much about.
When you find 10 with mfaktc tell me and I will search the rest of the class "by q". :razz:
Just to make sure none escaped :w00t:
[QUOTE=James Heinrich;545759]While there are quite a few examples of two factors in the same bitlevel+class, I have found zero examples of even 3 with matching class/bitlevel, so the chance of 10 is not, I think, something we need to worry much about.[/QUOTE]In a spreadsheet I estimated odds for n per bitlevel/class/exponent, for n from 0 to 6, without regard to whether a fewer number might be found at a lower bitlevel or class, and concluded more than 5 is improbable, based on 40 bits to 92 bits sums. [URL]https://www.mersenneforum.org/showpost.php?p=520982&postcount=5[/URL]
There are a few things that could make the count in the server database lower than these estimates:
1. I messed up the calculations and overestimated the probabilities. A distinct possibility.
2. Early versions of the software only coped with a small number of factors per bitlevel/class/exponent, such as 1 or 2. I think that was the case around mfaktc v0.04 or v0.05, per [URL]https://mersenneforum.org/showpost.php?p=205332&postcount=131[/URL], but I don't know when it began to handle more; possibly as soon as mfaktc v0.05: [URL]https://mersenneforum.org/showpost.php?p=206448&postcount=151[/URL]. I'm not sure about mfakto, prime95/mprime, or whatever else was used before GPU factoring. User TJAOI probably does not change the picture much.
3. Factoring stops after the current bit level or class once a factor is found for an exponent, so it does not find coincident factors that occur in a higher class or bitlevel for the same exponent. This effect would be minor.
4. Whatever I'm not thinking of now.
[QUOTE=kriesel;545890]In a spreadsheet I estimated odds for n per bitlevel/class/exponent, for n from 0 to 6, without regard to whether a fewer number might be found at a lower bitlevel or class, and concluded more than 5 is improbable, based on 40 bits to 92 bits sums. [URL]https://www.mersenneforum.org/showpost.php?p=520982&postcount=5[/URL]
3. [U]Factoring stops after the current bit level or class[/U], after a factor is found for an exponent, so does not find coincident factors that occur in a higher class or bitlevel for the same exponent. This effect would be minor. 4. Whatever I'm not thinking of now.[/QUOTE] Stopping or continuing is controlled in mfaktc.ini:

# possible values for StopAfterFactor:
# 0: Do not stop the current assignment after a factor was found.
# 1: When a factor was found for the current assignment stop after the
#    current bitlevel. This makes only sense when Stages is enabled.
# 2: When a factor was found for the current assignment stop after the
#    current class.
#
# Default: StopAfterFactor=1
StopAfterFactor=1
[QUOTE=kladner;545891]Stopping or continuing is controlled in mfaktc.ini.
# possible values for StopAfterFactor:
# 0: Do not stop the current assignment after a factor was found.
# 1: When a factor was found for the current assignment stop after the
#    current bitlevel. This makes only sense when Stages is enabled.
# 2: When a factor was found for the current assignment stop after the
#    current class.
#
# Default: StopAfterFactor=1
StopAfterFactor=1[/QUOTE]Exactly my point. It also applies to mfakto. I have a vague recollection of something similar in prime95 but cannot confirm that in the documentation. (And note that TF credit when a factor is found is given as if StopAfterFactor=2, even if what was run used StopAfterFactor=1 or 0.)
Just got a notebook with a 2060 in it. What version should I grab and from where to get going again?
[QUOTE=tului;547865]Just got a notebook with a 2060 in it. What version should I grab and from where to get going again?[/QUOTE]RTX 2060? Probably the CUDA10-capable v0.21 from [url]https://download.mersenne.ca/mfaktc/mfaktc-0.21[/url]
Programmers, please consider modifying mfaktc to allow larger maximum GpuSieveSize than the current maximum 2047 (megabits), and sharing the enhancement.
Numerous GPU models show increased throughput going from GpuSieveSize 1024 to 2047, even ones as old and slow as the GTX 1060. The increase for faster models such as the RTX 2080 Super is significant, and it appears there is a little more to be gained if the code is modified again to support yet larger values. RTX 2080-class cards produce more throughput with multiple instances, indicating there's more yet to be gained with larger GpuSieveSize, and charts of single-instance performance versus GpuSieveSize settings also show a positive slope near the current maximum. Future faster GPUs will likely continue the trend we have seen in models released in the past few years, where larger GpuSieveSize has a large impact on faster GPUs. (RTX 3080 is coming...)

The bit-position computation for 2048 x 2^20 = 2^31 requires an unsigned 32-bit integer or larger, but the actual code uses a signed 32-bit integer, maximum 2^31 - 1. The RTX 2080-class cards would benefit from a larger GpuSieveSize than the program currently supports. Unsigned 32-bit would be good to a maximum of 4095. But there's code that uses a negative value in the same variable, which would need to be changed. Jumping to signed 64-bit is a possibility, but I wonder how much that would impact overall performance. There's no need to support many more bits of GpuSieveSize than are present in GPU VRAM, typically 4GB = 32Gbits = 32768Mbits to 16GB = 128Gbits = 131072Mbits these days; allow another factor of four to cover the next several years. There may be other, smaller limits, such as what the GPU model's OpenCL support is capable of. (I've already seen that on old GPU models, at ~1023 or 511 depending on model or perhaps driver version.) For some background, see [URL]https://mersenneforum.org/showpost.php?p=525731&postcount=3202[/URL] and related posts.
CUDA 11?
Any builds that support the newest CUDA 11? Hopefully there's something about CUDA 11 that could potentially bring some sort of speed up?
[QUOTE=xx005fs;549372]Any builds that support the newest CUDA 11? Hopefully there's something about CUDA 11 that could potentially bring some sort of speed up?[/QUOTE]It's probably prep for RTX30xx. For a given gpu, later CUDA versions may be SLOWER. [url]https://www.tomsguide.com/news/rtx-3080[/url]
[QUOTE=kriesel;549373]It's probably prep for RTX30xx. For a given gpu, later CUDA versions may be SLOWER. [URL]https://www.tomsguide.com/news/rtx-3080[/URL][/QUOTE]
I am testing the current 2047 version on my GTX 1080. It seems to run slightly faster, around 1,090 GHz-d/day; the previous one I had been using ran about 1,045. I suppose there are other advantages. Either way, it started without any problems.