[QUOTE=James Heinrich;504378]...I'm curious what [I]Neutron3529[/I] is doing to get 2MB worktodo... is he my mystery TF'er who reserves [URL="https://www.mersenne.ca/tf1G.php#assign_count_by_age"]a million exponents at a time[/URL] and then takes 2 weeks to submit the results?[/QUOTE]
The simple math says he's running nearly 3,000 per hour. He seems to have found a way around your limitations. He may be using a repeated batch process to do this: 1,000 fetches of 1,000 assignments each, with a very short timeout between them. I would not want to have to concatenate all of that by hand; even 100 files would take some time, and he would also have to rename each download. If he has any programming experience, and I suspect he does, he could automate the entire process to run unattended.
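The concatenate-and-rename chore described above is trivial to script. A minimal sketch, assuming the hypothetical naming scheme worktodo_001.txt, worktodo_002.txt, ... for the renamed downloads (the filenames are an illustration, not anything mersenne.org actually produces):

```python
import glob

def concat_worktodo(pattern="worktodo_*.txt", out_name="worktodo.txt"):
    """Concatenate renamed batch downloads (hypothetical names:
    worktodo_001.txt, worktodo_002.txt, ...) into one worktodo.txt.
    Returns the number of batch files merged."""
    parts = sorted(glob.glob(pattern))
    with open(out_name, "w") as out:
        for name in parts:
            with open(name) as f:
                out.write(f.read())
    return len(parts)
```

Sorting the glob results keeps the assignments in download order, which matters if the batches were fetched in exponent order.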
[QUOTE=storm5510;504559]He seems to have found a way around your limitations.[/QUOTE]Whoever it is isn't doing anything "wrong". As long as they keep reporting back 25 THz-days of work once a week or so I'm not complaining much, although I would still much prefer that whoever it is would reserve and submit only a day's worth at a time. But as long as the work's getting done...
[QUOTE=TheJudger;504352]Hi,
There is currently no doublechecking for TF work and thus you can't improve (reduce time) for that. :yucky: You want a checksum or whatever for all factor attempts in each class, right? Won't work for multiple reasons:[LIST][*]the list of factor candidates depends on sieve parameters (e.g. more sieving, fewer candidates)[*]even with the same settings the list of candidates depends on hardware, because we ignore memory conflicts, so sometimes a composite factor isn't cleared by the sieve while on the next run it is.[/LIST] Is this a common use case? Keep in mind that I focus on the current Primenet wavefront; a 2 MB worktodo isn't general usage. Maybe the easiest solution is to split your worktodo into reasonable sizes and put a small script around mfaktc (put a small worktodo.txt into the directory, start mfaktc, let it run until it has finished worktodo.txt, repeat with the next worktodo.txt).[/QUOTE] I finally came up with a possible idea. First, it is easy to maintain a list of the smallest factor candidates, say about 100 per class. For each candidate we could run the BPSW test to check whether it is a (pseudo)prime. If none of the candidates is a pseudoprime we could emit a special marker; otherwise we could compute a checksum from a pseudoprime candidate. It is quite hard to fake a pseudoprime, and since the density of primes among odd numbers below 2^90 is about 2/(90 log 2) ~ 1/31, the probability that none of the ~100 candidates in a class is prime is very low (< 0.04). Hence a residue check value would almost always be available.
[QUOTE=TheJudger]There is currently no double-checking for TF work and thus you can't improve (reduce time) for that. :yucky:
You want a checksum or whatever for all factor attempts in each class, right? Won't work for multiple reasons:[/QUOTE] There is more than enough DC work as it is without checking TF. Except for the wave-front, there is far too much TF going on. By wave-front, I mean [I]GPUto72[/I].
[QUOTE=storm5510;504670]There is more than enough DC work as it is without checking TF. Except for the wave-front, there is far too much TF going on. By wave-front, I mean [I]GPUto72[/I].[/QUOTE]
TF _is_ checked, in the sense of detecting false positive factors, when submitted. The payoff on detecting missed factors (false negatives) is very low.
[QUOTE=kriesel;504693]TF _is_ checked, in the sense of detecting false positive factors, when submitted. The payoff on detecting missed factors (false negatives) is very low.[/QUOTE]
If I may please share... Early on in the GPU72 effort I would sometimes notice people whose results were "unusual" (read: an unexpectedly low "success" rate). I spent a lot of time and money rechecking their work, and never once did I find a "cheat". And at the end of the day, it doesn't really matter all that much if a factor is missed. Finding a factor simply removes the candidate from the LL'ing and then DC'ing effort. The latter are definitive as to primality.
I would remind the gentle readers that we do have a TF DC system in place. It is called the user [URL="https://www.mersenneforum.org/showthread.php?t=19014"]TJAOI[/URL].
[QUOTE=ixfd64;502467]I'm running mfaktc on a borrowed MSI gaming laptop with a GeForce GTX 1070 video card. There was a moment today when mfaktc got stuck on a class. However, this wasn't a complete freeze because mfaktc processed the next set of classes when I pressed Ctrl + C. I had to press Ctrl + C a few more times (with classes being processed each time) before mfaktc correctly exited. Has anyone encountered this issue?
It's worth mentioning that the cursor on this laptop sometimes freezes for a short time. I have no idea if these issues are related.[/QUOTE] I just noticed that this issue occurs when I select text inside the mfaktc window. The program resumes as soon as I cancel the selection. This is 100% reproducible as far as I could tell. The screen freezing issue likely isn't related to mfaktc as it did go away after a Windows update.
[QUOTE=ixfd64;504712]I just noticed that this issue occurs when I select text inside the mfaktc window. The program resumes as soon as I cancel the selection. This is 100% reproducible as far as I could tell.[/QUOTE]That's a Windows thing, nothing to do with mfaktc.
Any program running in a command window will be suspended while you're marking/selecting text. You can also suspend a program with the Pause/Break key, and resume by hitting any (other?) key.
[QUOTE=James Heinrich;504378]No, that's not a common usecase. Even with my work at the large-exponent-low-bits above 1000M where exponent (not class) runtimes are ~1s there's no reason to have mammoth worktodo. For convenience I fetch/submit 1000 exponents at a time (~25kB worktodo) but even if it was an offline system I would seriously consider writing some script that would slice off 100-1000 assignments at a time from a separate bulk assignment file when worktodo.txt runs empty (and at the same time archive off results.txt since that also gets large quickly).
I'm curious what [i]Neutron3529[/i] is doing to get 2MB worktodo... is he my mystery TF'er who reserves [url=https://www.mersenne.ca/tf1G.php#assign_count_by_age]a million exponents at a time[/url] and then takes 2 weeks to submit the results?[/QUOTE] I use [url]https://www.mersenne.org/report_factoring_effort/[/url] to get a worktodo.txt file, so it is quite easy to end up with anything from a 0 KB worktodo.txt to a ~2.7 MB one.
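The bulk-file approach James and TheJudger describe, slicing off a chunk of assignments whenever worktodo.txt runs empty, could look something like this minimal sketch (the bulk filename and the 1,000-line chunk size are assumptions, not anything mfaktc itself defines):

```python
import os

def refill_worktodo(bulk="worktodo_bulk.txt", worktodo="worktodo.txt",
                    chunk=1000):
    """If the worktodo file is missing or empty, move the next `chunk`
    lines from the bulk assignment file into it.
    Returns True if a refill happened."""
    if os.path.exists(worktodo) and os.path.getsize(worktodo) > 0:
        return False            # mfaktc still has work queued
    with open(bulk) as f:
        lines = f.readlines()
    head, rest = lines[:chunk], lines[chunk:]
    if not head:
        return False            # bulk file exhausted
    with open(worktodo, "w") as f:
        f.writelines(head)      # hand the next slice to mfaktc
    with open(bulk, "w") as f:
        f.writelines(rest)      # keep only the unused remainder
    return True
```

A wrapper would call this in a loop, launching mfaktc after each successful refill and also archiving results.txt, as suggested above, since that grows just as quickly.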
[QUOTE=GP2;504422]Even if you use a RAM drive?[/QUOTE]
I use ImDisk to create a RAM drive. The problem is that whenever worktodo.txt is rewritten, every byte of the file must be written again, which wastes a lot of I/O. I think the reason it is so slow is that I keep Prime95 running, which may slow down the rewriting of the worktodo file.