Incidentally, I'm somewhat annoyed with the quantity of these numbers that have shown up, especially since it looks like someone's just submitting 306 dd PRPs in sequence, which are almost at the point of being less trouble to generate than they are to download.
[QUOTE=pakaran;465606]Incidentally, I'm somewhat annoyed with the quantity of these numbers that have shown up, especially since it looks like someone's just submitting 306 dd PRPs in sequence, which are almost at the point of being less trouble to generate than they are to download.[/QUOTE]
I would wonder if the certificate needs the original index to be accepted by the db, though. I am tied up ATM, but am looking for a way to help without duplicating your effort. If I can find a way to d/l only a set well above the lower bounds where you are working, I will add some help. Unfortunately, the db's limit of 10k doesn't help in this. By the time I would grab 10k and try to work only the upper half, you would be finished with the lower 4096 and downloading the same ones I'm working on. Incidentally, I'm just becoming able to assign several machines back to the certificate task once again.

OTOH, since I can have more timely, direct control over my machines, would you rather I work the 306 dds and you take the others? For now, I'll search for a way to d/l well above the lower bounds and see what I come up with. Anyone else, feel free to add to this...
[QUOTE=EdH;465617]I would wonder if the certificate needs the original index to be accepted by the db, though. I am tied up ATM, but am looking for a way to help without duplicating your effort. If I can find a way to d/l only a set well above the lower bounds where you are working, I will add some help. Unfortunately, the db's limit of 10k doesn't help in this. By the time I would grab 10k and try to work only the upper half, you would be finished with the lower 4096 and downloading the same ones I'm working. Incidentally, I'm just becoming able to assign several machines back to the certificate task once again.
OTOH, since I can have a more timely direct control over my machines, would you rather I work the 306 dds and you take the others? For now, I'll search for a way to d/l well above the lower bounds and see what I come up with. Anyone else, feel free to add to this...[/QUOTE] If you would like to take the 306's, and I take everything over (through however high I go before more appear), that's fine. I suppose you could use FactorDB's multiple directories option to get (e.g.) 3k numbers to distribute to each of 3 machines, if they were of comparable speed. As such, I'm uploading all the work I completed in the 306 dd numbers, and will start working upwards from 307.
[QUOTE=pakaran;465618]If you would like to take the 306's, and I take everything over (through however high I go before more appear), that's fine. I suppose you could use FactorDB's multiple directories option to get (e.g.) 3k numbers to distribute to each of 3 machines, if they were of comparable speed.
As such, I'm uploading all the work I completed in the 306 dd numbers, and will start working upwards from 307.[/QUOTE] Sounds like a workable idea. I have started ten machines, for now, with 1000 each. They aren't identical, but until bedtime I'll keep up with them. I do need to modify a couple of scripts to make the assignments and collections easier. Then again, maybe I'll just collect a little less often than with my other setup. I suppose a bright side might be how much of an increase in proven primes there will be from this. But, unfortunately, I see no real reason someone did this other than it being a "fun" experiment.
Alright, I just uploaded my work to date. I went by increasing input file size (a decent proxy for number size, and easy to sort by in Primo), and got as far as several 2k digit numbers. I'm now re-downloading and will restart from 307, picking up several dozen new numbers that have appeared. Should get back into the area where I was pushing hard earlier (2200-2500) in a day or less, eyeballing the list.
I'm leaving for you the few numbers of 300-305 dd inclusive, currently six of them. I assume it'll be easy enough to do your next sweep in a way that catches them, but I'm happy to do them if you like.
[QUOTE=pakaran;465703]...
I'm leaving for you the few numbers of 300-305 dd inclusive, currently six of them. I assume it'll be easy enough to do your next sweep in a way that catches them, but I'm happy to do them if you like.[/QUOTE] I have been limiting my runs to just 306, so you can grab anything else around them. Several of my fastest machines have gone on strike for some reason. I just got one of the better ones back online, but have three others that are misbehaving. Sticking with ten for now, I have at least swapped a much faster machine for my slowest of the ten. It's still going to take some time to burn through all these 306 entries. This is where the 10000 limit is too restrictive.
I know what you mean, and because Primo is a GUI program, you can't just set up a master-worker architecture where one machine does all the fetching (and maybe stores already-assigned numbers in a hash table or something, to discard future duplicates).
:(
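The master-worker idea pakaran describes could be sketched roughly as below. This is illustrative only: `fetch_batch` is a hypothetical stand-in for whatever would pull candidates from factordb, and the point is just the hash-set dedup of already-assigned numbers.

```python
# Sketch of the master-worker dedup idea: the master tracks every
# candidate it has already handed out, so repeat downloads are discarded.
# fetch_batch() is a made-up placeholder, not a real factordb call.

def fetch_batch():
    # Hypothetical: would download a batch of PRP candidates from the db.
    # Note the deliberate duplicate, which the master should discard.
    return ["id_100", "id_101", "id_100", "id_102"]

class Master:
    def __init__(self):
        self.assigned = set()  # hash table of already-assigned candidates

    def next_work(self, n):
        """Return up to n candidates that have not been handed out yet."""
        fresh = []
        for cand in fetch_batch():
            if cand not in self.assigned:
                self.assigned.add(cand)
                fresh.append(cand)
            if len(fresh) == n:
                break
        return fresh

m = Master()
print(m.next_work(10))  # duplicates within the fetch are discarded
print(m.next_work(10))  # a repeat fetch yields nothing new
```

With GUI-only Primo the worker side still needs something like xdotool, as Ed describes later in the thread; the sketch only covers the fetch/dedup half.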
Maybe the 10k limit should be raised to 100k. EdH would then be able to download all the 306-digit entries at once, and you (pakaran) would be able to run a higher set of 100k certs (maybe 307 digits) without having to worry about work collisions.
One other possible workload split would be to do a full 10k download, then have one person pull the "bottom" x% and the other the "top" x% based on filename (which is based on factordb id #). I used to do that when there wasn't much below the 3k dd limit: I would pull 1000 primes and do them in batches. That way I felt I was processing the primes that had been waiting the longest.
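schickel's bottom/top split could look like the sketch below. The filenames here are invented examples; the assumption, per the post, is that the downloaded filenames encode the factordb id, so a plain sort orders candidates by how long they have been waiting.

```python
# Sketch of schickel's split: sort downloaded certificate inputs by
# filename (assumed to encode the factordb id, lowest id = oldest),
# then give one worker the bottom half and another the top half.
# Filenames are made-up examples.

files = ["prp_10234.in", "prp_10001.in", "prp_10987.in", "prp_10500.in"]

ordered = sorted(files)          # ids of equal width sort correctly as strings
mid = len(ordered) // 2
bottom, top = ordered[:mid], ordered[mid:]

print(bottom)  # longest-waiting candidates (lowest ids)
print(top)     # newest candidates (highest ids)
```

A plain string sort only matches id order while the ids have the same number of digits; a real script might extract the numeric part and sort on that instead.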
[QUOTE=pakaran;465773]I know what you mean, and because Primo is GUI, you can't just set up a master-worker architecture where one machine does all the fetching (and maybe stores already-assigned numbers in a hash table or something, to discard future duplicates).
:([/QUOTE]Actually, I have automated the Primo runs using xdotool in a script on each machine. The sticking point is that I can't figure out the db call to d/l 10k 306-dd candidates with a 12-way split.

[QUOTE=Stargate38;465780]Maybe the 10k limit should be raised to 100k. EdH would then be able to download all the 306-digit entries at once, and you (pakaran) would be able to run a higher set of 100k certs (maybe 307 digits) without having to worry about work collisions.[/QUOTE]I think if you looked a little closer, you would find about 470k 306-dd entries and no 307s. With me running 306 and Pakaran running elsewhere, the only collisions currently occurring are probably from the random pulls.

[QUOTE=schickel;465783]One other possible workload split would be to do a full 10k download, then have one person pull the "bottom" x% and the other the "top" x% based on filename (which is based on factordb id #); I used to do that when there wasn't much below the 3k dd limit: I would pull 1000 primes and do them in batches. That way I felt I was processing the primes that had been waiting the longest.[/QUOTE]The splitting feature of the db would be my choice for an easier split, and that's what I use to spread the candidates between my current machines. My machines are running 10k in less than an hour. I'm able to interface more frequently than Pakaran, so I can get more runs per day. But I'm not available all the time, so there are longer periods, including not being able to link with my machines while I was out today due to a laptop application failure.
[QUOTE=EdH;465804]... The sticking point is that I can't figure out the db call to d/l 10k, 306dd candidates with a 12 way split...[/QUOTE]I think I have figured this one out. I might be able to totally automate this...
[code]http://www.factordb.com/primobatch.php?digits=306&files=10000&parts=12&start=Generate%20zip[/code]
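Building on that URL, the request could be assembled in a script roughly like this. The parameter names and values come straight from Ed's post; whether primobatch.php serves the zip to a non-browser client is an assumption, so the actual download is left commented out and only the URL is constructed here.

```python
# Sketch: build the primobatch.php request Ed found, using the exact
# parameters from his post (306-digit candidates, 10k files, 12-way split).
# Fetching is left commented out since factordb's behaviour for scripted
# clients is an assumption, not something the thread confirms.
from urllib.parse import urlencode, quote

params = {
    "digits": 306,        # candidate size in decimal digits
    "files": 10000,       # how many candidates to pull (the db's limit)
    "parts": 12,          # split the zip 12 ways, one part per machine
    "start": "Generate zip",
}
# quote_via=quote encodes the space as %20, matching the URL in the post
url = "http://www.factordb.com/primobatch.php?" + urlencode(params, quote_via=quote)
print(url)

# import urllib.request
# urllib.request.urlretrieve(url, "batch.zip")  # hypothetical download step
```

From there, each of the 12 parts could be pushed to one machine, with the xdotool wrapper feeding the inputs to Primo.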