New PRPnet drive discussion
[QUOTE=Lennart;265838]85287*2^1890011+1 is Prime
:fusion:[/QUOTE]
Nice! :tu: It's about time we had another one on port 1300. And this prime is the project's 3rd largest, to boot! (It would be the second largest except for Ian's recent S12 proof yesterday.)

[B]Admin edit: Started new thread related to a new PRPnet drive discussion. See the sieving status [URL="http://www.mersenneforum.org/showpost.php?p=266860&postcount=60"]here[/URL].[/B]

Meanwhile, we're nearing n=2M on this search, which is the end of what we currently have loaded into the server. Personally, I would like to see the current effort continue past 2M--we've been hauling in a good catch of big primes so far, and particularly as NPLB's primary efforts pass n=1M, plain old megabits grow increasingly less remarkable, so it's nice to have an opportunity to find bigger primes in an automated team-drive setting.

We would be picking up five additional k's at 2M: 23451 and 60849 for Sierp. even-n, 9267 and 32247 for Sierp. odd-n, and 39687 for Riesel odd-n. These k's are all already at n=2M and currently unreserved. |
mdettweiler,
I disagree with continuing on this base, for the following reasons/questions:
1) There are numerous PRPnet servers already testing base 2.
2) I have seen a desire from rogue, vmod, and others for a PRPnet server for the 1k's.
3) What were your thoughts on the next range, n=2M-3M?
4) How many primes do you expect for the next range, even with the additional 5 k's?
5) How many candidates do you expect for the next range?
a) Is it on par with the last set, ~65K?
b) Is there a sieve file ready to go?
6) The heaviest-weight k's are currently still being tested. Are the other 5 k's heavier?
7) Tests are getting longer, so please look at how many GHz-years the next range would take.
I may be misunderstanding the intent of port 1300. CRUS mods, please clarify: what is the intention for port 1300? |
[QUOTE=Mathew;265875]mdettweiler,
I disagree with continuing on this base, for the following reasons/questions:
1) There are numerous PRPnet servers already testing base 2.
2) I have seen a desire from rogue, vmod, and others for a PRPnet server for the 1k's.
3) What were your thoughts on the next range, n=2M-3M?
4) How many primes do you expect for the next range, even with the additional 5 k's?
5) How many candidates do you expect for the next range?
a) Is it on par with the last set, ~65K?
b) Is there a sieve file ready to go?
6) The heaviest-weight k's are currently still being tested. Are the other 5 k's heavier?
7) Tests are getting longer, so please look at how many GHz-years the next range would take.
I may be misunderstanding the intent of port 1300. CRUS mods, please clarify: what is the intention for port 1300?[/QUOTE]
Hmm, I see what you mean. The idea we had in mind for port 1300 was to do longish-term efforts on bases with a handful of k's remaining; other efforts, such as the 1k drive, would get their own servers. (I was thinking 1100 for the 1k drive, to capitalize on the mnemonic connection of the second digit "1" with 1-k conjectures.) That said, there may well be other, better directions for port 1300 within its particular area of focus. I mainly threw out the idea of continuing the current drive because it seemed somewhat popular and had been a little neglected prior to the current stint on port 1300; I will admit that I did not do much hard number crunching as to exactly what it would involve. :smile:
To answer your questions (keeping in mind that I haven't run any hard numbers on a lot of these):
1) Indeed, good point. My thought in this respect was that it still wasn't redundant, since (as far as I know) there are no other public servers doing work in the 2M-3M range; base 2 efforts have sometimes been considered preferential at CRUS due to speed advantages, though I suppose that isn't really a big deal now with the latest PFGW and LLR. 
2) We definitely have not forgotten about the 1-k drive! :smile: It has unfortunately gotten pushed to the side a little with me being pressed for time over the last several months (since I did technically take responsibility for the development of this effort at one point back when we were discussing it), but it's definitely planned to happen eventually. The plan is that it will happen alongside whatever we're doing in port 1300.
3) I'm guessing that 2M-3M would take us quite a while to do in any event; even if Ian and Lennart chipped in for parts of the job like they did for n<2M, it might take us well into next year. My thought was that it would be a long-term project that we could slowly chip away at until an eventual proof.
4) As I mentioned, I haven't run any numbers on this; the primes will of course grow scarcer as n increases, but given the number of k's, their weights, etc., my gut estimate would be one or two primes in the whole range.
5) Jean's sieve file that we're using for the current range goes up to n=16M and is sieved to p=50T. Thinking about this just now, 50T does sound rather low for n>2M, so we probably would need more sieving. I hadn't considered this initially; this would be a major point against continuing the effort right away.
6) I'm not sure of the weights, and I don't have the respective sieve files on hand broken down by k, so I would need to do some fiddling to figure this out.
Now that I think about the idea in more depth, I'm beginning to agree with you that we should hold off on continuing; the files will need to be sieved much deeper before we can start LLRing n>2M. |
I like the idea of adding the additional k's for these base 2 conjectures and continuing with n>2M. They are very popular, far more popular than previous efforts on this drive, and several of these conjectures have a decent chance of proof over the next few years, which corresponds well with the intent of the drive...that is to test bases <= 32 with just a few k's remaining. Previous efforts have frequently only had 1 or at most 2 people working on them. But as Max said, we are definitely not sieved far enough for n>2M and as Mathew said, there are many base 2 efforts out there on many different projects.
Therefore let's take suggestions on what we could do next on this drive. Whatever we choose, we will likely need to stop this drive and either begin or continue sieving the chosen effort before starting the drive again. Please post your suggestions here, keeping in mind that the intent of the drive is to test bases <= 32 with just a few k's remaining, preferably bases that have already been tested to n>=100K. Alternatively, we could open it up to bases <= 100, but if we do that, I still think it would be best to stick to bases with few k's remaining. |
[QUOTE=gd_barnes;265927]Please post your suggestions here, keeping in mind that the intent of the drive is to test bases <= 32 with just a few k's remaining, preferably bases that have already been tested to n>=100K. Alternatively, we could open it up to bases <= 100, but if we do that, I still think it would be best to stick to bases with few k's remaining.[/QUOTE]
Since port 1300 has been primarily base 2, it should stay that way. IMO, it should have all remaining k's for all conjectures with bases that are powers of 2, minus those that are reserved by outside projects, e.g. SoB and TRP. |
[QUOTE=rogue;265942]Since port 1300 has been primarily base 2, it should stay that way. IMO, it should have all remaining k's for all conjectures with bases that are powers of 2, minus those that are reserved by outside projects, e.g. SoB and TRP.[/QUOTE]
The current effort is the first time that we've used port 1300 for base 2. We've mainly used it for bases in the 20s & 30s with few k's remaining. Doing all powers of 2 would mean that we would have somewhere between 60 and 80 k's loaded in the server (SWAG estimate without looking). It would also require a large amount of upfront sieving. Lennart, Serge, Ian, and others, do you have any thoughts on the matter? |
[QUOTE]Lennart, Serge, Ian, and others, do you have any thoughts on the matter? [/QUOTE]I'd have 2 servers. Keep the current one for the base 2 stuff and load it with whatever base 2 work is out there that is sieved high enough. You can then sieve while that is running.
The second server should have whatever we have sieve files for and let it run as a first come, first processed scenario. It doesn't have to be just 1kers, but whatever is sieved high enough (leave the base 6 and 16 stuff the way they are, manual reservations). I would limit it to any base with 5 or fewer k's remaining. Again, while those are running, sieve other stuff.
I have a bunch of 1kers sieved to 500B for testing to n=200K, but that may not be optimal for others. It is for me. I sieve to lower levels because I usually have 10 to 15 cores PRPing that stuff. I would be willing to put those 10 to 15 cores on this second server. I currently have 16 cores on the disabled list. |
[QUOTE=gd_barnes;265968]Doing all powers of 2 would mean that we would have somewhere between 60 and 80 k's loaded in the server (SWAG estimate without looking). It would also require a large amount of upfront sieving.[/QUOTE]
I hadn't looked, but you are correct, there are quite a few of them. I would be interested in helping with some of the sieving. Fortunately all of the power of 2 bases could be done with two sieve files after normalizing to base 2. |
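To illustrate the normalization rogue mentions: a candidate k*b^n±1 with b = 2^m is literally the base-2 candidate k*2^(m*n)±1, so one Riesel (-1) and one Sierpinski (+1) base-2 sieve file can cover every power-of-2 base. A minimal sketch; the function name and example values are illustrative only, not project code:

```python
# Sketch of normalizing a power-of-2 base candidate to base 2:
# k*b^n +/- 1 with b = 2^m equals k*2^(m*n) +/- 1, so k is
# unchanged and only the exponent is rescaled.

def to_base2(k, b, n):
    """Map a base-b candidate (b a power of 2) to its base-2 form."""
    m = b.bit_length() - 1
    if b != 1 << m:
        raise ValueError("base must be a power of 2")
    return k, 2, m * n

# A base-16 candidate at n=250000 is the same number as a base-2
# candidate at n=1000000:
print(to_base2(5, 16, 250_000))  # -> (5, 2, 1000000)
```

Since k never changes, this also shows why a combined file stays small: merging bases only merges n-values, never multiplies the k list. |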
MyDogBuster's second-server idea is the one I would like to participate in.
It falls in line with what I thought port 1300 was for. |
I am working on k=9519 which is closing in on n=3M (can't tell for sure as I am away from my boxes for the next 3 weeks and they're not running.)
As I can put much more power on it manually, I'd like to keep it that way. If everybody agrees to put up a server for all base 2 stuff, I'll release it of course and find something else for those cores. |
I say start 1 or 2 more prpnet servers.
Add all bases with 1 k left on one; the second can be used for other bases. Upgrade PRPnet to 4.3.5. There are many more users on PRPnet now, and there's no problem switching to another server, so I think that will get much more work done. If you continue base 2 on 1300, we need a big sieving race on all the k's to take them to at least 5M.
Lennart |
Gary and Max, have you decided the direction you want to go with this?
|
Both of us are out of town right now and will be back early to mid next week. Based on previous suggestions, I think it makes sense to have 2 servers: one for base 2 like we are doing here, and one for bases (preferably < 100) with few (perhaps <= 5) k's remaining.
For the 1st option, I think it makes sense to continue with our current k's to n=2M and then add the additional k's that are already at n=2M once we get there, as Max initially suggested. For the 2nd option, we can take suggestions on what bases people would like to do. Both options will need sieving done before starting/continuing, so 2 different sieving drives will need to be set up. |
[QUOTE=gd_barnes;266402]Both of us are out of town right now and will be back early to mid next week. Based on previous suggestions, I think it makes sense to have 2 servers: one for base 2 like we are doing here, and one for bases (preferably < 100) with few (perhaps <= 5) k's remaining.
For the 1st option, I think it makes sense to continue with our current k's to n=2M and then add the additional k's that are already at n=2M once we get there, as Max initially suggested. For the 2nd option, we can take suggestions on what bases people would like to do. Both options will need sieving done before starting/continuing, so 2 different sieving drives will need to be set up.[/QUOTE] Is there any interest in putting base 16 into a server rather than continuing with the manual reservation/submission process? Is there any benefit in sieving k's from other power-of-2 bases together with base 2 k's? That is presuming that some of the base 2 k's haven't been sieved past n=2M. Would the intention be to use sr2sieve or tpsieve for base 2 sieving? You might draw some interest if GPUs could be used to help the sieving process. |
[QUOTE=rogue;266406]Is there any interest in putting base 16 into a server rather than continuing with the manual reservation/submission process?
Is there any benefit in sieving k's from other power-of-2 bases together with base 2 k's? That is presuming that some of the base 2 k's haven't been sieved past n=2M. Would the intention be to use sr2sieve or tpsieve for base 2 sieving? You might draw some interest if GPUs could be used to help the sieving process.[/QUOTE] We've had decent response on the manual drives for bases 6 and 16, so I don't see a reason to change that right now. Also, sieving and putting different powers-of-2 bases in a server is too difficult due to their very different search depths and # of k's remaining. Base 16 is already optimally sieved for n=250K-500K, which is the equivalent of n=1M-2M base 2. R256 is hardly sieved at all and is at n=75K, i.e. n=600K base 2. To the best of my knowledge, tpsieve is only good for contiguous k's. I believe it loses its benefit when there are wide gaps in the k's. So we would use sr2sieve. |
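As an aside, the base-2 equivalences quoted here follow from the identity k*(2^m)^n = k*2^(m*n): the equivalent base-2 depth is just m times the base-b depth. A quick sanity check of the quoted numbers (a sketch, not project tooling):

```python
# Check the quoted depth equivalences: depth n in base b = 2^m
# corresponds to depth m*n in base 2, since k*(2^m)^n = k*2^(m*n).

def base2_depth(n, b):
    m = b.bit_length() - 1
    assert b == 1 << m, "base must be a power of 2"
    return m * n

print(base2_depth(250_000, 16))  # -> 1000000 (base 16 n=250K ~ base 2 n=1M)
print(base2_depth(500_000, 16))  # -> 2000000 (base 16 n=500K ~ base 2 n=2M)
print(base2_depth(75_000, 256))  # -> 600000  (R256 n=75K ~ base 2 n=600K)
```
 |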
[QUOTE=gd_barnes;266408]We've had decent response on the manual drives for bases 6 and 16, so I don't see a reason to change that right now. Also, sieving and putting different powers-of-2 bases in a server is too difficult due to their very different search depths and # of k's remaining. Base 16 is already optimally sieved for n=250K-500K, which is the equivalent of n=1M-2M base 2. R256 is hardly sieved at all and is at n=75K, i.e. n=600K base 2.
To the best of my knowledge, tpsieve is only good for contiguous k's. I believe it loses its benefit when there are wide gaps in the k's. So we would use sr2sieve.[/QUOTE] Presuming that PRPNet would be used, the server could order by decimal length, which works well when mixing numerous bases. In any case, the important thing is to get the sieving sub-projects you've suggested started. I wasn't aware of that limitation with tpsieve, but presuming you are correct, that would be a problem. |
We are now taking suggestions on which bases to include in a new 2nd PRPnet server. Preferred are bases < 100 with <= ~5 k's remaining. It's probably best to stick with bases at a search depth of n<200K to start with.
I have not done a detailed analysis on whether this scope would be too narrow or broad. We can decide that as suggestions come in. We could tweak either the bases, k's remaining, or search depth to include a little more or less as necessary. |
[QUOTE=gd_barnes;266411]We are now taking suggestions on which bases to include in a new 2nd PRPnet server. Preferred are bases < 100 with <= ~5 k's remaining. It's probably best to stick with bases at a search depth of n<200K to start with.[/QUOTE]
I was hoping that you would consider conjectures with a search depth >= 200K. My reasoning is that people tend to reserve lower n ranges then abort conjectures when n gets too large because individual tests take too long. Would it make more sense to set up the second server for any conjectures (excluding those being handled by another project/drive) where n >= 200K? I'm fine if you disagree. I'm just asking the question. Presuming you don't change your mind I have no opinions regarding the conjectures to put into a second server. |
Gary,
Here is a quick snapshot of what I understand your criteria to be:

R (n≤200K, k≤5, b≤100):
61 - 4 k's
67 - 5 k's
70 - 3 k's

S (n≤200K, k≤5, b≤100):
70 - 5 k's
75 - 2 k's
100 - 5 k's
-----------------------------------------
Other things to look at: b≤32[SUP]2[/SUP]

R:
10[SUP]2[/SUP] - n=200K - 1k
28[SUP]2[/SUP] - n=100K - 2 k's
30[SUP]2[/SUP] - n=100K - 1k

S:
10[SUP]2[/SUP] - n=100K - 5 k's
26[SUP]2[/SUP] - n=150K - 1k
28[SUP]2[/SUP] - n=100K - 1k

There are 6 Riesel (159, 163, 173, 177, 181, 182) & 5 Sierpinski (118, 183, 185, 187, 189) 1k bases with b≤200, n≤200K; all have sieve files to at least 1.5T. MyDogBuster stated he usually sieves to only 500G. B133 has 5 k's, n≤100K, and a sieve file to 10T |
Nice list, Mathew. I say throw all that stuff in the new server. That will give us plenty of time to do more sieving. I'll sieve to whatever limit is agreed upon.
I would also include all bases up to b=200 that have 2, 3, 4, or 5 k's left and n<=200K. That would match the upper boundary of the 1kers you listed. I think a limit of b<=100 is a bit narrow. I have about 2 weeks remaining on the ck<10K bases yet to be started. I have them reserved and sieved, so it's just test time left. That will free up about 12 cores for this effort. |
[QUOTE=rogue;266413]I was hoping that you would consider conjectures with a search depth >= 200K. My reasoning is that people tend to reserve lower n ranges then abort conjectures when n gets too large because individual tests take too long. Would it make more sense to set up the second server for any conjectures (excluding those being handled by another project/drive) where n >= 200K?
I'm fine if you disagree. I'm just asking the question. Presuming you don't change your mind I have no opinions regarding the conjectures to put into a second server.[/QUOTE] The reason that I suggested excluding conjectures where n>200K is that there are very few of those, most of which are bases < 32, and if we have the server hand out tests by size, they will be reserved for a very long time before we ever do any testing on them if we start the server from n=50K or 100K. For instance, I wouldn't want to tie up bases like R22, R23, R26, R27, R31, S22, S26, S30, etc. waiting for higher bases like what Mathew suggested to get to n=250K or 300K or higher. Such efforts are too disparate to include with higher bases only searched to n=50K or 100K. Tell you what, let's see what we all decide with likely a subset of what Mathew suggested and run something like that up to about n=200K while still allowing people to manually reserve the smaller bases that are already searched to n>200K. Then we can potentially look at the smaller bases for a 2nd effort on this 2nd server, perhaps bringing those with <= 5 k's remaining up to n=500K-1M. What this will do is give the heavy hitters some big tests in base 2 with our current server and the rest of us some intermediate sized tests in the new server with a reasonable chance at a few proofs. Gary |
[QUOTE=gd_barnes;266453]The reason that I suggested excluding conjectures where n>200K is that there are very few of those, most of which are bases < 32, and if we have the server hand out tests by size, they will be reserved for a very long time before we ever do any testing on them if we start the server from n=50K or 100K. For instance, I wouldn't want to tie up bases like R22, R23, R26, R27, R31, S22, S26, S30, etc. waiting for higher bases like what Mathew suggested to get to n=250K or 300K or higher. Such efforts are too disparate to include with higher bases only searched to n=50K or 100K.
Tell you what, let's see what we all decide with likely a subset of what Mathew suggested and run something like that up to about n=200K while still allowing people to manually reserve the smaller bases that are already searched to n>200K. Then we can potentially look at the smaller bases for a 2nd effort on this 2nd server, perhaps bringing those with <= 5 k's remaining up to n=500K-1M. What this will do is give the heavy hitters some big tests in base 2 with our current server and the rest of us some intermediate sized tests in the new server with a reasonable chance at a few proofs. Gary[/QUOTE]
// sortoption= tells the server how to hand out candidates for testing. This
// is a comma delimited list of sort criteria. These are the available choices
// for the list (which is case-insensitive):
//    a - age, older candidates have higher priority
//    l - length, short candidates have higher priority
//    k - k, lower k have higher priority
//    b - b, lower b have higher priority
//    n - n, lower n have higher priority
//    c - c, lower c have higher priority
//
// When comparing to the previous version:
//    L is equivalent to l,a
//    A is equivalent to a,l
//    K is equivalent to b,k,n,c
//    N is equivalent to b,n,k,c
sortoption=l,a
-------------
-------------
// onekperclient= only applies to Sierpinski/Riesel type servers
// By setting this to 1, it will ensure that each client will work on
// a k/b/c and that other candidates for the same k/b/c will not be given
// to another client. Setting this to 1 will also set the sortoption
// to k,n,b,c which cannot be overridden.
onekperclient=0
Lennart |
[QUOTE=gd_barnes;266453]The reason that I suggested excluding conjectures where n>200K is that there are very few of those, most of which are bases < 32, and if we have the server hand out tests by size, they will be reserved for a very long time before we ever do any testing on them if we start the server from n=50K or 100K. For instance, I wouldn't want to tie up bases like R22, R23, R26, R27, R31, S22, S26, S30, etc. waiting for higher bases like what Mathew suggested to get to n=250K or 300K or higher. Such efforts are too disparate to include with higher bases only searched to n=50K or 100K.
Tell you what, let's see what we all decide with likely a subset of what Mathew suggested and run something like that up to about n=200K while still allowing people to manually reserve the smaller bases that are already searched to n>200K. Then we can potentially look at the smaller bases for a 2nd effort on this 2nd server, perhaps bringing those with <= 5 k's remaining up to n=500K-1M. What this will do is give the heavy hitters some big tests in base 2 with our current server and the rest of us some intermediate sized tests in the new server with a reasonable chance at a few proofs.[/QUOTE] Sounds good to me. It shouldn't take too long to get those bases to n=200000. As Lennart mentions, the server can hand tests out by decimal length (option l), which is good if you mix bases that are powers of one another, i.e. 2, 4, 16, etc. The server can also hand out tests by n, which (IMO) would be a more typical usage of the server when handling multiple bases. onekperclient is only helpful if there are many more k's than clients. Under other circumstances, clients wouldn't get work. If someone were to start a public server for a conjecture with a large number of remaining k's (R19 is a good example), then that option could probably be used. |
[QUOTE=rogue;266491]Sounds good to me. It shouldn't take too long to get those bases to n=200000.
As Lennart mentions, the server can hand tests out by decimal length (option l), which is good if you mix bases that are powers of one another, i.e. 2, 4, 16, etc. The server can also hand out tests by n, which (IMO) would be a more typical usage of the server when handling multiple bases. onekperclient is only helpful if there are many more k's than clients. Under other circumstances, clients wouldn't get work. If someone were to start a public server for a conjecture with a large number of remaining k's (R19 is a good example), then that option could probably be used.[/QUOTE] To make the initial effort not "too small", we can sieve n=100K-500K and test n=100K-250K, leaving the n=250K-500K portion of the sieve files for future use. That should still take quite a while because all of the bases are > 32 and there should be quite a few top-5000 primes in there. After that, we can look at testing some bases < 32 with 3-5 k's remaining for n=250K-500K and perhaps others with only 1-2 k's remaining for n=~400K/500K-1M. I'll be out of town until Weds. and am very busy on weekends, so will not be able to respond much more or start anything. Next Thurs. and Fri., we can finalize the bases and perhaps begin a sieving drive shortly after that. Gary |
FYI: I'm back from my trip as of about an hour ago. :smile:
[QUOTE=gd_barnes;266408]To the best of my knowledge, tpsieve is only good for contiguous k's. I believe it loses its benefit when there are wide gaps in the k's. So we would use sr2sieve.[/QUOTE] Yes, this is correct. tpsieve/ppsieve is of little use for conjecture searches unless some of their k's can be converted to base 2 k's within the range of another large sieve such as PrimeGrid's (which goes up to k=10,000, n=6M; I'm not sure any of our power-of-2 k's fit within this). As for what to load into the servers (port 1300 and one or more new servers), I don't have too much of a preference myself. I do like the idea of having two separate servers, one for base 2 as we're currently doing on port 1300, and one for assorted other bases with a small handful of k's remaining. As a suggestion, we may want to instead do the base 2 stuff in the new server, and use port 1300 for the assorted stuff; we could put the new server on port 1200 and thus capitalize on the mnemonic connection of base 2/port 1200 (like what I was thinking with port 1100 for the future 1-k drive). Since we are not likely to have any base 3 PRPnet efforts in the foreseeable future, 1300 would be a good choice for assorted bases. :smile: I could also set up a base 16 server on port 1600, as has been suggested; that said, I agree with Gary that it may be better to stick to manual reservations on that one. With port 1300, an additional server coming soon, and port 1100 for the future 1-k drive, we'll have three PRPnet servers running, which will in and of itself be spreading the project's PRPnet resources pretty thin. I'm not sure we'd be able to attract enough interest to maintain any more servers than that. I've noticed that there seems to be two pools of resources in the project, one for manual work and one for PRPnet; there is some overlap between them but it is not total. 
We thus need to be a little careful with moving already-successful manual efforts from manual to PRPnet, so as to not move them out of reach of the manual pool and instead into the PRPnet pool which will have quite plenty of work between the aforementioned three servers. Max :smile: |
I will continue sieves (S183,185,187,189) to 2T, and start R61 n=100K-500K.
|
[QUOTE=mdettweiler;266533]As for what to load into the servers (port 1300 and one or more new servers), I don't have too much of a preference myself. I do like the idea of having two separate servers, one for base 2 as we're currently doing on port 1300, and one for assorted other bases with a small handful of k's remaining. As a suggestion, we may want to instead do the base 2 stuff in the new server, and use port 1300 for the assorted stuff; we could put the new server on port 1200 and thus capitalize on the mnemonic connection of base 2/port 1200 (like what I was thinking with port 1100 for the future 1-k drive). Since we are not likely to have any base 3 PRPnet efforts in the foreseeable future, 1300 would be a good choice for assorted bases. :smile:[/QUOTE]
I think that having 3 servers would be overkill. If we were to do a 1k only PRPnet server, it would need to be after we finish this initial effort that we are talking about for a 2nd server. But at this point, we're talking about taking a bunch of bases < 100 (or maybe < 200) with <= 5 k's remaining to n=200K or n=250K and following that up with taking perhaps bases <= 32 to n=500K to 1M somewhere. IMHO port 1300 should be left as is with base 2 stuff to keep all score and primes together for the base on the display page. The mnemonics mean little to nothing on the servers. You can choose whatever port you want for our new server. |
I'm now thinking that we may want to expand the effort for the new server to include bases < 200 searched as high as n=200K already and search everything to n=250K. That would add quite a few 1k bases, which I think would be interesting to people. Many would only end up being searched from n=200K to 250K, but at least that would make things consistent. Taking bases with several k's remaining to n=250K while leaving 1kers at n=100K, 150K, or 200K would be inconsistent with, and a disservice to, our efforts to prove bases, so increasing the search depth of those 1kers would be a good thing, even if it's only from n=150K or 200K to n=250K.
Mathew, can you prepare a new list of all bases with the following conditions?
base <= 200
<= 5 k's remaining
n<250K search depth
(Note that I doubt there are any that are 200K<n<250K, but if there are, we'd want to give them a small nudge up to n=250K for consistency. I know there are many that are exactly at n=200K.)
The only disadvantage of this is that it may tie up some 1k bases for quite a while that are already searched to n=150K or 200K. If we get a ways into it and it seems as though we are "hoarding" bases without searching them for an extended timeframe, we can pull some out.
The above should be quite a few bases. We'll then use that list to pare down to a more reasonable number of bases. The idea will be to search them all to n=250K and then come back around with a 2nd effort to perhaps search bases <= 50 with <= 5 k's remaining to n=500K or higher.
Thanks!
Gary |
[QUOTE=gd_barnes;266567]I think that having 3 servers would be overkill. If we were to do a 1k only PRPnet server, it would need to be after we finish this initial effort that we are talking about for a 2nd server. But at this point, we're talking about taking a bunch of bases < 100 (or maybe < 200) with <= 5 k's remaining to n=200K or n=250K and following that up with taking perhaps bases <= 32 to n=500K to 1M somewhere.
IMHO port 1300 should be left as is with base 2 stuff to keep all score and primes together for the base on the display page. The mnemonics mean little to nothing on the servers. You can choose whatever port you want for our new server.[/QUOTE] Okay, sounds good. Note that what I was thinking regarding switching to a new port # for base 2 stuff was that it would actually stay in the same server, but I'd just reconfigure it to run on a different port. That is, I would transplant port 1300 onto port 1200 and create a new, blank port 1300 in its place. But, I see your point...the mnemonic coincidences are really quite unimportant in the grand scheme of things. It's not like it's [I]that[/I] hard to keep track of the contents of two (or at max three) servers. :rolleyes: |
[CODE]Riesel           Sierpinski
------           ----------
61  (100K) 4k    37  (200K) 3k
67  (100K) 5k    43  (200K) 1k
70  (100K) 3k    55  (200K) 4k
80  (200K) 3k    68  (200K) 2k
93  (200K) 1k    73  (200K) 2k
94  (200K) 1k    75  (100K) 2k
100 (200K) 1k    86  (200K) 1k
103 (100K) 3k    100 (100K) 5k
109 (200K) 1k    102 (100K) 3k
112 (150K) 3k    107 (100K) 4k
133 (100K) 2k    112 (150K) 2k
152 (200K) 1k    122 (200K) 1k
158 (100K) 3k    133 (100K) 3k
160 (200K) 1k    135 (50K)  5k
162 (50K)  5k    140 (100K) 2k
163 (100K) 1k    155 (200K) 1k
172 (50K)  5k    157 (100K) 3k
177 (100K) 1k    173 (200K) 1k
181 (100K) 1k    174 (200K) 1k
182 (100K) 1k    183 (150K) 1k
191 (100K) 2k    185 (100K) 1k
200 (100K) 2k    187 (100K) 1k
                 189 (100K) 1k
                 191 (50K)  4k[/CODE]Note: I ignored ones that are reserved |
Wow, well, that's a lot of bases. Do I hear any opinions on which of these bases we should (or should not) take up to n=250K in a new PRPnet server? It's not as bad as it looks, though, because many of the bases are already at n=200K.
|
[QUOTE=gd_barnes;266613]Wow, well, that's a lot of bases. Do I hear any opinions on which of these bases we should (or should not) take up to n=250K in a new PRPnet server? It's not as bad as it looks, though, because many of the bases are already at n=200K.[/QUOTE]
I suggest excluding those at 200K for the moment. That should cut the list almost in half. Once all are at 200K, you can decide what is next. |
[QUOTE=rogue;266619]I suggest excluding those at 200K for the moment. That should cut the list almost in half. Once all are at 200K, you can decide what is next.[/QUOTE]
OK, that's reasonable. The only problem is that it's still 29 bases...quite a few. Conversely, the problem if we cut it off at base 150 is that it would be only 14 bases. There are a lot of bases from 150 to 200. Hmm...how about we cut it off at base 180? That would leave 20 bases...about perfect as far as I'm concerned.
So what does everyone think of including the 20 bases with the following parameters in the new PRPnet server?:
base <= 180
search depth n<200K
k's remaining <= 5
We would sieve them all to n=500K and test them to n=200K. This would be phase 1 of the new server. We could then decide whether to take the phase 1 bases higher (to maybe n=250K) or start on (perhaps) bases <= 50 and test them from (maybe) n=~200K to ~500K or higher, which might be called phase 2. With a couple of heavy hitters like Ian or Lennart on phase 1 fairly consistently, it would not be a real long drive. We would then move on to the real time-consuming stuff in phase 2. In effect, phase 1 would be a good "set up" for phase 2, where many big top-5000 primes are found.
Gary |
[QUOTE]So what does everyone think of including the 20 bases with the following parameters in the new PRPnet server?:
[/QUOTE]I don't see why we need phases. I liked Mathew's last list. I would not be in favor of cutting it down. I also believe sieving it past the stated limit of the drive, n=250K, would be a waste of time, especially if we find a prime. |
If you ask me, just take Mathew's last list and feed it into the server up to n=250k. It's probably enough work to keep us busy for quite some time, but that is not a bad thing.
|
Gary, I think that this discussion on the future of port 1300 (and the new port) should be put into another thread.
|
[QUOTE=rogue;266694]Gary, I think that this discussion on the future of port 1300 (and the new port) should be put into another thread.[/QUOTE]
We now have several opinions, ranging from making it a smaller effort to making it a larger one, so it will be a little while before we decide. Any other thoughts from anyone? It has been my intent to create a separate thread for this discussion; I'll do that on Weds. |
Mathew's list. :tu::tu::tu:
Lennart :smile: |
I'm sieving R191 and R200 100K-250K to 2T
(From Mathew's list) |
I think we have 4 votes (Mathew, Ian, Peter, and Lennart) to take Mathew's list, i.e. all bases <= 200 with <= 5 k's remaining, to n=250K. I'd prefer a little smaller effort but that's OK with me. It's a large effort but certainly one that we can do.
The next question is how much to sieve for each base. I like Ian's suggestion better than my own earlier one (sieving everything to n=500K). If we follow his, we should keep in mind that sieving only n=200K-250K is very inefficient due to such a small n-max/n-min ratio. It's best to sieve at least a 2x ratio, so here is what I would suggest:

Bases currently at n=50K or 100K: sieve n=50K-250K or 100K-250K
Bases currently at n=150K: sieve n=150K-300K
Bases currently at n=200K: sieve n=200K-500K

I suggest a large range for the n=200K bases because a large majority of those have only 1 or 2 k's remaining and, given their already-high search depth, we'll need a big range to have a decent chance of a proof down the line after this server has completed its effort. That would involve much deeper sieving for those, and we would not need to load them into the server right away. In other words, we can start by sieving just the bases at n=50K or 100K, which won't take as long due to the smaller sieve range.

Thoughts?

Gary |
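As a quick sanity check on the "at least a 2x ratio" guideline above, here is a small Python sketch. The ranges are the ones proposed in this post; the 2.0 threshold is the rule of thumb being discussed, not a measured optimum.

```python
# Check the sieve-range guideline from the post: fixed-k sieving is most
# efficient when the n-max/n-min ratio of the sieved range is at least ~2x.

def nmax_nmin_ratio(n_min: int, n_max: int) -> float:
    """Ratio of sieve-range endpoints (rule of thumb: want >= 2.0)."""
    return n_max / n_min

# Proposed ranges from the post, plus the rejected narrow option.
proposals = {
    "bases at n=50K  (50K-250K)":  (50_000, 250_000),
    "bases at n=100K (100K-250K)": (100_000, 250_000),
    "bases at n=150K (150K-300K)": (150_000, 300_000),
    "bases at n=200K (200K-500K)": (200_000, 500_000),
    "rejected: 200K-250K only":    (200_000, 250_000),
}

for label, (lo, hi) in proposals.items():
    r = nmax_nmin_ratio(lo, hi)
    print(f"{label:30s} ratio {r:.2f}x {'ok' if r >= 2.0 else 'too small'}")
```

All four proposed ranges meet the 2x guideline (2.00x-5.00x), while the rejected n=200K-250K option comes out at 1.25x.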
Makes sense to me. Now all we need is a limit for each range and I'm useless when it comes to those.
When you go to create the thread for this drive, you should include 2 tables up front. The normal primes found table and a sieving status table. I think we can handle everything in one thread. |
[QUOTE=MyDogBuster;266746]Makes sense to me. Now all we need is a limit for each range and I'm useless when it comes to those.
When you go to create the thread for this drive, you should include 2 tables up front. The normal primes found table and a sieving status table. I think we can handle everything in one thread.[/QUOTE]
I would say: sieve everything to P=5T to start with, and then we'll determine the optimum depth for each base from there. For such a high n-range on such high bases, I can guarantee that the optimum will be P>5T for all of them. (I've usually had optimums in the P=2T-3T range just for n=50K-100K, although that is for 20-50 k's remaining.) Likely it will be P>10T for all of them, but I'd prefer not to over-sieve any of them.

Sieving will be no small task on this! We'll likely need a fair amount of Lennart's massive resources for it, especially for the bases that we'll be sieving for n=200K-500K.

I'll have to take stock of the sieve files that Mathew has sent me in the last 2-3 days. Anything before that is uploaded on the reservations pages. I'll probably get those uploaded later on Monday. |
Someone correct me on this one. Optimal sieve depth is usually calculated using 1 core sieving to find the depth for 1 core testing. If we have say 20 cores testing, wouldn't the optimal sieve depth be 20 times smaller because you can run 20 times more tests in the same time you could run 1 test?
|
[QUOTE=MyDogBuster;266748]Someone correct me on this one. Optimal sieve depth is usually calculated using 1 core sieving to find the depth for 1 core testing. If we have say 20 cores testing, wouldn't the optimal sieve depth be 20 times smaller?[/QUOTE]
Only if you're sieving on one core and testing on 20 cores. If you're sieving and testing with approximately the same number of cores, it balances out the same as if you're using just 1 core for both. Note that this assumes you're only minimizing wall-clock time; to minimize CPU time, you need to calculate optimal depth as if you're both sieving and testing on one core, regardless of how many are doing each. This makes for a more efficient use of resources in the long run, even if it does extend the wall-clock time a bit. |
[QUOTE=MyDogBuster;266748]Someone correct me on this one. Optimal sieve depth is usually calculated using 1 core sieving to find the depth for 1 core testing. If we have say 20 cores testing, wouldn't the optimal sieve depth be 20 times smaller because you can run 20 times more tests in the same time you could run 1 test?[/QUOTE]
In case what Max said wasn't clear, I thought I would elaborate. The answer to your question is a clear no in all circumstances. Even if you have 20 cores testing and 1 core sieving, it's still more efficient to sieve to the true optimum depth all of the time (and possibly use a percentage of your cores on others' sieved files; see below).

Why? Because sieving is only 5-10% of total test effort. If you are, as you said, only sieving everything to P=500G for testing to high limits, you're wasting quite a bit of testing time. In other words, if you wanted to sieve and test all of the time, you should at all times use 1-2 cores sieving while 20 cores are testing, which means you'd spend 5-10% of your total resources sieving. Many people make the mistake of under-sieving because they are in a hurry to find primes.

All of that said, even if sieving were 50% of total test effort, you'd still be better off using that one sieving core all of the time to sieve to the optimum depth and ONE other core to test all of the time on your own sieved files (i.e. 50% of total CPU effort)...leaving the remaining 18 cores to test what others have already sieved. That is why we have many pre-sieved efforts here. Many people don't have good sievers or they just don't like to sieve, while others like Lennart and Mathew like to sieve a lot. I'm kind of in the middle: I like to sieve for a while on 1-3 quads (I have 6 decent sieving quads), but it gets old after a period of time.

I hope this puts it in logical perspective for you.

Edit: For any team drive such as this, we always want to minimize total CPU time because there are very many varied interests here and there is always someone to do what others don't want to do. |
You guys are never going to convince me that sieving to 5T for testing to 200K is saving me time.
I've done plenty of testing on the 1kers to 200K. It takes me 1 day to sieve a base to 500B using 1 core. On an average-weight base I get about 4000 tests to run. It would take me another 9 days to get it to 5T. My average test time for 100K to 200K is 1.6 hours. That's 180 tests per day for 12 cores, so that's 22 days to test the entire 4000 tests.

By going to 5T on the sieve, I eliminated another 360 tests from the 4000. It would take me 2 days to test those 360; in other words, I spent 9 more days eliminating something I could have tested in 2 days. Where's the savings? Sure, if I had waited the 10 days for the sieve to finish, it would have taken me only 20 days to fully test the 3640 tests, instead of 22 days. Assuming I use the 13 cores serially (sieve a base, test a base), it takes me 23 days to do it my way and 30 days to do it your way. Again, what am I missing here?

I believe the problem is that the sieving programs are skewing the times it takes to eliminate a test. Thousands of tests are eliminated up front and very few on the back end. What we really need to know is how long it is taking to eliminate a test on the back end only. I've seen sieves report that the average elimination time per test is 10 minutes, but the program hasn't eliminated anything in over an hour or two. |
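For what it's worth, Ian's wall-clock day counts check out as stated; a small Python sketch of the same arithmetic (all figures are his, taken directly from the post):

```python
# Ian's arithmetic: 12 testing cores, 1 sieving core, 1.6 h per test,
# 4000 candidates left at P=500G, 360 more removed by sieving on to 5T.
tests_at_500G    = 4000
tests_per_day    = 12 * 24 / 1.6   # ~180 tests/day across 12 cores
extra_sieve_days = 9               # 500G -> 5T on one core
tests_removed    = 360             # candidates killed by 500G -> 5T

# "His way": sieve 1 day to 500G, then test everything.
days_his_way  = 1 + tests_at_500G / tests_per_day
# "Deep way" (serial): sieve 10 days to 5T, then test the survivors.
days_deep_way = (1 + extra_sieve_days) + (tests_at_500G - tests_removed) / tests_per_day

print(round(days_his_way), round(days_deep_way))   # ~23 vs ~30 days wall clock
```

The catch, as the replies point out, is that this counts wall-clock days with the cores used serially; it ignores sieving concurrently with testing and ignores total CPU-hours spent, which is where the deeper sieve wins.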
I really hope the sieves don't end at n=200K!
That would be a waste of time; the sieves should go to at least n=1M.

Can you give me 6 bases so I can start sieving? I don't know which bases have a sieve file or not.

Lennart |
I have started sieving
r67 r70 r103 r133 n 100k-1M Lennart |
I have started sieving r158 & r162
r158 n 100k-1M r162 n 50k-1M Lennart |
[QUOTE=MyDogBuster;266761]I believe the problem is that the sieving programs are skewing the times it takes to eliminate a test. Thousands of tests are eliminated up front and very few on the back end. What we really need to know is how long it is taking to eliminate a test on the back end only. I've seen sieves report that the average elimination time per test is 10 minutes, but the program hasn't eliminated anything in over a hour or 2.[/QUOTE]
The sieving programs (unfortunately) report the average time per factor over all factors removed. So if you started two days ago, it will average over those two days. There are a few ways around this to determine the current rate, but the easiest for me was to hack srsieve and sr2sieve so that they report the average removal time for the most recent 30 factors or so. The only caveat is that it doesn't checkpoint that information, so if the sieve is stopped and restarted, it starts with a factor count of 0 for that calculation (the total factor count is still correct).

As to what you are missing, it is the fact that you are sieving on one core while PRP testing on multiple cores. If you were to split up sieving across all of your cores (a range of 5e11 per core), then it would take you one day to sieve to 6.5e12 and eliminate two days of PRP testing. In other words, it would take you 21 days (with this method) instead of 23 days to complete that range.

The easier thing to do (rather than switching all of your cores between PRPing and sieving) is to dedicate one core to sieving and sieve your next range to optimal depth while the others are PRPing your current range. What I do (and I presume others do something similar), since I also have multiple cores (and am using PRPNet), is stop PRP testing on a core a few weeks before the range is done and begin sieving my next reservation. Once sieving completes, I load it into a server, then switch that core back to PRP testing.

Note that I do this with two PRPNet servers running and use a 50:50 split between the two in the clients. It takes slightly longer to complete the older reservation, but with single-k conjectures I want to avoid the possibility of idle clients if a prime were to be found quickly. |
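The "most recent 30 factors" idea rogue describes can be sketched in a few lines. This is only an illustration of the calculation, not the actual srsieve/sr2sieve patch; the class name and the steady 600-second factor timestamps are made up for the example.

```python
# Sliding-window removal rate: average the time per factor over only the
# most recent `window` factors, instead of over the whole run.
from collections import deque

class RecentRemovalRate:
    def __init__(self, window: int = 30):
        self.times = deque(maxlen=window)   # timestamps of recent factors only

    def record(self, timestamp: float) -> None:
        """Call whenever the sieve reports a new factor."""
        self.times.append(timestamp)

    def seconds_per_factor(self) -> float:
        """Average removal time over the factors still in the window."""
        if len(self.times) < 2:
            return float("inf")             # not enough data yet
        span = self.times[-1] - self.times[0]
        return span / (len(self.times) - 1)

# Example: 40 factors arriving steadily every 600 s; only the last 30 count.
r = RecentRemovalRate(window=30)
for i in range(40):
    r.record(i * 600.0)
print(r.seconds_per_factor())   # 600.0
```

Because old timestamps fall out of the deque, a slowdown late in the sieve shows up immediately instead of being buried under the thousands of cheap early factors, which is exactly the skew Ian complained about.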
[QUOTE]The sieving programs (unfortunately) record the average time for all factors removed. So if you started two days ago, then it will average over those two days. There are a few ways around this to determining the current rate, but the easiest for me was to hack srsieve and sr2sieve so that they report the average time for removal for the most recent 30 factors or so. The only caveat is that it doesn't checkpoint that information so if the sieve is stopped and restarted, it would start with a factor count of 0 for that calculation (total factor count is still correct).[/QUOTE]
I have a simple solution for the average time problem. When I see a screen load of 1 minute summaries without an n being eliminated, I've sieved far enough. (Highly scientific). Switching between sieving and PRP'ing is also not fun. I usually dedicate a couple of cores strictly for sieving and the rest for testing. One other factor, especially on the CRUS stuff, is that once we find a prime, the rest of the tests are eliminated. I know we can't count on finding a prime, but they are found and significant tests are eliminated. |
[QUOTE=MyDogBuster;266789]I have a simple solution for the average time problem. When I see a screen load of 1 minute summaries without an n being eliminated, I've sieved far enough. (Highly scientific). Switching between sieving and PRP'ing is also not fun. I usually dedicate a couple of cores strictly for sieving and the rest for testing.
One other factor, especially on the CRUS stuff, is that once we find a prime, the rest of the tests are eliminated. I know we can't count on finding a prime, but they are found and significant tests are eliminated.[/QUOTE]
I've submitted my change to Geoff for sr5sieve/sr2sieve. Hopefully he will implement it. If not, I will provide source updates to those who are interested. I could even provide a Win32 or Win64 build, but I would prefer that it starts with Geoff.

On a typical search, sieving until the removal rate is about 2/3 of the longest PRP test is typically optimal (without a lot of time spent looking at FFT sizes, etc.). For CRUS, if about 30% of the k's are removed when going from n to 2n, then a removal rate of about 1/2 is better. |
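rogue's rule of thumb can be written down as a simple predicate. The 2/3 and 1/2 fractions are the ones from his post; the example timings are invented for illustration.

```python
# Stop-sieving rule of thumb: keep sieving while removing one more candidate
# is still cheaper than some fraction of the longest PRP test in the file
# (2/3 normally; ~1/2 on CRUS bases, where a found prime kills the rest of
# that k's tests anyway).

def keep_sieving(removal_secs: float, longest_prp_secs: float,
                 fraction: float = 2 / 3) -> bool:
    """True while one factor still costs less than `fraction` of a test."""
    return removal_secs < fraction * longest_prp_secs

# Example: longest PRP test in the file takes 3 hours (10800 s).
print(keep_sieving(5000, 10800))        # still worth sieving (5000 < 7200)
print(keep_sieving(8000, 10800))        # past the 2/3 point: stop and test
print(keep_sieving(6000, 10800, 0.5))   # with the CRUS 1/2 rule: stop
```

Combined with a recent-window removal rate (rather than the whole-run average), this gives a concrete stopping condition instead of eyeballing screens of one-minute summaries.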
[QUOTE=rogue;266791]I've submitted my change to Geoff for sr5sieve/sr2sieve. Hopefully he will implement it. If not, I will provide source updates to those who are interested. I could even provide a Win32 or Win64 build, but I would prefer that it starts with Geoff. ...[/QUOTE]
As long as you're chatting with Geoff, any chance you could ask about why our teslas give computing errors for Primegrid's ppssieve and gcwsieve? For example [code] LD_LIBRARY_PATH=/usr/local.hide/cuda/lib64:$LD_LIBRARY_PATH ./tpsieve-cuda-x86_64-linux -p 100000e9 -P 100001e9 -k 3 -K 9999 -n 2M -N 3M -c60 -t 4 tpsieve version cuda-0.2.3b (testing) Compiled Jun 25 2011 with GCC 4.1.2 20080704 (Red Hat 4.1.2-48) nstart=2000000, nstep=34 nstep changed to 32 tpsieve initialized: 3 <= k <= 9999, 2000000 <= n < 3000000 Sieve started: 100000000000000 <= p < 100001000000000 Thread 0 starting Thread 1 starting Thread 2 starting Thread 3 starting Detected GPU 1: Tesla C2070 Detected compute capability: 2.0 Detected 14 multiprocessors. Detected GPU 3: Tesla C2050 Detected compute capability: 2.0 Detected 14 multiprocessors. Detected GPU 0: Tesla C2070 Detected compute capability: 2.0 Detected 14 multiprocessors. Detected GPU 2: Tesla C2050 Detected compute capability: 2.0 Detected 14 multiprocessors. Computation Error: no candidates found for p=100000762311649. Thread 0 completed Waiting for threads to exit Thread 2 completed Thread 3 completed Thread 1 completed Sieve complete: 100000000000000 <= p < 100001000000000 Found 0 factors count=31019409,sum=0x284af85735fd771f Elapsed time: 15.42 sec. (0.02 init + 15.40 sieve) at 64939957 p/sec. Processor time: 5.10 sec. (0.02 init + 5.08 sieve) at 197051081 p/sec. Average processor utilization: 1.03 (init), 0.33 (sieve) [/code] and [code] > Does > LD_LIBRARY_PATH=/usr/local.hide/cuda/lib64:$LD_LIBRARY_PATH ./tpsieve-cuda-x86_64-linux -p 100000e9 -P 100001e9 -k 3 -K 9999 -n 2M > -N 3M -c60 -t 1 -d X > work? X = 1,2,3 or 4 // one card of the 4(?) in the box... No, doesn't look like it --- no error message, but no factors found either (much less 73). 
-Bruce* -t 1 -d 2 tpsieve version cuda-0.2.3b (testing) Compiled Jun 25 2011 with GCC 4.1.2 20080704 (Red Hat 4.1.2-48) nstart=2000000, nstep=34 nstep changed to 32 tpsieve initialized: 3 <= k <= 9999, 2000000 <= n < 3000000 Sieve started: 100000000000000 <= p < 100001000000000 Thread 0 starting Detected GPU 2: Tesla C2050 Detected compute capability: 2.0 Detected 14 multiprocessors. Thread 0 completed Waiting for threads to exit Sieve complete: 100000000000000 <= p < 100001000000000 Found 0 factors count=31019409,sum=0x284af85735fd771f Elapsed time: 58.21 sec. (0.02 init + 58.19 sieve) at 17186248 p/sec. Processor time: 4.33 sec. (0.02 init + 4.30 sieve) at 232341768 p/sec. Average processor utilization: 1.08 (init), 0.07 (sieve) --- LD_LIBRARY_PATH=/usr/local.hide/cuda/lib64:$LD_LIBRARY_PATH ./tpsieve-cuda-x86_64-linux -p 100000e9 -P 100001e9 -k 3 -K 9999 -n 2M -N 3M -c60 -t 1 -d 0 tpsieve version cuda-0.2.3b (testing) Compiled Jun 25 2011 with GCC 4.1.2 20080704 (Red Hat 4.1.2-48) nstart=2000000, nstep=34 nstep changed to 32 tpsieve initialized: 3 <= k <= 9999, 2000000 <= n < 3000000 Sieve started: 100000000000000 <= p < 100001000000000 Thread 0 starting Detected GPU 0: Tesla C2070 Detected compute capability: 2.0 Detected 14 multiprocessors. Thread 0 completed Waiting for threads to exit Sieve complete: 100000000000000 <= p < 100001000000000 Found 0 factors count=31019409,sum=0x284af85735fd771f Elapsed time: 58.25 sec. (0.02 init + 58.23 sieve) at 17175351 p/sec. Processor time: 4.48 sec. (0.02 init + 4.46 sieve) at 224418059 p/sec. Average processor utilization: 1.02 (init), 0.08 (sieve) [/code] Microcruncher* and I seem to be stuck. (And Greetings! after such a long time since the start of the Rogue-Garo Tables of Cunningham curve counts. Sorry for the off-topic inquiry. -bdodson) |
[QUOTE=bdodson;266797]As long as you're chatting with Geoff, any chance you could ask about
why our teslas give computing errors for Primegrid's ppssieve and gcwsieve? Microcruncher* and I seem to be stuck. (And Greetings! after such a long time since the start of the Rogue-Garo Tables of Cunningham curve counts. Sorry for the off-topic inquiry. -bdodson)[/QUOTE] I suggest that you contact him at g_w_reynolds at yahoo.co.nz. You would most likely get a faster response. |
[QUOTE=MyDogBuster;266761]You guys are never going to convince me that sieving to 5T for testing to 200K is saving me time.
I've done plenty of testing on the 1kers to 200K. It takes me 1 day to sieve a base to 500B using 1 core. On an average weight base I get about 4000 tests to run. It would take me another 9 days to get it to 5T. My average test time for 100K to 200K is 1.6 hours. That's 180 tests per day for 12 cores. That's 22 days to test the entire 4000 tests. By going to 5T on the sieve, I eliminated another 360 tests from the 4000. It would take me 2 days to test those 360, so in other words, I spent 9 more days eliminating something I could have tested in 2 days. Where's the savings? Sure, if I had waited the 10 days for the sieve to finish, it would have taken me only 20 days to fully test the 3640 tests, instead of 22 days. Assuming I use the 13 cores serially(sieve a base, test a base), it takes me 23 days to do it my way and 30 days to do it your way. Again, what am I missing here? I believe the problem is that the sieving programs are skewing the times it takes to eliminate a test. Thousands of tests are eliminated up front and very few on the back end. What we really need to know is how long it is taking to eliminate a test on the back end only. I've seen sieves report that the average elimination time per test is 10 minutes, but the program hasn't eliminated anything in over a hour or 2.[/QUOTE]
Ian, you need to take a closer look and watch the sieving programs eliminate the tests. The avg. removal time can be inaccurate and skewed towards the front end, BUT if you stop the program, remove the factors found, and then restart it with the sieve file at, say, about P=1T, you will get a much more accurate factor-removal time.

You should be running your one sieving core all the time while the other 20 cores are testing. If that is the case, on average, it should take close to the same amount of time to sieve as it does to test, and there will be CPU time savings. You are not thinking in terms of CPU time.

Think of it like this: if your sieving program eliminates 360 tests in one day, it will have saved 20 computers 18 tests each. Yes, those 20 computers could have done those 360 tests 20 times as fast (i.e. 18 tests each), but in the time that took them, they could have been working on something else. Here is what you are ending up with:

1. Let's say you sieve from P=500G to 1T, you eliminate 100 tests, and it takes you one day to do that (24 CPU hours).
2. Since that is clearly not far enough for sieving n=100K-200K, I think it's fair to say that each test would take at least 30 CPU minutes.
3. 100 tests * 30 CPU minutes = 50 CPU hours.

So...you have taken 50 CPU hours to test what could have been eliminated in 24 CPU hours of sieving. Those additional 26 CPU hours could have been testing something else in the meantime, perhaps an already-well-sieved base 6 file or something like that.

This is a real example of what is happening to you. Please trust me: sieving n=100K-200K on high bases to only P=500G wastes a lot of overall CPU time. Lennart and Curtis (at RPS) could quickly confirm this. Even if you have only 1 core sieving for every 20 cores testing, that's enough; you just have to be patient in getting started.

Edit: Mark also had an excellent example that looked at the inefficiency in a little different way.

Gary |
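Gary's worked example, redone as plain arithmetic (the figures are the assumed ones from his numbered list, not measurements):

```python
# CPU-hours view of the trade-off: sieving P=500G..1T takes one core a day
# (24 CPU-hours) and removes 100 candidates; each PRP test at this depth
# costs at least 30 CPU-minutes.
tests_removed    = 100
sieve_cpu_hours  = 24.0
test_cpu_minutes = 30.0

cpu_hours_if_tested = tests_removed * test_cpu_minutes / 60
saved = cpu_hours_if_tested - sieve_cpu_hours
print(cpu_hours_if_tested, saved)   # 50.0 CPU-hours of testing vs 24 of sieving: 26.0 saved
```

The number of cores never appears: splitting those 50 CPU-hours of testing across 20 machines changes the wall-clock time, not the total CPU cost, which is the quantity a team drive wants to minimize.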
[QUOTE=rogue;266791]I've submitted my change to Geoff for sr5sieve/sr2sieve. Hopefully he will implement it. If not, I will provide source updates to those who are interested. I could even provide a Win32 or Win64 build, but I would prefer that it starts with Geoff.
On a typical search sieving until the removal rate is about 2/3's of the longest PRP test is typically optimal (without a lot of time spent looking at FFT sizes, etc.). For CRUS, if about 30% of the k are removed when going from n to 2n, then a removal rate of about 1/2 is better.[/QUOTE] I've noticed that problem. Therefore when sieving, I stop at P=1T, remove factors, and start a new sieve. I then do it again at P=5T or 10T if doing a large sieve. Then the average removal rate is quite good. |
[QUOTE=Lennart;266770]I realy hope that the sieves not end at 200k !!!
Thats a waste of time. Sieve should at least go to n=1M Can you give me 6 bases so I can start sieving ? I don't know wich bases have a sievefile or not. Lennart[/QUOTE]
1. See the suggestions that I made. We are suggesting sieving either n=100K-250K, 150K-300K, or 200K-500K, depending on the current testing limit of the bases being sieved. We've gone back and forth on this quite a bit: I think Mark said n=500K on everything a long time ago, Ian just said to sieve to n=250K, and I made the suggestion that I just put forth. I suppose we could change it back to sieving everything to n=500K, but we will need to decide fairly quickly what is best.

2. In our (my) humble opinion, sieving to n=1M is a waste of time because:
(a) A prime has a large chance of making (part or all of) the rest of the file obsolete.
(b) The increasing speed of computer resources means that sieving anything that won't be tested within 2-3 years costs a lot more of today's resources than it would of tomorrow's. Just in the 4 years I've been prime searching, sr(x)sieve has more than doubled in speed (mainly 64-bit vs. 32-bit), LLR has gained about 25-30%, and PFGW has sped up 5 TIMES for non-power-of-2 bases. The latter has substantially brought down the amount of sieving needed for non-power-of-2 bases; for that reason, we had several large files that were well over-sieved.
(c) Many of these bases will not be tested to n>500K for many years. How many people are going to want to test base 200 to n=1M when there are plenty of smaller bases at smaller depths? Even with various team efforts, we'll be testing smaller bases to n=1M first. Several of the bases < 32 have been sieved to n=1M, but we are not testing those with this effort since we're only testing bases at n<=200K here.

3. I'll see if I can get our resident multi-base sieving guy, Mathew, to give us a list of everything that has been sieved.
I have been out of town and just do not have the time to dedicate to details. The files that have been sieved are on the right side of the reservations pages. The sieve depth in each of them should be accurate. Gary |
Mathew,
Can you provide a summary of all of the files that have been sieved and their sieve depths for your most recent list of bases <= 200 with <= 5 k's remaining searched to n<250K (that aren't reserved)? I appreciate it; I would do it myself if I had time. Gary |
[QUOTE=Lennart;266775]I have started sieving
r67 r70 r103 r133 n 100k-1M Lennart[/QUOTE] [QUOTE=Lennart;266781]I have started sieving r158 & r162 r158 n 100k-1M r162 n 50k-1M Lennart[/QUOTE] Lennart, Thanks for your enthusiasm for this! Can you hold off just for a little while until Mathew lists the sieve files that we already have? 2 of these bases already have files, although for a much smaller range. Thanks, Gary |
[Code]
Riesel [optimum sieve depth for testing to n=250K inside () at end of row]
------
 61 (100K) 4k   Sieve File 100K-1M to 28T (opt. 28T)*
 67 (100K) 5k   Sieve File 100K-1M to 27T (opt. 27T)*
 70 (100K) 3k   Sieve File 100K-1M to 20T (opt. 20T)*
 80 (200K) 3k   Sieve File 200K-1M to 17T (opt. 17T)*
 93 (200K) 1k   Sieve File 200K-1M to 22T (opt. 22T)*
 94 (200K) 1k   Sieve File 200K-1M to 16T (opt. 16T)*
100 (200K) 1k   Sieve File 200K-1M to 19T (opt. 19T)*
103 (100K) 2k   Sieve File 100K-1M to 24T (opt. 24T)*
109 (200K) 1k   Sieve File 200K-1M to 22T (opt. 22T)*
112 (150K) 3k   Sieve File 150K-1M to 42T (opt. 42T)*
123 (100K) 2k   Sieve File 100K-1M to 24T (opt. 24T)*
133 (100K) 2k   Sieve File 100K-1M to 27T (opt. 27T)*
152 (200K) 1k   Sieve File 200K-1M to 15T (opt. 15T)*
158 (100K) 3k   Sieve File 100K-1M to 22T (opt. 22T)*
160 (200K) 1k   Sieve File 200K-1M to 26T (opt. 26T)*
162 (50K) 5k    Sieve File 50K-1M to 58T (opt. 58T)*
163 (100K) 1k   Sieve File 100K-1M to 19T (opt. 19T)*
172 (50K) 5k    Sieve File 50K-1M to 41T (opt. 41T)*
173 (100K) 1k   Sieve File 100K-1M to 16T (opt. 16T)*
177 (100K) 1k   Sieve File 100K-1M to 16T (opt. 16T)*
181 (100K) 1k   Sieve File 100K-1M to 33T (opt. 33T)*
182 (100K) 1k   Sieve File 100K-1M to 17T (opt. 17T)*
191 (100K) 2k   Sieve File 100K-1M to 25T (opt. 25T)*
200 (100K) 2k   Sieve File 100K-1M to 47T (opt. 47T)*

Sierpinski [optimum sieve depth for testing to n=250K at end of row]
----------
 37 (200K) 3k   Sieve File 200K-1M to 15T (opt. 10T)*
 55 (200K) 4k   Sieve File 200K-1M to 24T (opt. 24T)*
 68 (200K) 2k   Sieve File 200K-1M to 20T (opt. 14T)*
 70 (100K) 5k   Sieve File 100K-1M to 53T (opt. 28T)*
 73 (200K) 2k   Sieve File 200K-1M to 21T (opt. 21T)*
 75 (100K) 2k   Sieve File 100K-1M to 15T (opt. 10T)*
 86 (200K) 1k   Sieve File 200K-1M to 15T (opt. 14T)*
100 (100K) 5k   Sieve File 100K-1M to 34T (opt. 34T)*
102 (100K) 3k   Sieve File 100K-1M to 23T (opt. 23T)*
107 (100K) 4k   Sieve File 100K-1M to 15T (opt. 14T)*
112 (150K) 2k   Sieve File 200K-1M to 28T (opt. 28T)*
118 (200K) 1k   Sieve File 200K-1M to 17T (opt. 17T)*
122 (200K) 1k   Sieve File 200K-1M to 15T (opt. 13T)*
133 (100K) 3k   Sieve File 100K-1M to 37T (opt. 37T)*
135 (50K) 5k    Sieve File 50K-1M to 38T (opt. 38T)*
140 (100K) 2k   Sieve File 100K-1M to 19T (opt. 19T)*
148 (150K) 1k   Sieve File 150K-1M to 30T (opt. 30T)*
155 (200K) 1k   Sieve File 200K-1M to 15T (opt. 14T)*
157 (100K) 3k   Sieve File 100K-1M to 27T (opt. 27T)*
165 (100K) 4k   Sieve File 100K-1M to 52T (opt. 52T)*
173 (200K) 1k   Sieve File 200K-1M to 24T (opt. 24T)*
174 (200K) 1k   Sieve File 200K-1M to 15T (opt. 15T)*
183 (150K) 1k   Sieve File 150K-1M to 40T (opt. 40T)*
185 (100K) 1k   Sieve File 100K-1M to 21T (opt. 21T)*
187 (100K) 1k   Sieve File 100K-1M to 33T (opt. 33T)*
189 (100K) 1k   Sieve File 100K-1M to 25T (opt. 25T)*
191 (50K) 4k    Sieve File 50K-1M to 46T (opt. 46T)*
[/Code]* - sieving complete

Like this?

Lennart, R133 and R162 already have sieve files. I think R133 is good where it is. However, extending R162 could not hurt. |
I'll do:

R67 R70 R103 R158 100K-1M
R133 200K-1M
R162 100K-1M

Is that ok?

Lennart |
[QUOTE=Lennart;266865]I'll do:

R67 R70 R103 R158 100K-1M
R133 200K-1M
R162 100K-1M

Is that ok?

Lennart[/QUOTE]
Lennart,

Yes, that is fine. To be even more helpful, for the bases that already have a sieve file, please combine them with yours when you reach the depth of the files already sieved. That should add very little additional time to the sieving.

How about this, though: would you like to sieve all bases for the new server to n=1M? I'm asking because I know you like sieving and have a lot of resources. If you decide to do that, what I would suggest is that you sieve them all to P=10T and send me the files. I'll then calculate an optimum depth for each one for breaking off testing at n=250K. I think P=10T would be deep enough for quite a few of the bases for testing to n=250K, although the optimum may be quite a bit higher since we are sieving a large n-range.

I'm just now back from my trip and need time to think through what the best course of action is. If you only choose to sieve part of the files to n=1M, I think I'm going to suggest that the rest of us sieve the remaining bases to n=500K and P=5T, and then I'll calculate an optimum depth at that point. This is all just seat-of-my-pants thinking right now. It'll probably be late tonight or on Thurs. before I can do a detailed look at things.

One thing that has crossed my mind is to also include some reserved bases for the n-range above what they are reserved to (after checking to make sure the individual does not plan to continue higher). That would probably be at most 5 more bases. I kind of hate to leave them out, searched only to n=~100K, just because someone happened to be testing them for n=~50K to ~100K when we started this effort.

Gary |
[QUOTE=Mathew;266860]
<snip> Like this? Lennart, R133,162 already have sieve files. I think R133 is good where it is. However extending R162 could not hurt.[/QUOTE] Awesome list! Thanks Mathew. |
I'm going to take R93 from 200K to 500K up to 5T
|
[QUOTE=gd_barnes;267050]Lennart,
Yes that is fine. To be even more helpful, for the bases that already have a sieve file, please combine them with yours when you reach the depth of the files already sieved. That should add very little additional time to the sieving.

How about this though: Would you like to sieve all bases for the new server to n=1M? I'm asking because I know you like sieving and have a lot of resources. If you decide to do that, what I would suggest is that you sieve them all to P=10T and send me the files. I'll then calculate an optimum depth on each one for breaking off testing at n=250K. I think P=10T for quite a few of the bases would be deep enough for testing to n=250K although it may be quite a bit higher since sieving a large n-range.

I'm just now back from my trip and need time to think through what the best course of action is. If you only choose to sieve part of the files to n=1M, I think I'm going to suggest that the rest of us sieve the remaning bases to n=500K and P=5T and then I'll calculate an optimum depth at that point. This is all just seat-of-my-pants thinking right now. It'll probably be late tonight or on Thurs. before I can do a detailed look at things.

One thing that has crossed my mind is to include some reserved bases also for the n-range above which they are reserved to (after checking to make sure the individual does not plan to continue higher). That would probably be at most 5 more bases. I kind of hate to leave them out and only searched to n=~100K just because someone happened to have been testing them for n=~50K to ~100K when we started this effort.

Gary[/QUOTE]
I take this first:

R172 (50K) 5k Sieve File 50K-100K to 1T
S135 (50K) 5k Sieve File 50K-100K to 1T
S37 (200K) 3k
R100 (200K) 1k
S100 (100K) 5k

I do them to n=1M and combine the existing sieve files. I will sieve them to 15T and send them to you.

I have 4 bases at 13T now.

Lennart |
[QUOTE=Lennart;267054]I take this first.
R172 (50K) 5k Sieve File 50K-100K to 1T
S135 (50K) 5k Sieve File 50K-100K to 1T
S37 (200K) 3k
R100 (200K) 1k
S100 (100K) 5k

I do them to n=1M and combine the existing sieve files. I will sieve them to 15T and send them to you.

I have 4 bases at 13T now.

Lennart[/QUOTE]
I now have several questions:

1. Are you still sieving all of the files in [URL="http://www.mersenneforum.org/showpost.php?p=266865&postcount=61"]this post[/URL] to n=1M? If so, like you are doing with the above 5 bases, I will assume that you will also combine your sieve files with the files already done.
2. If yes to #1, will you be sieving them to P=15T also?
3. After you are done with the current 11 bases, will you continue sieving 5 or 6 bases at a time like this?
4. If yes to #3, will they continue taking you only 2-3 days per 5-6 bases?
5. If yes to #4, would you care to do them all to P=15T?

I'm asking all of this because if the answers to #4 and #5 are yes, I will suggest that others just stop their efforts and leave the sieving to you. That will be a lot easier for you and me than coordinating with others on the same bases. It also means that all bases will be ready to go in ~2-1/2 weeks, which would be an excellent accomplishment.

I think what I will do for a little while is edit Mathew's post for sieving efforts past and present.

Edit: That is now up to date.

Gary |
Everyone, please check Mathew's post [URL="http://www.mersenneforum.org/showpost.php?p=266860&postcount=60"]here[/URL] to see if I have it updated correctly to include all current sieving efforts for the new server.
|
R200 sieved 100K-250K to 2T
File emailed |
[QUOTE=gd_barnes;267079]I now have several questions:
1. Are you still sieving all of the files in [URL="http://www.mersenneforum.org/showpost.php?p=266865&postcount=61"]this post[/URL] to n=1M? If so, like you are doing with the above 5 bases, I will assume that you will also combine the sieve files with files already done.
2. If yes to #1, will you be sieving them to P=15T also?
3. After you are done with the current 11 bases, will you continue sieving 5 or 6 bases at a time like this?
4. If yes to #3, will they continue taking you only 2-3 days to do 5-6 bases?
5. If yes to #4, would you care to do them all to P=15T?

I'm asking all of this because if the answer to #4 and #5 is yes, I will suggest that others just stop their efforts and leave the sieving to you. That will be a lot easier for you and me than coordinating with others on the same bases. It also means that all bases will be ready to go in ~2-1/2 weeks, which would be an excellent accomplishment. I think what I will do for a little while is edit Mathew's post for sieving efforts past and present. Edit: That is now up to date. Gary[/QUOTE]

1. Yes
2. Yes
3. Yes
4. Yes
5. It's ok to do them all. :smile:

Lennart |
Okay, I'm dropping my R93 and R191 sieves. Ya had to twist my arm to get that out. :max:
Table updated |
Taking
R80 (200K) 3k
R93 (200K) 1k
R94 (200K) 1k
S55 (200K) 4k

All for n=200K-1M.

Lennart |
Lennart has completed sieving R70, R103, and R133 for n=xxx-1M to P=15T. The sieving post has been updated for those and other reservations.
Note: Since the bases are now considered officially reserved by the new server, I will not be posting these on the reservations pages. I will soon update the reservations to include these bases as reserved by "PRPnet2". |
I've done a more detailed analysis of the bases to be included in the drive as well as some potential future bases that might be included.
The following bases have <= 5 k's remaining and are reserved to the depth shown:

R123 (reserved to n=150K, last status in Feb., PM sent)
R159 (reserved to n=200K, current status)
S49 (250K, current)
S87 (300K, current)
S118 (200K, current)
S148 (250K, last status in May, PM sent)

The following have a very good chance of having <= 5 k's remaining at n=100K:

R157 6k at n=50K (not reserved)
R187 6k at n=50K (not reserved)
S165 7k at n=35K (reserved to n=50K, last status in May, PM sent)

I think I will reserve and test the latter 3 bases to n=100K. There might be additional bases that could potentially reach 5 k's by n=100K that are not shown here, but I perceive their chances of that happening to be well under 50%.

Anyone who has a base reserved, please do not feel pressured to release or otherwise "hurry up" your reservation. We are only asking that statuses be provided every 2-3 months. The above is for future information only, for possible inclusion in the drive (if the search depth is at n<250K when released) if the drive is still going when the reservation is complete.

Gary |
I see that R173 with 1k remaining at n=100K was omitted from the list of bases for the drive. Therefore I am officially adding it. :smile:
|
Mathew has completed the following sieving:
S43 n=200K-500K to P=1.5T
R61 n=100K-250K to P=5T
S75 n=100K-250K to P=2T

Lennart, I decided to go ahead and continue putting links to sieve files done by others on the reservations pages so that you can continue from them or combine them with yours. |
I'll take these 4 (all Sierp.):
68 (200K) 2k
73 (200K) 2k
75 (100K) 2k Sieve File 100K-250K to 2T
86 (200K) 1k

Lennart |
I'll take S191.
Lennart |
General status:
Lennart has completed sieving 10 bases to P=15T. See the sieving post as to what has been done. Unfortunately, the optimum depth on some of them is going to be higher than that for testing to n=250K.

We would like to start testing the four n=50K bases first to stay in the natural order of things. I have received one of those files (R162) sieved to P=15T and have calculated its optimum sieve depth at P=55T. It is one of the higher-weight bases, so hopefully most won't be quite so high. Lennart and I are in communication about the best course of action at the moment.

If necessary, I may start a team sieving drive to bring some of the bases up to their optimum depth. It will be a lot of work. (Example: it would take 25 cores running full time ~4 days to sieve R162 from P=15T to 55T.)

Right now, if I had to give a seat-of-my-pants estimate, it would be that we could start testing at least one if not more of the n=50K bases in ~7 to 10 days and continue adding bases as we finish sieving. My preference would be to start with at least 2 bases.

Edit: Max has the server on port 1400 already set up, so we will be able to hit the ground running once the sieving for each base is complete.

Gary |
I have added the optimum sieve depth in the sieving post in parens at the end of each row for some bases that have been sieved to P=15T. I'm still working on a timing test for most of them.
It's turning out not to be as bad as I had expected after testing high-weight R162. That is an extremely high-weight base; most bases do not need to be sieved nearly as deep.

Edit: We now have optimums of P=10T, 27T, and 58T (R162 recalculated from 55T), so not so bad. My thinking now is that a team sieving drive will not be necessary and that we can start 1-2 of the n=50K bases by this weekend, with all four of them started within 10 days. Lennart has agreed to take all four n=50K bases to optimum as a first priority. I will likely do some P>15T sieving for other n>=100K bases. |
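For anyone wondering where numbers like P=10T, 27T, and 58T come from: the usual rule of thumb is to keep sieving while the expected time to eliminate one more candidate by sieving is less than the time for one primality test of a candidate at that n-level. Here is a rough sketch of that break-even search; the 1/ln(p) survivor heuristic is the standard one, but the throughput and timing numbers are made up for illustration, and this is not necessarily the exact calculation Gary ran:

```python
import math

def seconds_per_factor(p, candidates, p_per_second):
    """Heuristic: surviving candidates thin out roughly like 1/ln(p),
    so the factor-finding rate at depth p is about candidates/(p*ln p)
    per unit of p covered. Inverting gives the expected wall time to
    knock out one more candidate. p_per_second is the sieve's
    throughput (range of p covered per second) -- hardware-dependent."""
    return (p * math.log(p)) / (candidates * p_per_second)

def optimal_sieve_depth(prp_seconds, candidates, p_per_second, start_p=1e12):
    """Step p upward until removing a candidate by sieving costs more
    than simply PRP-testing it; that crossover is the optimum depth."""
    p = start_p
    while seconds_per_factor(p, candidates, p_per_second) < prp_seconds:
        p *= 1.05  # coarse geometric search is plenty for an estimate
    return p

# Illustrative numbers only: longer PRP tests justify deeper sieving.
shallow = optimal_sieve_depth(prp_seconds=600, candidates=60000, p_per_second=1e6)
deep = optimal_sieve_depth(prp_seconds=6000, candidates=60000, p_per_second=1e6)
assert deep > shallow
```

Plugging in a measured LLR time per test and the sieve's actual throughput for a given file gives the crossover depth. Note that a heavier-weight file (more surviving candidates) pushes the optimum deeper, which matches R162 coming out so much higher than the other bases.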
Status update
Status update:
Lennart expects to have two of our higher-weight n=50K bases, R162 & S135, optimally sieved by Sat. Additionally, 10 other lower-weight bases should be ready by that time. My current plan is to start the drive with at least those 12 bases on Sun. If all goes according to plan, I'll send all 12 files off to Max late Sat. night and he will get them loaded into the server sometime on Sun.

To follow our sieving status, see the sieving post [URL="http://www.mersenneforum.org/showpost.php?p=266860&postcount=60"]here[/URL]. I have also added a link to it in the first post of this thread.

Gary |
Adding S165 to the drive. It just dropped to 5 k's remaining at n=~55K. I'm still working on it to n=100K.
|
S173 reserved for PRPnet2 but not on the list???????
|
[QUOTE=MyDogBuster;267894]S173 reserved for PRPnet2 but not on the list???????[/QUOTE]
OK thanks...added. That's strange...we also originally missed having R173 on there per [URL]http://www.mersenneforum.org/showpost.php?p=267164&postcount=74[/URL]. I even show S173 as a base for the drive on my machine, which is why I had it reserved for the drive on the pages. I wonder if I inadvertently deleted it from the sieving post at some point. |
I have sent the first 14 bases as shown with asterisks in the sieving post to Max to be loaded into the PRPnet 2 server. The drive will begin later today! We'll continue adding bases as sieving is completed.
Port 1400 will be the new server. I will create a new formal thread for it similar to our regular drive threads late tonight or on Monday. :smile: |
Adding S118 to the drive. Tim just released it with 1k remaining at n=200K.
Adding R123 to the drive. It was released due to inactivity with 2k remaining at n=100K.

As previously mentioned, I added S165 to the drive. To finalize it for this drive, I have completed it to n=100K with 4 k's remaining.

There are now an even 50 bases in the drive! :-) |
Let the fun begin!
The server is now loaded and ready to go! All 15 bases (the 14 Gary mentioned two posts up, plus R133 which finished sieving today) are in the server from their respective testing depths up to n=250K.
Mathew and Lennart gave the new drive a rolling start, with 9 cores set up to start working on the server as soon as work became available. I may just have to move a quad over myself and join in the fun while we've got all these tiny tests at the beginning of the drive. :smile:

The all-in-one stats web page for this server can be found at: [url]http://noprimeleftbehind.net:1400/all.html[/url]

Some of you may have already noticed that port 1400 is running the latest PRPnet 4.3.5, versus 4.1.4 like the rest of our servers. I will eventually be upgrading all the servers, but in the meantime will be using this one server to get accustomed to the new version and work out any little bugs that may crop up.

The most visible changes in the new version are mainly cosmetic; some additional information has been added to the web pages, such as a column on the pending-tests table showing how long before a test expires (a useful converse to the "Age" column), and a few alternate angles on the existing user and team stats data. I'm not entirely sure why all the tables for each base are now headed with merely "Server Stats" instead of saying which base they're for; I'll fire off an email to Mark about that in a few minutes.

Max :smile: |
[QUOTE=mdettweiler;268051]Some of you may have already noticed that port 1400 is running the latest PRPnet 4.3.5, versus 4.1.4 like the rest of our servers. I will eventually be upgrading all the servers, but in the meantime will be using this one server to get accustomed to the new version and work out any little bugs that may crop up. The most visible changes to the new version are mainly cosmetic; some additional information has been added to the web pages, such as a column on the pending tests table showing how long before a test expires (a useful converse to the "Age" column), and a few alternate angles on the existing user and team stats data. I'm not entirely sure why all the tables for each base are now headed with merely "Server Stats" instead of saying which base they're for; I'll fire off an email to Mark about that in a few minutes.[/QUOTE]
I'll send a code fix later today. I must have bunged a code merge when implementing the CSS code.
[QUOTE=rogue;268067]I'll send a code fix later today. I must have bunged a code merge when implementing the css code.[/QUOTE]
Thanks Mark--got the fix just now and applied it to the server. It seems to have done the trick! :grin: |
OK, the server is running and the scope of work has been decided. Of course this is the moment when a totally different thought pops up :razz:
I am working on R48 and will take it to n=100K. As there will be more than 25 k's left at that point, I expect it to be discontinued. I just can't imagine there being many people willing to bring 25, 100, or 200 k's to higher n-levels. With a distributed effort it might be manageable, though the chance of proving such a conjecture is of course very small.

Still, when I look at the reservation pages I keep seeing those (rather) low-base conjectures that have been started but probably won't be worked on for years because they're too much for an individual. Wouldn't it be nice to get some of those to n=100K (or maybe higher)? I'd be willing to do some sieving. Just a thought. |
I just did one final check over all of the bases <= 200 and it appears that we missed one: S70. Therefore:
Adding S70 to the drive with 5 k's remaining at n=100K. |