2020-03-20, 16:27  #23
Sep 2011
Germany
3·797 Posts 
Quote:
yeah, BOINC and PrimeNet are two different pairs of shoes. The app is in a zip with the ini file; I can't define a userid there. The reservation is one file and will be split into different files, one task per user. All final results will be collected into one big file and reported back under one user. Last fiddled with by rebirther on 2020-03-20 at 16:27 

2020-03-20, 17:29  #24
Quasi Admin Thing
May 2005
911_{10} Posts 
Quote:
What kind of modifications would be necessary? I have just sent Reb a list of the bit levels to trial factor to, when shooting for a bit level +2 above optimal CPU trial factoring. 

2020-03-20, 18:41  #25
P90 years forever!
Aug 2002
Yeehaw, FL
2^{3}×863 Posts 
Quote:
As to bit levels, there are (at least) three ways to approach the problem.
1) Give the BOINC client knowledge of optimal bit levels for different exponents.
2) Let the secret web page worry about optimal levels; we just assign the exponent along with starting and desired ending bit levels.
3) Don't worry about optimal bit levels. Simply assign exponents that are likely to be PRP'ed over the next 180 days or year and have the client take the TF on the exponent up one bit level. This may do more or less than optimal based on available resources.
Last fiddled with by Prime95 on 2020-03-20 at 18:42 
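[Editor's sketch] Approach 1 amounts to shipping the client a lookup table of desired TF depths per exponent range. A minimal Python illustration follows; the break points and bit levels are placeholders for the sake of the example, not the real PrimeNet/GPU72 values:

```python
# Sketch of approach 1: the BOINC client carries a table of desired
# TF depths per exponent range.  These break points and bit levels
# are illustrative placeholders, NOT the real PrimeNet/GPU72 values.
OPTIMAL_TF_DEPTH = [
    (50_000_000, 74),   # exponents below 50M: TF to 2^74
    (100_000_000, 76),  # below 100M: TF to 2^76
    (200_000_000, 77),  # below 200M: TF to 2^77
]

def target_bits(exponent: int, extra: int = 0) -> int:
    """Desired ending TF bit level, optionally shifted by `extra`
    bits (e.g. extra=2 for the '+2 bits above optimal' policy)."""
    for limit, bits in OPTIMAL_TF_DEPTH:
        if exponent < limit:
            return bits + extra
    raise ValueError("exponent outside the table")

print(target_bits(102_000_000))     # -> 77 with this toy table
print(target_bits(102_000_000, 2))  # -> 79 under the +2 policy
```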

2020-03-20, 19:10  #26
If I May
"Chris Halsall"
Sep 2002
Barbados
29·311 Posts 
Quote:
Another important component of this whole scheme is the timeliness of completion. I don't know how the BOINC server <> client operations work, but the BOINC server should take responsibility for restarting and/or recycling work units given to the clients. As an example, if BOINC took "ownership" of the 102M range to take it up to 75 or 76 bits, it should be finished by the time the Cat 1 wavefront reaches there. To be clear, I would /really/ like to see this work. But it will be working beside GPU72, since the two systems are so very different. Heck, one day it might very well eclipse GPU72! 

2020-03-20, 19:10  #27
Quasi Admin Thing
May 2005
911 Posts 
Quote:
At 2: Really nice idea, which would also align with no. 1. At 3: I don't really like that, because part of going to BOINC is to get to optimal sieve depth and to try to clean up the "mess" of different ranges having had different amounts of trial factoring done on different exponents. I really like the "secret" webpage, made just for BOINC. What it might be worth designing such a "secret" webpage for would be to allow result files greater than 2 MB, to make sure that when Reb (or maybe me) reserves work, BOINC actually gets the smallest n available, and to remove the reservation limit of 1000 candidates on the "secret" webpage. Other than that, thanks for your positive attitude and willingness, and now let's hope that the initial testing is in fact as successful as hoped. Fingers crossed, but hopefully in a couple of weeks the world of GIMPS will have changed drastically for the better on the TF level 

2020-03-20, 19:21  #28
Quasi Admin Thing
May 2005
911 Posts 
Quote:
When BOINC assigns work to a client, the server sets a deadline. In the short term, where we really need to have the work returned in order to keep up with the work Ben Delo among others is doing, this can be a short deadline. The deadline is defined by the project, not the server, so in theory we can set a deadline of 1 day (if that makes sense). We may not in the beginning be able to keep up with the various categories, but if enough momentum is gained by BOINC (which I think it will be), then in a very short time (especially as long as people are hunting for badges), even category 3/4 (not sure 4 exists) will be optimally TF'ed (+2 bit) and far, far ahead of any wavefront of any category. No doubt you would like to see it work. And with the optimism expressed by Rebirther, I'm most confident that it will work. To get the most out of our resources we do in fact need to work together to ensure that BOINC does not step on the toes of GPU72 and vice versa, and I hope that we all, despite recent postings, can find that common ground. Stay safe and healthy 
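[Editor's note] In stock BOINC the deadline is the workunit's `delay_bound`, which the project sets at work-generation time, consistent with the post above. A hedged sketch of an input-template fragment follows (element layout from memory of the BOINC docs; the file name and flops estimate are hypothetical):

```xml
<!-- Hedged sketch of a BOINC workunit input template fragment.
     delay_bound is the deadline in seconds from dispatch;
     86400 s = the 1-day deadline suggested above. -->
<input_template>
    <file_info>
        <number>0</number>
    </file_info>
    <workunit>
        <file_ref>
            <file_number>0</file_number>
            <open_name>worktodo.txt</open_name> <!-- hypothetical name -->
        </file_ref>
        <rsc_fpops_est>1e15</rsc_fpops_est>     <!-- illustrative estimate -->
        <delay_bound>86400</delay_bound>        <!-- 1-day deadline -->
    </workunit>
</input_template>
```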

2020-03-22, 11:51  #29
Romulan Interpreter
Jun 2011
Thailand
5^{2}×7^{3} Posts 
About point 1, the optimum bit level depends on the hardware you have (according to James' tables, for example). When I get assignments from PrimeNet or GPU72, I know what hardware I run and I can specify the range and bit level to work on, to ensure optimum efficiency from the whole-project view (i.e. clear the most exponents per unit of time, by doing either TF, P-1, or LL/PRP). Well, assuming I know what I am doing, and assuming I am well intentioned. If I am still well intentioned but don't know what I am doing, I can "let GPU72 decide" for me what work is more important, but even this decreases the "efficiency" in the long term: doing what they serve me may decrease the number of exponents I can clear per unit of time with my hardware. Anyhow, I see what ranges are worked, etc. (won't go into details here).
How is this handled with the Boinkboink alternative? Can the user choose what type of "units" he gets to deflea? If you keep no information about the user's hardware or preferences, then one user may get units which are not optimum for his rig, and in the long term slow the project down instead of speeding it up. Of course, this will bring the army of BOINC users into the GIMPS project, but still, there are many things to clarify, including what type of work should be shared with them: can they do PRP/LL, or only factoring (TF/P-1)? And if they do LL/PRP, what happens when they find a prime? I believe that Chris is right here, and his "method" (of letting the users report to base by themselves) is sane^{(TM)}, but on the other hand, I agree that a BOINC addition to GIMPS would be nice... Of course, I don't fully know what BOINC can really do and how it works, so skip the moral please if I said something stupid, but I know they have a huge base of users, and many would like to get their hands on that base of users... Last fiddled with by LaurV on 2020-03-22 at 11:52 
2020-03-22, 12:15  #30
Quasi Admin Thing
May 2005
911 Posts 
Quote:
No decision has yet been taken on how to handle work in production mode. Of course there are different ways to handle this, but to be honest, when my old GPU did TF, it was 100% efficient on ALL candidates for <=75 bits. For the bits above that, it went down to 90% efficiency in terms of GHz-days/day. I think that it should be up to Reb how to handle the work the users get. I can see both pros and cons in going from the current bit level to +2 bits above the optimal bit level in one test range per workunit, but let's see what we end up doing. Most important is to attract new users and support to the TF effort 

2020-03-22, 13:42  #31
Romulan Interpreter
Jun 2011
Thailand
5^{2}·7^{3} Posts 
Quote:
Related to efficiency, many people get this wrong. It is not about how fast your GPU is TFing at 75 bits compared with how fast it is TFing at 77 bits. That comparison is of no value. It is about how fast you can clear exponents doing TF with that system (CPU+GPU), compared with how fast you can clear exponents in the same range when you do P-1 with the same system, and compared with how fast you can clear them with LL/PRP with the same system. For me, it takes about 6 minutes to TF one exponent to 76 bits in the 100M range (the actual front). A bit faster if I find factors, but on average, if I find a factor every ~76 runs (theoretical value; if you look at the GPU72 tables, I am on the "lucky" side, finding about 1 factor every 72 runs or so), then I can clear one exponent every ~8 hours. With the same system, I could run an LL test (on average) every 6-7 hours. Therefore to clear an exponent by LL+DC would take 12-14 hours. So, for my system, it is more efficient to do TF at 76 bits than to do LL, at the front of the 100M range. However, if I switch to 77 bits, the TF time would double, for about the same rate of finding factors (like 1 in 77 runs, theoretical value), and I would need 14-16 hours to clear one exponent by TF; therefore I would be less efficient with TF, because LL would still need only 12-14 hours to clear that exponent. Therefore, I frown any time Chris serves me assignments to 77 bits (well, in fact, these numbers were rounded to ease the example; at 77 bits TF, I am "on the edge", taking about the same time to find a factor as it would take to run an LL and a DC, on average). That is why we have these limits for bit levels, depending on the hardware you run. My cards, for example, are quite good at DP calculus (used for LL/PRP), but other (gaming) cards are more suitable for TF, and you could go much higher with the bit level, because they are not so good for LL/PRP (or completely futile at it). Last fiddled with by LaurV on 2020-03-22 at 13:43 
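[Editor's sketch] The break-even arithmetic in the post above can be written out in a few lines of Python. The only inputs are the standard heuristic that a Mersenne number has a factor between 2^b and 2^(b+1) with probability about 1/b (so roughly one factor per b runs at one bit level) and LaurV's rounded run times:

```python
# Sketch of the break-even arithmetic.  Heuristic: a factor lies
# between 2^b and 2^(b+1) with probability ~1/b, i.e. TF of one bit
# level yields about one factor per `bits` exponents tried.
# Run times are the rounded figures from the post.
def tf_hours_per_cleared_exponent(hours_per_run: float, bits: int) -> float:
    """Expected GPU-hours of TF at level `bits` per exponent eliminated."""
    return hours_per_run * bits  # ~1 factor per `bits` runs

tf76 = tf_hours_per_cleared_exponent(0.1, 76)  # 6 min/run at 75->76
tf77 = tf_hours_per_cleared_exponent(0.2, 77)  # run time doubles per bit
ll_plus_dc = 2 * 6.5                           # one LL (6-7 h) plus a double-check

print(f"TF to 76: {tf76:.1f} h/exponent")  # ~7.6 h: beats LL+DC (~13 h)
print(f"TF to 77: {tf77:.1f} h/exponent")  # ~15.4 h: loses to LL+DC
```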

2020-03-22, 14:23  #32
Jun 2003
11·421 Posts 
Quote:
The optimum bit level is the maximum bit level which can still prepare enough exponents for the LL testers. If that means normal + 10 bits, so be it. Obviously we shouldn't be "abusing" it; we should be good DC citizens and not do grossly inefficient calculations, so we should go with the current max bit level for the best hardware (assuming BOINC can keep up with LL demand). All these concerns about different hardware and relative efficiencies are unneeded complications. It is good to worry about all this /once/ the project gets off the ground and has a stable user base. But doing it now is just a recipe for analysis paralysis. 

2020-03-22, 15:02  #33
Quasi Admin Thing
May 2005
911 Posts 
Quote:
I'm sure a sustainable and stable user base will be the case, at least until many users start to max out on the badges. Let's see. The hope is that the production phase starts next week. My stand will still be to go from the current bit level up to +2 bits, simply because it will be no problem for the average GPU user and the best-case scenario for the high-end GPU user. Now everyone, let's get BOINC on its feet and see how many resources we can attract, before starting to care about a bit or 2 of extra trial factoring. Last fiddled with by KEP on 2020-03-22 at 15:02 
