[QUOTE=chalsall;339570]Try working in the scientific and/or engineering fields...
Emotion doesn't (or, at least, shouldn't) enter into it. Nor should BS. Although, admittedly, they often do... But argument is what it's all about! One should be comfortable with "debate" (and admitting when one is wrong, or doesn't know).... :smile:[/QUOTE] I have [I]no[/I] problem with arguing when the end result is beneficial/profitable. However, much of the fighting here (and I won't name names) is useless, gets more personal than it should, and is usually over stupid things. |
It may be a little beside the point of the argument, but the argument doesn't interest me half as much as the implication that the Stanford University research division running the folding@home project is either a complete sham or full of wankers spouting BS. I find this implication unusually surprising, and am curious why it exists.
|
[QUOTE=chalsall;339570]Try working in the scientific and/or engineering fields...
Emotion doesn't (or, at least, shouldn't) enter into it. Nor should BS. Although, admittedly, they often do... But argument is what it's all about! One should be comfortable with "debate" (and admitting when one is wrong, or doesn't know)....[/QUOTE] Why do I have to know anything about F@H's science to decide to apply my computing power to it? This is just like R. Silverman saying one should be fully versed in ECM material before asking questions about it.

Expecting a minimum knowledge requirement for a "click and be happy" project is moronic, NOT the other way around.

It is impossible for me to be [U]wrong[/U] about wanting to participate in F@H, since I stated nothing about the attributes of F@H whatsoever. What is there to be wrong about? Debating my worthiness to participate based on my knowledge is [U]not[/U] debate at all. It is called bullying, or one of its many synonyms.

In addition, there is "debate" in the casual sense: to help others see a different point of view or modify a conclusion. And then there is "debate" designed to make the [U]opponent[/U] look stupid or inept. I rarely participate in the opponent-level "debates" because there is nothing to be gained from them. Look at the endless Davieddy debates. Did that near-endless debate help the forum?
No, it did not, because [U]YOU[/U] lost BCP19 out of it. |
[QUOTE=Aramis Wyler;339588]It may be a little beside the point of the argument, but the argument doesn't interest me half as much as the implication that the Stanford University research division running the folding@home project is either a complete sham or full of wankers spouting BS. I find this implication unusually surprising, and am curious why it exists?[/QUOTE]
Since I never made disparaging remarks against F@H, who is your question to? |
Batalov. You said that folding@home was not full of wankers spouting BS, and I'm surprised he'd challenge a statement like that.
|
[QUOTE=swl551;339614]Why do I have to know anything about F@H's science to decide to apply my computing power to it. This is just like R. Silverman saying one should be fully versed on ECM material before asking questions about it.[/QUOTE]
I think you are mis-attributing comments. I never said you shouldn't join F@H. In fact, I've always said that people should be free to do whatever work they like with their own time, equipment and money. [QUOTE=swl551;339614]Look at the endless Daviddy debates. Did that near endless debate help the forum? No it did not because [U]YOU [/U]lost BCP19 out of it.[/QUOTE] Something I deeply regret. Although, that wasn't really a debate -- or even an argument. It was pointless trolling which I allowed to get out of control.... |
[QUOTE=Aramis Wyler;339617]Batalov. You said that folding@home was not full of wankers spouting BS, and I'm surprised he'd challenge a statement like that.[/QUOTE]
No, Serge was referring to swl551's motivations. Anyone who doesn't like the level of BS here can try lighting a candle instead of cursing the darkness... |
:unsure:
|
[QUOTE=Aramis Wyler;339617]Batalov. You said that folding@home was not full of wankers spouting BS, and I'm surprised he'd challenge a statement like that.[/QUOTE]
You are building a wrong comparison. If you compare FAH to GPU72 or GIMPS, none of the three projects has "wankers spouting BS". They all have methodological and technical problems. Keeping yourself blind to these problems is entirely your choice. Educating yourself about them is another choice. Nobody's twisting your arm.

Now, if you are building the contrast, apparently you are comparing the GPU72 [u]forum[/u] with the FAH [u]forum[/u]. Can you guarantee that on the FAH forum there are no "wankers spreading b.s."? This is a very weak challenge. I assure you that there most probably are. (Unless there is no forum? I could be wrong, easily. See the NFS @ Home forum. Clean as a whistle!)

If there are "wankers spreading b.s." on the FAH forum, would that be a good reason to immediately leave FAH? ...and go where? Human nature is the same anywhere. If there are "wankers spreading b.s." on mersenneforum, would that be a good reason to immediately leave GIMPS? If yes, why now? There were such beautiful entertainers around here before; take the colorful Don Blazys, for example. (There were many more before him, too.) One choice would have been to ban D.B. after ten posts (like he was banned from twenty other forums). But the fun, the beautiful rainbows? The dude was an artist!

Now, let's get back to the simple matter of "wankers spreading b.s." here. That is the phrase I highlighted initially. They usually get banned. OK, the system is not perfect and they are probably given more latitude than they deserve. But it is hypocritical to criticize "wankers spreading b.s." while having a dirty mouth yourself: [QUOTE=swl551;339382]wankers spreading b.s. All the time.[/QUOTE] [QUOTE=swl551;339567]Does he really give two poops about anything I have ever done.... The answer would be obvious.[/QUOTE] "A stunning example" of '[I]"debate" in the casual sense: To help others see a different point of view or modify a conclusion[/I]'.

Finally, "Debating my worthiness to participate based on my knowledge is [U]not[/U] debate at all." "Now listen to this." <- this is "telling what to do". "Have you thought about what the goals of the project are? What are its limitations?" <- these are questions, ahem, "[I]to help others see a different point of view[/I]". Have you assumed that your worthiness was challenged? You assumed wrongly.

Mersenne @ home never had "[I]wankers spreading b.s.[/I]" Or maybe it did. Maybe this is how someone like you would see my questions there: "Why would you run these (given 10 examples) LL tests, when they have known factors?" If inconvenient posts were deleted, then it was probably a perfect forum. So by extension, that must have been a perfect project to join. Sure. Many people did. NFS @ home has an almost empty forum. Must be a perfect project to join. (And why not, actually?) Never writing anything, or nipping all arguments in the bud by censorship, is a sure way to keep any forum crispy clean. It would also be a very boring forum. Mike doesn't like it boring. ;-) |
heh, heh. Look, I haven't left, and I don't think leaving is a good idea. I see you're defending the GIMPS forum there, and it makes me glad. I was just really surprised that when swl551 said that he was taking his toys and going to folding@home, where there were no wankers spouting BS, you didn't defend GIMPS; you said he wasn't qualified to make such a statement. Like maybe he was right, but so what, because he had no justification to think that Stanford was putting out reasonable research projects or that their forums weren't full of crap. I was just like "?! Ha!" until you said you had done some coding work in the biology field, and then I thought "well shit, maybe he has some insider knowledge there and the folding@home project IS a sham, or maybe he failed out of Stanford and has a grudge, or doesn't like the competition, or something." It was an interesting conversational twist.
I don't agree with swl here, though I'm not attacking him for his decisions any more than chalsall is. I just thought the counter-argument was more interesting than the argument itself, and was curious if there was something more there. |
I noticed this morning that my 480 was doing its TF work (to 74) on a 64.4M exponent. Surprised that we had gotten so far, I went and looked up the numbers, but there were still lots in the 63.1M range taken only to 73. So I went to the Get Work page for LLTF, set the low end to 63M, and the work projection said: Factor 64,439,xxx from 70 to 74. That was on "What Makes Sense". When I set it to "Lowest Exponent", though, it gave: Factor 63,142,xxx from 73 to 74. So that's about 1.3M lower.
My question regards what makes the most sense, because I had been under the impression that doing the lowest exponent made the most sense. |
[QUOTE=Aramis Wyler;339670]My question regards what makes the most sense, because I had been under the impression that doing the lowest exponent made the most sense.[/QUOTE]
I wondered about that too. I have been getting quite a variety of work via the "Let GPU 72 decide" option. |
[QUOTE=Aramis Wyler;339670]My question regards what makes the most sense, because I had been under the impression that doing the lowest exponent made the most sense.[/QUOTE]
That is generally true, in that Primenet hands out LL work sorted by P1 desc, Exponent (grouped within each 1M range). However, with George's permission, GPU72 holds all assignments between 63M and 65M which are not already TFed to at least 74; everything below 63M is already at 73 (almost, there are still seven to do). Combined with the fact that we're now over two months ahead of the "wave", it doesn't matter if we complete the work slightly out of order. Thus, "What Makes Sense" has been set to be "Lowest TF Level", at least for a while, in order to give those who choose that option the opportunity to find more factors than they would if they were simply going from 73 to 74. But, as always, anyone who wants to do something different (like Lowest Exponent, Highest TF, etc) can simply choose those options explicitly. |
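For anyone curious how the two orderings differ in practice, here's a minimal sketch (illustrative candidate tuples only, not GPU72's actual code), using the two exponents mentioned above:

```python
# Illustrative candidate tuples: (exponent, current TF bit level).
candidates = [
    (64_439_000, 70),  # 64.4M range, only TFed to 70
    (63_142_000, 73),  # 63.1M range, already at 73
    (63_900_000, 73),
]

# "Lowest Exponent": plain ascending exponent order.
lowest_exponent = sorted(candidates)

# "What Makes Sense" (currently set to "Lowest TF Level"): least-factored
# candidates first, so under-TFed exponents are brought up before
# strictly in-order work.
what_makes_sense = sorted(candidates, key=lambda c: (c[1], c[0]))

print(lowest_exponent[0])   # (63142000, 73)
print(what_makes_sense[0])  # (64439000, 70)
```

With the project comfortably ahead of the leading-edge LL "wave", the two orderings converge on the same set of completed work; only the short-term sequence differs.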
That's great, thanks for the info! :grin:
|
[QUOTE=chalsall;339694]But, as always, anyone who wants to do something different (like Lowest Exponent, Highest TF, etc) can simply choose those options explicitly.[/QUOTE]
I've got a really crackpot idea: Take expos (>62M and not yet assigned for LL) in ascending order, and TF them to 74. D |
[QUOTE=davieddy;339856]I've got a really crackpot idea:
Take expos (>62M and not yet assigned for LL) in ascending order, and TF them to 74.[/QUOTE] That doesn't Make Sense[SUP](TM)[/SUP] for the project overall, since Primenet still occasionally hands out assignments in the 62M range for LLing. But anyone interested is free to take candidates reserved for P-1'ing to TF to 74 in that range (and some do). What [I][U]might[/U][/I] Make Sense[SUP](TM)[/SUP] in a couple of months is to revisit 65M, and start taking it up to 75. The empirical data will tell us -- we haven't yet fully absorbed into the [URL="https://www.gpu72.com/reports/estimated_completion/primenet/"]automated analysis[/URL] the loss of BCP19. Discussion welcome. Hopefully from serious players. |
It's unfortunate several of our big players have quit the project. Perhaps we should do a recruiting drive or something?
|
[QUOTE=ixfd64;340475]It's unfortunate several of our big players have quit the project. Perhaps we should do a recruiting drive or something?[/QUOTE]
[URL="https://www.gpu72.com/reports/overall/graph/year/"]We haven't lost much in raw firepower though.[/URL] |
[QUOTE=ixfd64;340475]It's unfortunate several of our big players have quit the project.[/QUOTE]
They each had their reasons. And their efforts are greatly appreciated. To be perfectly honest, this kind of work can be very expensive. Hardware costs tend to pale in comparison to energy costs. [QUOTE=ixfd64;340475]Perhaps we should do a recruiting drive or something?[/QUOTE] I'm not sure that's needed. Several current players have added or upgraded "kit", we gain a few new players every month, and some who have "left" are still playing. But, of course, every cycle is welcome. |
My biggest iron (GTX 570) has been showing as "Repaired, Waiting for Shipping" at Gigabyte for about a week. I sure hope they get it in a box soon.
|
I have an idle GTX 580 lying on the shelf since I upgraded my main machine. What would be the cheapest off-the-shelf computer I could buy to plug it into? Don't care about CPU or memory.
|
[QUOTE=Chuck;340488]I have an idle GTX 580 lying on the shelf since I upgraded my main machine. What would be the cheapest off-the-shelf computer I could buy to plug it into? Don't care about CPU or memory.[/QUOTE]
How's this for starters: [URL]http://microcenter.com/product/406430/SX2855-UB12P_Desktop_Computer[/URL] ($249.99)

(Unfortunately, when copying the specs I noticed that it has a 220W PSU. The 580 wants at least that much all to itself, doesn't it?)

[QUOTE][B]Specifications[/B]
[B]General Information[/B] Model Number: SX2855-UB12P; Lifestyle: Home & Student; Color: Black
[B]Operating System[/B] Microsoft Windows 7 Home Premium (64-bit)
[B]Case & Motherboard[/B] Case Orientation: Vertical/Horizontal; North Bridge Chipset: Intel H61 Express
[B]Processor[/B] Intel Celeron G460 (1.8GHz), single-core; Smart Cache: 1.5MB; FSB: 5 GT/s; Socket: LGA 1155; CPUs Installed/Supported: 1/1; Features: Virtualization Technology, Hyper-Threading Technology, Idle States, Execute Disable Bit, Thermal Monitoring Technologies, Intel 64, Enhanced Intel SpeedStep Technology, Intel Fast Memory Access, Intel Flex Memory Access, Intel VT-x with Extended Page Tables (EPT)
[B]Memory[/B] 2GB total (1 x 2GB 240-pin DIMM); Slots: 2 (1 available); Maximum Supported: 8GB
[B]Hard Drive[/B] 1 x 500GB SATA
[B]Multimedia Drives[/B] 1 x 16x DVDRW SuperMulti Drive (Write Max: 16X DVDR, 6X DVD-RW, 8X DVD+RW, 8X DVDR DL, 5X DVD-RAM, 48X CD-R, 32X CD-RW; Read Max: 16X DVD-ROM, 40X CD-ROM)
[B]Display[/B] Not included
[B]Video[/B] Intel HD Graphics 2000; Display Interface: 1 x HDMI, 1 x VGA
[B]Audio[/B] High Definition Audio, 5.1 channels
[B]Communications[/B] Gigabit LAN (10/100/1000Mbps)
[B]Card Reader[/B] Multi-in-1 Digital Media Card Reader (CompactFlash I/II, MicroDrive, MMC, RS-MMC, Secure Digital, Memory Stick, Memory Stick PRO, xD)
[B]Front Panel Ports[/B] 3 x USB 2.0; 1 x headphone; 1 x microphone
[B]Back Panel Ports[/B] 2 x PS/2; 1 x VGA 15-pin; 1 x HDMI; 6 x USB 2.0; 1 x LAN RJ-45; 3 x Audio
[B]Expansion Bays[/B] External 5.25": 1 (0 available); Internal 3.5": 1 (0 available)
[B]Expansion Slots[/B] PCIe x16: 1 (1 available); PCIe x1: 1 (1 available)
[B]Keyboard[/B] Multimedia
[B]Mouse[/B] Optical
[B]Power[/B] 220 Watt power supply
[B]Physical Specifications[/B] Width 3.93"; Depth 12.4"; Height 10.43"; Weight 12 lbs.; Box Size 14" x 7" x 20"; Shipping Weight 19 lbs.
[B]What's in the Box[/B] GWSX2855-UB12P Desktop Computer, Gateway Multimedia Keyboard & Optical Mouse, AC Power Cord, Setup Poster, Registration / Limited Warranty Card, User's Guide, Hardware Reference Guide; Preloaded Software: Norton Internet Security 60-Day Trial, Cyberlink PowerDVD, Microsoft Office Starter 2010, Nero 10 Essentials
[B]Manufacturer Warranty[/B] Parts 1 Year; Labor 1 Year[/QUOTE]

More afterthoughts: Cooling would almost certainly not be adequate. :no: |
[QUOTE=kladner;340489]How's this for starters:
[URL]http://microcenter.com/product/406430/SX2855-UB12P_Desktop_Computer[/URL] ($249.99) (Unfortunately, when copying the specs I noticed that it has a 220w PSU. The 580 wants at least that much all to itself, doesn't it?) More afterthoughts: Cooling would not possibly be adequate. :no:[/QUOTE] It wouldn't have the power plugs for the 580 anyways. |
[QUOTE]It wouldn't have the power plugs for the 580 anyways. [/QUOTE]
Another good point. Nothing in this price range is likely to, either. Off the shelf makes it pretty difficult to host a high-powered GPU for a lot of reasons. |
[QUOTE=kladner;340493]Another good point. Nothing in this price range is likely to, either. Off the shelf makes it pretty difficult to host a high-powered GPU for a lot of reasons.[/QUOTE]
Or build your own! I bet it won't take a lot to put together a minimal PC with a good PSU... |
[QUOTE=kracker;340496]Or build your own! I bet it won't take a lot to get a minimum pc with a good psu...[/QUOTE]
I agree, but that's just how I am. |
[QUOTE=kracker;340491]It wouldn't have the power plugs for the 580 anyways.[/QUOTE]
It doesn't have to be that cheap... I guess I could try building my own for the first time. That way I could get a bigger PSU. What's the best place to get the components? |
[QUOTE=Chuck;340516]It doesn't have to be that cheap...I guess I could try building my own for the first time. That way could get a bigger PSU. Best place to go to get the components?[/QUOTE]
NewEgg is always worth a look, and Micro Center isn't bad either. [url]http://www.newegg.com/[/url] [url]http://www.microcenter.com/[/url] I used to get a lot of stuff from Newegg. Lately, I've found it worth paying sales taxes to buy from the local Microcenter because it is a short trip to return/exchange things, as opposed to shipping. Their policies on such things are quite liberal, too. You might check out Fry's, too. |
[QUOTE=kladner;340518]NewEgg is always worth a look, and Micro Center isn't bad either.
[url]http://www.newegg.com/[/url] [url]http://www.microcenter.com/[/url] I used to get a lot of stuff from Newegg. Lately, I've found it worth paying sales taxes to buy from the local Microcenter because it is a short trip to return/exchange things, as opposed to shipping. Their policies on such things are quite liberal, too. You might check out Fry's, too.[/QUOTE] Newegg +1 If you need help with building or selecting parts.... :smile: |
Check for motherboard/CPU/etc. bundle deals.
|
Depending on your budget, wait for a Haswell CPU due in three weeks.
|
[QUOTE=ixfd64;340475]It's unfortunate several of our big players have quit the project. Perhaps we should do a recruiting drive or something?[/QUOTE]
In the long run, this really doesn't matter that much. In the history of GIMPS, there have been quite a few enthusiasts who contributed a relatively large computing effort for a year or two and then moved on. There have also been those who contributed a more modest amount, but have been with the project almost since the beginning. I suspect that the former group is more motivated by the prospect of actually discovering a new record Mersenne prime personally, while the latter group is more motivated by the feeling of contributing to a group effort that has been successful again and again, but there is probably a mixture of motivations in both groups. Let me just say that everyone who contributed to the last prime discovery is awesome!

To my mind, this project is currently in a transition mode. We certainly might discover another prime or two under the current model, but eventually these numbers will get so large that GPUs become a more attractive alternative to the traditional CPU-based computation. My CPU can test a 100,000,000-digit number in about 2.5 years, but a Titan GPU can probably do that test in about 2.5 months. I admire the optimism of the Uncwilly crowd in pursuing this goal, but realistically, I think that the discovery of a 100,000,000-digit Mersenne prime is probably still two decades away. But I don't want to disparage their efforts, because it makes an attractive sub-project that enhances the profile of the overall search. (And who knows, they could beat the odds and find that large prime tomorrow!)

My philosophy, however, is that success breeds more success, so I think that the effort at the current leading edge of exponent testing contributes the most to the long-term health of this project. On the other hand, we have witnessed again and again that disparaging comments on others' efforts do seem to produce some discouragement and drag on the project.
I am not so concerned with bcp's recent departure, because I suspect that contributor might not have been with us much longer anyway, in spite of receiving generous words of support from other forum members. But I do think it is important to keep in mind that this is a [B]collaborative[/B] project, and that the more we set a positive, supportive tone, the more productive that collaboration will be in the long run. |
Bravo! :goodposting:
|
You do have to appreciate the penetration rate of usable CPUs vs usable GPUs though - to date the CPUs are still doing more work toward Prime95 than the GPUs - they're like the Zerg!
|
Small discrepancy between GPU72 and PrimeNet
My GPU72 LL participation is new ... and my PrimeNet LL participation is back after about 18 months away.
And all my LL work is being done under GPU72. This makes for a clean comparison of LL work and credits. Both systems list 115 completions ... but PrimeNet has me at 9980.6966 GHz-Days while GPU72 has 9994.822. Granted, that's well under a percent different, but it might be something you choose to look into when you are bored. |
[QUOTE=petrw1;340561]Granted well under a percent different but it might be something you choose to look into when you are bored.[/QUOTE]
I'm simply using PHP code provided by James (derived from C code provided by George) [URL="http://www.gpu72.com/software/Timings1_0.pm"]converted to Perl[/URL] for the credits and estimates. It is possible there's a slight discrepancy between this and what PrimeNet uses. I would say that a direct comparison between PrimeNet and GPU72 should not be expected to be exact (although it should be close); PrimeNet is canonical. |
primenet only gives partial credit if you submit a factor, but gpu72 will give you the given number of GHzDs credit whether you ran the whole 3 hour job w/o a factor or found a factor inside 5 minutes. Also sometimes I think primenet misinterprets my TF factors as some other type of work.
So there are some differences, though they are more minor with LL work than factoring or p-1. Depends on how many bits of factoring/p-1 you get with your LLs, maybe. |
[QUOTE=Aramis Wyler;340569]primenet only gives partial credit if you submit a factor, but gpu72 will give you the given number of GHzDs credit whether you ran the whole 3 hour job w/o a factor or found a factor inside 5 minutes. Also sometimes I think primenet misinterprets my TF factors as some other type of work.
So there are some differences, though they are more minor with LL work than factoring or p-1. Depends on how many bits of factoring/p-1 you get with your LLs, maybe.[/QUOTE] Credit is totally the same no matter how much P-1 or TF you do on it. EDIT: On LL tests, of course. |
[QUOTE=Aramis Wyler;340569]primenet only gives partial credit if you submit a factor, but gpu72 will give you the given number of GHzDs credit whether you ran the whole 3 hour job w/o a factor or found a factor inside 5 minutes.[/QUOTE]
Yes. I had asked James how to deal with this, and he explained (IIRC) that Prime95/mprime searches "bottom up", and mfakt* searches "top down". I decided that since finding factors was our goal, there wouldn't be great harm in giving the Worker credit for the full "bit" range in such cases (to be clear, if someone pledged to TF to 74, but they found a factor at 71.x, they get credit for TFing to 72). Plus, it made my job easier! :wink: [QUOTE=Aramis Wyler;340569]Also sometimes I think primenet misinterprets my TF factors as some other type of work.[/QUOTE] A known bug, which has a known workaround -- submit a no-factor-found result first (even if the result has already been submitted). Also, I believe James is working on this on the PrimeNet side. [QUOTE=Aramis Wyler;340569]So there are some differences, though they are more minor with LL work than factoring or p-1. Depends on how many bits of factoring/p-1 you get with your LLs, maybe.[/QUOTE] Estimates and credits for P-1 work are going to be where people see the most discrepancy. When GPU72 gives "Estimated GHz Days" on the assignment form, it can't know what B1 or B2 values are going to be used. And once it observes the work completed, it can determine what B1 or B2 values were used only with extra work, and then only in the case of a "no factor found" result. IMO, at the end of the day, this doesn't really matter. We're here to do work. And the rankings between individuals (when viewed within the context of GPU72 [B][I][U]or[/U][/I][/B] PrimeNet) are using the same metrics. |
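The full-bit-range credit rule Chris describes can be sketched like this (a toy illustration; `gpu72_credited_bits` is a hypothetical name, not GPU72's actual code):

```python
import math

def gpu72_credited_bits(start_bits: int, factor_bits: float) -> int:
    """Bit level GPU72 credits when a factor of roughly 2**factor_bits
    turns up while TFing upward from start_bits (hypothetical helper)."""
    # The worker is credited for completing the bit level the factor fell in,
    # even though mfakt* stopped early, rather than a prorated partial credit.
    return max(start_bits, math.ceil(factor_bits))

# Pledged 70 -> 74, factor found at 71.x: credited as if TFed to 72.
print(gpu72_credited_bits(70, 71.3))  # 72
```

PrimeNet, by contrast, prorates in this situation, which is one source of the small GPU72/PrimeNet credit differences discussed above.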
[QUOTE=chalsall;340575]Yes. I had asked James how to deal with this, and he explained (IIRC) that Prime95/mprime searches "bottom up", and mfakt* searches "top down".[/QUOTE]On [url]www.mersenne.ca[/url] I've now implemented a tweaked credit calculation that takes into account what class mfakt* would find the factor in, which adds a third possibility into the mix. :smile:
|
Also, James estimates LL CPU credit based on the FFT size it expects prime95 to use. Prime95 may actually choose a different FFT size. When prime95 reports an LL result it also reports the FFT size used which Primenet uses to calculate CPU credit. There is no way for GPU72 to get this FFT size info to match Primenet's credit calculations.
|
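To illustrate George's point, here's a toy sketch of why a credit estimate based on an *assumed* FFT length can differ from credit based on the FFT length prime95 *actually* used. The per-FFT-length rates and the `ll_credit` helper are made up for illustration; the real calculation lives in the Timings code mentioned earlier:

```python
# Made-up GHz-days-per-million-iterations rates, keyed by FFT length (in K).
ghzdays_per_m_iter = {3584: 0.95, 3840: 1.02}

def ll_credit(exponent: int, fft_k: int) -> float:
    # An LL test of M(p) runs about p iterations; credit scales with the
    # per-iteration cost of the FFT length used.
    return exponent / 1e6 * ghzdays_per_m_iter[fft_k]

estimated = ll_credit(64_000_000, 3584)  # FFT length the estimator assumed
actual = ll_credit(64_000_000, 3840)     # FFT length prime95 actually chose
print(round(actual - estimated, 2))      # a small GHz-days discrepancy
```

The shape of the discrepancy matches petrw1's report above: same exponent, same result, slightly different credit depending on which FFT length the calculation assumed.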
[QUOTE=Prime95;340525]Depending on your budget, wait for a Haswell CPU due in three weeks.[/QUOTE]
Shall we take it as a hint that you're planning some new improvements for Prime95 on this architecture? Due to AVX2, or FMA3, or...? |
[QUOTE=lycorn;340615]Shall we take it as a hint that you're planning some new improvements for Prime95 on this architecture? Due to AVX2, or FMA3, or...?[/QUOTE]
FMA3 should provide a performance boost. I can't quantify it yet. It could be a lot as Haswell doubles the theoretical peak FLOPs rate --- or it might be small because we are still memory bandwidth limited. This has been discussed somewhat in the Haswell thread. |
[QUOTE=davieddy;339856]I've got a really crackpot idea:
Take expos (>62M and not yet assigned for LL) in ascending order, and TF them to 74.[/QUOTE] As a demonstration that I don't dogmatically reject your suggestions, I had a deep think about this, [URL="https://www.gpu72.com/reports/estimated_completion/primenet/"]ran some numbers[/URL], and concluded that we can and should do this. (Although, I do find it a bit ironic that this suggestion came from you, after all the "debating" about how we couldn't sustain taking 63M and above to 74 bits...) So everyone knows, "What Makes Sense" is now "Lowest Exponent" to 74, starting from 62M. The LL P-1 form (and proxy) will now only assign work which is already TFed to at least 74 bits. To be fair to P-1 workers who get their assignments directly from Primenet, most of the ~6,000 candidates in the 62M range TFed to "only" 73 bits will remain with Primenet until we've returned enough TFed to 74 to satisfy their requests for work (should be about a week). In order to not starve Primenet's LL workers, those candidates (~3,500) with a P-1 run completed will remain with Primenet for the time being. We can decide if we want to bring those in for processing once enough candidates that have been TFed to 74 and P-1'ed are available to satisfy the request load. As always, comments welcome. |
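The new hand-out rules can be sketched as follows (illustrative names and data only, not GPU72's actual code): TF work goes lowest-exponent-first to anything not yet at 74 bits, while the LL P-1 form only hands out candidates already TFed to at least 74.

```python
# Illustrative candidate tuples: (exponent, current TF bit level).
candidates = [(62_100_000, 73), (62_200_000, 74), (63_000_000, 73)]

def next_tf_assignment(pool):
    # "Lowest Exponent" to 74: the lowest exponent not yet at 74 bits.
    return min((c for c in pool if c[1] < 74), default=None)

def ll_p1_pool(pool):
    # Only candidates already TFed to >= 74 are eligible for LL P-1 work.
    return [c for c in pool if c[1] >= 74]

print(next_tf_assignment(candidates))  # (62100000, 73)
print(ll_p1_pool(candidates))          # [(62200000, 74)]
```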
[QUOTE=chalsall;340692]... most of the ~6,000 candidates in the 62M range TFed to "only" [color=red]63 bits[/color] will remain with Primenet until we've returned enough TFed to [color=red]64[/color] to satisfy their requests for work (should be about a week).[/QUOTE]Presumably that was supposed to be "73 bits" and "74" respectively?
|
[QUOTE=James Heinrich;340695]Presumably that was supposed to be "73 bits" and "74" respectively?[/QUOTE]
Yeah.... :blush: ("Dyslexics of the world! Untie!") Corrected. Thanks for pointing that out. |
[QUOTE=chalsall;340692]
So everyone knows, "What Makes Sense" is now "Lowest Exponent" to 74, starting from 62M. The LL P-1 form (and proxy) will now only assign work which is already TFed to at least 74 bits. [/QUOTE] Well, could you maybe think about this again? One of my machines has picked up 2 new P-1 assignments that were even TF'd to 77! Unfortunately in the 332M range :ick:. I don't think this machine will handle stage 2 on those very well. And I'd estimate triple GPU72's expected completion time of ~180 days ... But if we have finished all P-1 in the LL range, then maybe this is what now Makes Sense[SUP]TM[/SUP]. |
lol, those dirty insubordinate computers never do what Chalsall means! They don't think! Sheesh! :grin:
|
[QUOTE=Aramis Wyler;340732]lol, those dirty insubordinate computers never do what Chalsall means! They don't think! Sheesh! :grin:[/QUOTE]
Let's hope they are not planning mutiny against us. |
[QUOTE=Bdot;340726]Well, could you maybe think about this again? One of my machines has picked up 2 new P-1 assignments that were even TF'd to 77! Unfortunately in the 332M range :ick:.[/QUOTE]
Oh crap!!! Sorry. Seriously stupid programmer error! I changed the sort order for LL P-1 to be FactTo desc, forgetting about the recently added 332M range... No good deed goes unpunished. Fixed. Please feel free to throw those back. Edit: Actually, there's a way I can automatically call these (seven) assignments back. I set the AIDs to be "". When the clients report the estimated completion, they'll get an "Invalid Assignment Key" from Primenet, and will not start (or will stop) work on them. Sorry again for this fsck-up. |
[QUOTE=kracker;340734]Let's hope they are not planning mutiny against us.[/QUOTE]
LOL... "The Skynet Funding Bill is passed. The system goes on-line August 4th, 1997. Human decisions are removed from strategic defense. Skynet begins to learn at a geometric rate. It becomes self-aware at 2:14 a.m. Eastern time, August 29th. In a panic, they try to pull the plug. |
[QUOTE=chalsall;340737]
Edit: Actually, there's a way I can automatically call these (seven) assignments back. I set the AIDs to be "". When the clients report the estimated completion, they'll get an "Invalid Assignment Key" from Primenet, and will not start (or will stop) work on them. Sorry again for this fsck-up.[/QUOTE] This has worked, thanks for cleaning this up. |
[QUOTE=Bdot;340824]This has worked, thanks for cleaning this up.[/QUOTE]
Thanks for bringing the problem to my attention so quickly. And for confirming that the "fix" (never before tested) worked. |
[QUOTE=davieddy;339856]I've got a really crackpot idea:
Take expos (>62M and not yet assigned for LL) in ascending order, and TF them to 74. D[/QUOTE] [QUOTE=chalsall;340294]That doesn't Make Sense[SUP](TM)[/SUP] for the project overall, since Primenet still occasionally hands out assignments in the 62M range for LLing. But anyone interested is free to take candidates reserved for P-1'ing to TF to 74 in that range (and some do). What [I][U]might[/U][/I] Make Sense[SUP](TM)[/SUP] in a couple of months is to revisit 65M, and start taking it up to 75. The empirical data will tell us -- we haven't yet fully absorbed into the [URL="https://www.gpu72.com/reports/estimated_completion/primenet/"]automated analysis[/URL] the loss of BCP19. Discussion welcome. Hopefully from serious players.[/QUOTE] [QUOTE=chalsall;340692]As a demonstration that I don't dogmatically reject your suggestions, I had a deep think about this, [URL="https://www.gpu72.com/reports/estimated_completion/primenet/"]ran some numbers[/URL], and concluded that we can and should do this. (Although, I do find it a bit ironic that this suggestion came from you, after all the "debating" about how we couldn't sustain taking 63M and above to 74 bits...) So everyone knows, "What Makes Sense" is now "Lowest Exponent" to 74, starting from 62M. The LL P-1 form (and proxy) will now only assign work which is already TFed to at least 74 bits. [/QUOTE] I love the way you operate, Chris: Take the absurd from my "reductio ad absurdum" argument and run with it! At least it makes for a good experiment. Let's see how it settles down in a month or so. Meantime, would you please remove this restraint whereby my posts are delayed to the extent that editing and real-time discussion are impossible? D |
I don't know about other "webmasters", but GPU72 (and all of my other publicly facing sites) has been inundated lately by Microsoft's "bingbot" spiders.
This might have something to do with M$'s recent advertising campaign about how "Bing is better". On a cost/benefit basis, this doesn't make sense to me, since I get only trivial amounts of incoming traffic from Bing, while I get lots from Google. For the record, I have blocked bingbot from accessing my various sites because of documented unfriendly requests (four a second!?!?!?). [CODE][20/May/2013:19:01:04 -0400] fr.gpu72.com 157.56.229.185 - - "GET /reports/worker/d472d38f268449b8d9d38488bdf89820/ HTTP/1.1" 403 327 "-" "Mozilla/5.0 (compatible; bingbot/2.0; +http://www.bing.com/bingbot.htm)" [20/May/2013:19:01:04 -0400] fr.gpu72.com 157.56.229.185 - - "GET /reports/worker/d9dc41bc1e310a202e4ff89dd6f74d4f/ HTTP/1.1" 403 327 "-" "Mozilla/5.0 (compatible; bingbot/2.0; +http://www.bing.com/bingbot.htm)" [20/May/2013:19:01:04 -0400] fr.gpu72.com 157.56.229.185 - - "GET /reports/worker/dd68e6103188b11290a8a04d288b56ce/ HTTP/1.1" 403 327 "-" "Mozilla/5.0 (compatible; bingbot/2.0; +http://www.bing.com/bingbot.htm)" [20/May/2013:19:01:04 -0400] fr.gpu72.com 157.56.229.185 - - "GET /reports/worker/f4723379fe039673c88e4e8e47e48fed/ HTTP/1.1" 403 327 "-" "Mozilla/5.0 (compatible; bingbot/2.0; +http://www.bing.com/bingbot.htm)" [20/May/2013:19:01:06 -0400] fr.gpu72.com 157.56.229.185 - - "GET /reports/worker/factors/047701d1e335849897e8b564967893b3/ HTTP/1.1" 403 335 "-" "Mozilla/5.0 (compatible; bingbot/2.0; +http://www.bing.com/bingbot.htm)" [20/May/2013:19:01:06 -0400] fr.gpu72.com 157.56.229.185 - - "GET /reports/worker/factors/067ccb9c43297464d2310d6c19589acb/bycredit/ HTTP/1.1" 403 344 "-" "Mozilla/5.0 (compatible; bingbot/2.0; +http://www.bing.com/bingbot.htm)" [20/May/2013:19:01:06 -0400] fr.gpu72.com 157.56.229.185 - - "GET /reports/worker/factors/08332c96355eaf6f2b369c8dc9b29568/ HTTP/1.1" 403 335 "-" "Mozilla/5.0 (compatible; bingbot/2.0; +http://www.bing.com/bingbot.htm)" [20/May/2013:19:01:06 -0400] fr.gpu72.com 157.56.229.185 - - "GET 
/reports/worker/factors/143471536f544b17590aad4023de7132/byexponent/ HTTP/1.1" 403 346 "-" "Mozilla/5.0 (compatible; bingbot/2.0; +http://www.bing.com/bingbot.htm)" [20/May/2013:19:01:07 -0400] fr.gpu72.com 157.56.229.185 - - "GET /reports/worker/factors/15aa325b9d9ee716c825070059f2337a/bycredit/ HTTP/1.1" 403 344 "-" "Mozilla/5.0 (compatible; bingbot/2.0; +http://www.bing.com/bingbot.htm)" [20/May/2013:19:01:07 -0400] fr.gpu72.com 157.56.229.185 - - "GET /reports/worker/factors/15aa325b9d9ee716c825070059f2337a/byexponent/ HTTP/1.1" 403 346 "-" "Mozilla/5.0 (compatible; bingbot/2.0; +http://www.bing.com/bingbot.htm)" [20/May/2013:19:01:07 -0400] fr.gpu72.com 157.56.229.185 - - "GET /reports/worker/factors/19437658c4ba52abaa8aacbb8da29007/byexponent/ HTTP/1.1" 403 346 "-" "Mozilla/5.0 (compatible; bingbot/2.0; +http://www.bing.com/bingbot.htm)" [20/May/2013:19:01:07 -0400] fr.gpu72.com 157.56.229.185 - - "GET /reports/worker/factors/1a660fdf1f559b6df4f6f46ddbbbc5cf/ HTTP/1.1" 403 335 "-" "Mozilla/5.0 (compatible; bingbot/2.0; +http://www.bing.com/bingbot.htm)" [20/May/2013:19:01:08 -0400] fr.gpu72.com 157.56.229.185 - - "GET /reports/worker/factors/20b4360ad24d3ec3169651b1355e366b/byexponent/ HTTP/1.1" 403 346 "-" "Mozilla/5.0 (compatible; bingbot/2.0; +http://www.bing.com/bingbot.htm)" [20/May/2013:19:01:08 -0400] fr.gpu72.com 157.56.229.185 - - "GET /reports/worker/factors/243c958917060d5be14181a224a49383/byexponent/ HTTP/1.1" 403 346 "-" "Mozilla/5.0 (compatible; bingbot/2.0; +http://www.bing.com/bingbot.htm)" [20/May/2013:19:01:08 -0400] fr.gpu72.com 157.56.229.185 - - "GET /reports/worker/factors/24d5f681f3717894683da9313d933c13/ HTTP/1.1" 403 335 "-" "Mozilla/5.0 (compatible; bingbot/2.0; +http://www.bing.com/bingbot.htm)" [20/May/2013:19:01:08 -0400] fr.gpu72.com 157.56.229.185 - - "GET /reports/worker/factors/2877aa0e1c37b72ea13b99ae1f87e5f9/ HTTP/1.1" 403 335 "-" "Mozilla/5.0 (compatible; bingbot/2.0; +http://www.bing.com/bingbot.htm)" [20/May/2013:19:01:09 
-0400] fr.gpu72.com 157.56.229.185 - - "GET /reports/worker/factors/29cc984f9602dca466dead7f279c17e1/ HTTP/1.1" 403 335 "-" "Mozilla/5.0 (compatible; bingbot/2.0; +http://www.bing.com/bingbot.htm)" [20/May/2013:19:01:09 -0400] fr.gpu72.com 157.56.229.185 - - "GET /reports/worker/factors/29cc984f9602dca466dead7f279c17e1/bydate/ HTTP/1.1" 403 342 "-" "Mozilla/5.0 (compatible; bingbot/2.0; +http://www.bing.com/bingbot.htm)" [20/May/2013:19:01:09 -0400] fr.gpu72.com 157.56.229.185 - - "GET /reports/worker/factors/29cc984f9602dca466dead7f279c17e1/byexponent/ HTTP/1.1" 403 346 "-" "Mozilla/5.0 (compatible; bingbot/2.0; +http://www.bing.com/bingbot.htm)" [20/May/2013:19:01:09 -0400] fr.gpu72.com 157.56.229.185 - - "GET /reports/worker/factors/352a6b2d64850085d5122c8b4fcce4c6/bycredit/ HTTP/1.1" 403 344 "-" "Mozilla/5.0 (compatible; bingbot/2.0; +http://www.bing.com/bingbot.htm)"[/CODE] [CODE][chalsall@burrow ~]$ dig -x 157.56.229.185 ; <<>> DiG 9.8.2rc1-RedHat-9.8.2-0.10.rc1.el6_3.6 <<>> -x 157.56.229.185 ;; global options: +cmd ;; Got answer: ;; ->>HEADER<<- opcode: QUERY, status: NOERROR, id: 32339 ;; flags: qr rd ra; QUERY: 1, ANSWER: 1, AUTHORITY: 0, ADDITIONAL: 0 ;; QUESTION SECTION: ;185.229.56.157.in-addr.arpa. IN PTR ;; ANSWER SECTION: 185.229.56.157.in-addr.arpa. 2768 IN PTR msnbot-157-56-229-185.search.msn.com. [/CODE] |
I just checked today's logs for [url]www.mersenne.ca[/url] and I don't see any undue bingbot activity, and nothing of the unfriendly hammering you quote above.
But totally agree on the referral bias. For May 2013 (so far), the referral count: Google: 1722 Bing: 10 |
[QUOTE=James Heinrich;341058]I just checked today's logs for [url]www.mersenne.ca[/url] and I don't see any undue bingbot activity, and nothing of the unfriendly hammering you quote above.
But totally agree on the referral bias. For May 2013 (so far), the referral count: Google: 1722 Bing: 10[/QUOTE] I excluded BING (and Yahoo and Altavista) spiders from DoubleMersennes since the start. I guess that Google does a good job, and whoever comes from the other search engines doesn't really need my site :smile: Luigi |
At the risk of [url=http://www.mersenneforum.org/showpost.php?p=341112&postcount=1886]cross-posting[/url], I'll repeat my request here, since it's relevant to discussions in this thread.
It has been brought to my attention that my [url=http://www.mersenne.ca/cudalucas.php]CUDALucas throughput page[/url] is actually quite inaccurate, leading to inaccurate crossover-point estimates. Therefore, may I request that everyone run a quick benchmark for me? I'd like to validate (and update) the lookup table I use. Please run this simple benchmark on a wide variety of GPUs you have available and email the results to [email]james@mersenne.ca[/email] (or PM me here if you prefer).[code]CUDALucas -info -cufftbench 1048576 8388608 1048576[/code] |
Also, please run the benchmark with CUDALucas v2.04 if possible.
|
[QUOTE=James Heinrich;341129]Also, please run the benchmark with CUDALucas v2.04 if possible.[/QUOTE]
For GTX580 and GTX570, the benchmarks on your site look [B]perfectly accurate for me[/B] (adjusting the numbers for the default clock, as my water-cooled rigs are overclocked -- I mean, they come overclocked from the factory, like 781MHz for Asus' gtx580, for example, but I overclock them more in RL, like 820, 850, etc, depending on how hot it is outside -- and using CUDALucas's default FFT lengths, because a small speed increase can be gained by fine-tuning the FFT). In fact, what you really need there is a column with the clock at which the numbers were taken, as the results differ for different clock speeds. |
[QUOTE=LaurV;341201]For GTX580 and GTX570, the benchmarks on your site look [B]perfectly accurate for me[/B][/QUOTE]Benchmarks on my site are for default clock speeds for that GPU (e.g. 732MHz for GTX 570). [url=http://en.wikipedia.org/wiki/GTX_570#GeForce_500_.285xx.29_series]Wikipedia[/url] has a convenient list for default clock speeds.
I would appreciate your benchmark results (for as many different GPU families as you have), since the data from my own GTX 570 (and other users' results) show significant deviation from my posted benchmark data. |
OK then...
[CODE]e:\CudaLucas\CL1>cl204b4020x64 -d 1 -info -cufftbench 1048576 8388608 1048576 ------- DEVICE 1 ------- name GeForce GTX 580 totalGlobalMem 1610416128 sharedMemPerBlock 49152 regsPerBlock 32768 warpSize 32 memPitch 2147483647 maxThreadsPerBlock 1024 maxThreadsDim[3] 1024,1024,64 maxGridSize[3] 65535,65535,65535 totalConstMem 65536 Compatibility 2.0 clockRate (MHz) 1646 textureAlignment 512 deviceOverlap 1 multiProcessorCount 16 CUFFT bench start = 1048576 end = 8388608 distance = 1048576 CUFFT_Z2Z size= 1048576 time= 0.508452 msec CUFFT_Z2Z size= 2097152 time= 1.031502 msec CUFFT_Z2Z size= 3145728 time= 1.925885 msec CUFFT_Z2Z size= 4194304 time= 2.622479 msec CUFFT_Z2Z size= 5242880 time= 3.118799 msec CUFFT_Z2Z size= 6291456 time= 3.944277 msec CUFFT_Z2Z size= 7340032 time= 4.284509 msec CUFFT_Z2Z size= 8388608 time= 5.400013 msec e:\CudaLucas\CL1>cl204b4020x64 -d 1 -info -cufftbench 1048576 8388608 1048576 ------- DEVICE 1 ------- name GeForce GTX 580 totalGlobalMem 1610416128 sharedMemPerBlock 49152 regsPerBlock 32768 warpSize 32 memPitch 2147483647 maxThreadsPerBlock 1024 maxThreadsDim[3] 1024,1024,64 maxGridSize[3] 65535,65535,65535 totalConstMem 65536 Compatibility 2.0 clockRate (MHz) 1564 textureAlignment 512 deviceOverlap 1 multiProcessorCount 16 CUFFT bench start = 1048576 end = 8388608 distance = 1048576 CUFFT_Z2Z size= 1048576 time= 0.535127 msec CUFFT_Z2Z size= 2097152 time= 1.085725 msec CUFFT_Z2Z size= 3145728 time= 2.021942 msec CUFFT_Z2Z size= 4194304 time= 2.746189 msec CUFFT_Z2Z size= 5242880 time= 3.256758 msec CUFFT_Z2Z size= 6291456 time= 4.151508 msec CUFFT_Z2Z size= 7340032 time= 4.532980 msec CUFFT_Z2Z size= 8388608 time= 5.721727 msec e:\CudaLucas\CL1>[/CODE] |
Thanks. I can no longer edit my post, but my [url=http://www.mersenne.ca/cudalucas.php]benchmark request[/url] now includes a request for the first 20000 iterations of[code]CUDALucas 57885161[/code]
|
It would be nice if there were a graph for DC and LL, along with P-1, in the monthly and weekly graphs, etc. :smile:
|
[QUOTE=kracker;342139]It would be nice if there were a graph for DC and LL, along with P-1, in the monthly and weekly graphs, etc. :smile:[/QUOTE]
Good point! The LL and DC work types were a bit of an afterthought -- I hadn't actually realized that graphing them might be interesting. I've just taken delivery of seven new servers I need to configure for a client, so this new graph won't be ready for a couple of weeks. But please consider it on my "ToDo" list. |
[QUOTE=James Heinrich;341994]..request for the first 20000 iterations of[code]CUDALucas 57885161[/code][/QUOTE]
Here you are, I had to run it a couple of times, till I realized that the number of iterations is set much higher in the ini file, nothing came out for 20k iterations :smile:, then that "polite" switch was wrong, then I had to delete the checkpoints between runs, etc, hehe.. well... I am aging... (I did not want to change my ini files, so I gave cmd line parameters). Therefore the rows with "iteration 30k" and "40k" (last two rows for each test) contain the correct timing (because row "20k" was run partially with "impolite" switch, till I changed it). The last test is a bit of FFT "tuning", the program selects quite a bad FFT for this expo. On gtx580, the 3136 is much faster than 3200 (even faster than 3072, no joke!) [CODE] >cl204b4020x64 -info -d 1 -c 10000 57885161 ------- DEVICE 1 ------- name GeForce GTX 580 totalGlobalMem 1610416128 sharedMemPerBlock 49152 regsPerBlock 32768 warpSize 32 memPitch 2147483647 maxThreadsPerBlock 1024 maxThreadsDim[3] 1024,1024,64 maxGridSize[3] 65535,65535,65535 totalConstMem 65536 Compatibility 2.0 [COLOR=Red]clockRate (MHz) 1564[/COLOR] <<<this is the default, the card is factory OC to 782MHz by Asus textureAlignment 512 deviceOverlap 1 multiProcessorCount 16 mkdir: cannot create directory `backup1': File exists Starting M57885161 fft length = 2880K Running careful round off test for 1000 iterations. If average error >= 0.25, the test will restart with a larger FFT length. Iteration = 32 < 1000 && err = 0.50000 >= 0.35, increasing n from 2880K Starting M57885161 fft length = 3072K Running careful round off test for 1000 iterations. If average error >= 0.25, the test will restart with a larger FFT length. Iteration = 80 < 1000 && err = 0.35156 >= 0.35, increasing n from 3072K Starting M57885161 fft length = 3200K Running careful round off test for 1000 iterations. If average error >= 0.25, the test will restart with a larger FFT length. 
Iteration 100, average error = 0.08149, max error = 0.11719 Iteration 200, average error = 0.09383, max error = 0.11719 Iteration 300, average error = 0.09701, max error = 0.11719 Iteration 400, average error = 0.09848, max error = 0.10938 Iteration 500, average error = 0.09981, max error = 0.11719 Iteration 600, average error = 0.10009, max error = 0.11719 Iteration 700, average error = 0.10052, max error = 0.10938 Iteration 800, average error = 0.10090, max error = 0.11328 Iteration 900, average error = 0.10110, max error = 0.10938 Iteration 1000, average error = 0.10130 < 0.25 (max error = 0.12500), continuing test. Iteration 10000 M( 57885161 )C, 0x76c27556683cd84d, n = 3200K, CUDALucas v2.04 Beta err = 0.1250 (1:05 real, 6.5045 ms/iter, ETA 104:33:36) p -polite 0 Iteration 20000 M( 57885161 )C, 0xfd8e311d20ffe6ab, n = 3200K, CUDALucas v2.04 Beta err = 0.1328 (0:58 real, 5.7869 ms/iter, ETA 93:00:29) Iteration 30000 M( 57885161 )C, 0xce0d85ab0065a232, n = 3200K, CUDALucas v2.04 Beta err = 0.1289 (0:57 real, 5.6789 ms/iter, ETA 91:15:23) Iteration 40000 M( 57885161 )C, 0x6746379dfc966410, n = 3200K, CUDALucas v2.04 Beta err = 0.1328 (0:57 real, 5.6825 ms/iter, ETA 91:17:54) SIGINT caught, writing checkpoint. Estimated time spent so far: 4:32 >cl204b4020x64 -info -d 1 -c 10000 57885161 ------- DEVICE 1 ------- <... snip values same as above test...> [COLOR=Red]clockRate (MHz) 1646[/COLOR] <... snip values same as above test...> Iteration 900, average error = 0.10110, max error = 0.10938 Iteration 1000, average error = 0.10130 < 0.25 (max error = 0.12500), continuing test. 
p -polite 0 Iteration 10000 M( 57885161 )C, 0x76c27556683cd84d, n = 3200K, CUDALucas v2.04 Beta err = 0.1250 (0:56 real, 5.6304 ms/iter, ETA 90:30:33) Iteration 20000 M( 57885161 )C, 0xfd8e311d20ffe6ab, n = 3200K, CUDALucas v2.04 Beta err = 0.1328 (0:54 real, 5.3918 ms/iter, ETA 86:39:27) Iteration 30000 M( 57885161 )C, 0xce0d85ab0065a232, n = 3200K, CUDALucas v2.04 Beta err = 0.1289 (0:54 real, 5.3914 ms/iter, ETA 86:38:10) Iteration 40000 M( 57885161 )C, 0x6746379dfc966410, n = 3200K, CUDALucas v2.04 Beta err = 0.1328 (0:54 real, 5.3881 ms/iter, ETA 86:34:10) SIGINT caught, writing checkpoint. Estimated time spent so far: 3:44 >cl204b4020x64 -info -d 1 -c 10000 -f 3136k 57885161 <... snip values same as above test...> [COLOR=Red]clockRate (MHz) 1646[/COLOR] <... snip values same as above test...> Starting M57885161 fft length = 3136K Running careful round off test for 1000 iterations. If average error >= 0.25, the test will restart with a larger FFT length. Iteration 100, average error = 0.15533, max error = 0.22656 Iteration 200, average error = 0.18332, max error = 0.23438 Iteration 300, average error = 0.19044, max error = 0.21875 Iteration 400, average error = 0.19509, max error = 0.22803 Iteration 500, average error = 0.19776, max error = 0.23438 Iteration 600, average error = 0.19979, max error = 0.23438 Iteration 700, average error = 0.20043, max error = 0.23438 Iteration 800, average error = 0.20119, max error = 0.22461 Iteration 900, average error = 0.20133, max error = 0.22656 Iteration 1000, average error = 0.20198 < 0.25 (max error = 0.21875), continuing test. p -polite 0 Iteration 10000 M( 57885161 )C, 0x76c27556683cd84d, n = 3136K, CUDALucas v2.04 Beta err = 0.2578 (0:55 real, 5.4927 ms/iter, ETA 88:17:43) t [COLOR=Red] disabling -t[/COLOR] <<<<grrr! I forgot this, I always keep it enabled... I don't have time now to repeat the tests, sorry! 
Iteration 20000 M( 57885161 )C, 0xfd8e311d20ffe6ab, n = 3136K, CUDALucas v2.04 Beta err = 0.2539 (0:50 real, 5.0379 ms/iter, ETA 80:58:12) Iteration 30000 M( 57885161 )C, 0xce0d85ab0065a232, n = 3136K, CUDALucas v2.04 Beta err = 0.2344 (0:50 real, 4.9700 ms/iter, ETA 79:51:55) Iteration 40000 M( 57885161 )C, 0x6746379dfc966410, n = 3136K, CUDALucas v2.04 Beta err = 0.2178 (0:50 real, 4.9710 ms/iter, ETA 79:52:01) SIGINT caught, writing checkpoint. Estimated time spent so far: 3:32 > [/CODE][edit: grrrr... the -t switch still adds few percents to the results... I always keep it enabled (better safe than sorry) so I forgot to disable it! I hope you don't mind, the SWMBO is pushing me to go lunch and shopping (blearh!), no time to run the tests again!] |
[QUOTE=chalsall;340692]So everyone knows, "What Makes Sense" is now "Lowest Exponent" to 74, starting from 62M. The LL P-1 form (and proxy) will now only assign work which is already TFed to at least 74 bits.
To be fair to P-1 workers who get their assignments directly from Primenet, most of the ~6,000 candidates in the 62M range TFed to "only" 73 bits will remain with Primenet until we've returned enough TFed to 74 to satisfy their requests for work (should be about a week). In order to not starve Primenet's LL workers, those candidates (~3,500) with a P-1 run completed will remain with Primenet for the time being. We can decide if we want to bring those in for processing once enough candidates that have been TFed to 74 and P-1'ed are available to satisfy the request load. As always, comments welcome.[/QUOTE] Does this mean that an exponent is trial factored to some bit level (I saw a table somewhere a while ago detailing how far to go for a range of exponents), then P-1'ed and THEN released for LL testing? Part of me feels like not enough P-1 gets done to keep up with all the LL-testing. EDIT: Or does "What Makes Sense" take care of it if it becomes an issue? I don't know that I've ever seen "What Makes Sense" in Prime95 generate anything other than LL tests. |
[QUOTE=TheMawn;343339]Does this mean that an exponent is trial factored to some bit level (I saw a table somewhere a while ago detailing how far to go for a range of exponents), then P-1'ed and THEN released for LL testing?[/QUOTE]
Yes. [QUOTE=TheMawn;343339]Part of me feels like not enough P-1 gets done to keep up with all the LL-testing.[/QUOTE] I am happy to say that the "deep" P-1'ing is very slightly more than keeping up with the LLing. We're "riding the wave".... :smile: |
[QUOTE=chalsall;343425]We're "riding the wave".... :smile:[/QUOTE]
... and heading for a wipeout. |
[QUOTE=chalsall;343425]I am happy to say that the "deep" P-1'ing is very slightly more than keeping up with the LLing.
We're "riding the wave".... :smile:[/QUOTE] Ah! Glad to hear it. I know it's kind of tempting to stick to the more glamorous first-time LL tests since that's where all the money and glory is. Hopefully my old first-gen i3 is doing my share of the P-1. On a different topic, is there any way to see how much trial factoring is being done through a GPU versus through a CPU, now that GPUs are frankly embarrassing CPUs? |
[QUOTE=TheMawn;343447]On a different topic, is there any way to see how much trial factoring is being done through a GPU versus through a CPU, now that GPUs are frankly embarrassing CPUs?[/QUOTE]
I don't have access to enough data from Primenet to be able to answer for only GPU vs. CPU. But it is fairly safe to say that [URL="http://www.mersenne.info/trial_factored_tabular_delta_7/2/60000000/"]this report[/URL] represents mostly GPU efforts below 67M. |
[QUOTE=TheMawn;343447]Ah! Glad to hear it. I know it's kind of tempting to stick to the more glamorous first-time LL tests since that's where all the money and glory is. Hopefully my old first-gen i3 is doing my share of the P-1.
On a different topic, is there any way to see how much trial factoring is being done through a GPU versus through a CPU, now that GPUs are frankly embarrassing CPUs?[/QUOTE] One sign of this is the (lack of) progress of TF-LMH (doing TF to 66 from 100M to 999M). I have been casually observing it for a few years. The rate of progress from 65 to 66 is MUCH MUCH less than half the expected rate of progress last round from 64 to 65. I think it is a combination of factors: many of the older PCs that were doing this have simply died ... but also many people are getting GPUs and can now do TF in the needed ranges instead of 100M+. |
[QUOTE=petrw1;343464]One sign of this is the (lack of) progress of TF-LMH (doing TF to 66 from 100M to 999M). I have been casually observing it for a few years. The rate of progress from 65 to 66 is MUCH MUCH less than half the expected rate of progress last round from 64 to 65. I think it is a combination of factors: many of the older PCs that were doing this have simply died ... but also many people are getting GPUs and can now do TF in the needed ranges instead of 100M+.[/QUOTE]
Ah! That reminds me that now I have my faster GPU back, I should throw in an occasional 332M TF job. |
[QUOTE=petrw1;343464]One sign of this is the (lack of) progress of TF-LMH (doing TF to 66 from 100M to 999M). I have been casually observing it for a few years. The rate of progress from 65 to 66 is MUCH MUCH less than half the expected rate of progress last round from 64 to 65. I think it is a combination of factors: many of the older PCs that were doing this have simply died ... but also many people are getting GPUs and can now do TF in the needed ranges instead of 100M+.[/QUOTE]
I suppose if trial factoring ever gets really, really far ahead of the wave, I may send my GPU to play with the big exponents for a while. As a test, I did 65 to 71 for something around 400 million and I got a real kick out of how fast 65 to 66 went. How does a person go about working on getting those exponents from 65 to 66? I don't see how it can be done through GPU72 because it will only let you factor to a minimum of 71. |
It doesn't really matter in GPU72: the lowest factoring level available in that range is 72. Things may be different through other channels. Uncwilly is the most likely source of information about such things.
|
[QUOTE=TheMawn;343507]How does a person go about working on getting those exponents from 65 to 66? I don't see how it can be done through GPU72 because it will only let you factor to a minimum of 71.[/QUOTE]
One doesn't. At least currently. GPU72's agenda is to help with the LL and DC (and now the "100M digits") wave-fronts. Thus, we carefully balance the available firepower with the available candidates, taking into consideration James' analysis as to how deep TFing makes sense. Currently we're taking everything available in the 60M and above range to 74 "bits". I'm hoping we can start going to 75 bits at 64M or 65M. |
[QUOTE=TheMawn;343507]How does a person go about working on getting those exponents from 65 to 66? I don't see how it can be done through GPU72 because it will only let you factor to a minimum of 71.[/QUOTE]
That has to be done manually. Get the exponents from the server pages (you can get there via the GPUto72 "Trial Factored Depth" reports) in text form, paste them into an empty worktodo.txt file, and edit it. Report the findings through the server's manual pages. |
[QUOTE=chalsall;343510]Thus, we carefully balance the available firepower with the available candidates, taking into consideration James' analysis as to how deep TFing makes sense.
Currently we're taking everything available in the 60M and above range to 74 "bits". I'm hoping we can start going to 75 bits at 64M or 65M.[/QUOTE] As I recall, that analysis is dependent on a GHz-Days-ish measure of how much work it is to trial factor vs LL-test. The idea is that the odds of finding a factor between 2[SUP]70[/SUP] and 2[SUP]71[/SUP] are roughly 1 in 70 (most of my work has been in the low 70s and, lo and behold, about 1 in 70 turn up a factor), so it makes sense to spend about one-seventieth of the time it takes to run an LL test on trial factoring. On the other hand, GPUs are ridiculously fast at trial factoring (my GTX 670 is equivalent to 285GHz). If that speed can't be harnessed for LL-tests (I recall reading somewhere that CUDALucas is faster than Prime95, but definitely not THAT much faster) then it might make sense to go deeper with the TF than the analysis suggests. It would make sense on a time basis (factoring an exponent up to 74 would take me about 5 hours, but doing the LL-test would take me nearly 500) but not so much on a GHz-Days basis. |
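The break-even arithmetic in the post above can be sketched in a few lines. A rough illustration only: the 1-in-70 heuristic, the ~500-hour LL time, and the ~5-hour TF time are the figures assumed in that post, not measured project data.

```python
# Rough TF-vs-LL break-even sketch using the figures from the post above.
# Heuristic: the chance a Mersenne number has a factor between 2^b and 2^(b+1)
# is roughly 1/b, so TFing one more bit level is "expected" to save about
# (LL time)/b of LL work by eliminating the test entirely.

def expected_ll_hours_saved(bit_level, ll_hours):
    """Expected LL hours saved by taking TF from bit_level to bit_level+1."""
    return ll_hours / bit_level

ll_hours = 500.0  # assumed CPU time for one LL test at this exponent size
tf_hours = 5.0    # assumed GPU time to TF the same exponent one more bit level

saved = expected_ll_hours_saved(74, ll_hours)
print(f"expected LL hours saved: {saved:.1f}")        # ~6.8 hours
print(f"worth it on wall-clock time: {saved > tf_hours}")
```

On these numbers the extra bit pays off in wall-clock terms, which matches the post's point: the comparison flips depending on whether GPU hours and CPU hours are counted as interchangeable (the GHz-Days view) or as separate resources (the time view).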
[QUOTE=TheMawn;343548]
On the other hand, GPUs are ridiculously fast at trial factoring (my GTX 670 is equivalent to 285GHz). If that speed can't be harnessed for LL-tests (I recall reading somewhere that CUDALucas is faster than Prime95, but definitely not THAT much faster) then it might make sense to go deeper with the TF than the analysis suggests. It would make sense on a time basis (factoring an exponent up to 74 would take me about 5 hours, but doing the LL-test would take me nearly 500) but not so much on a GHz-Days basis.[/QUOTE] We don't know the performance of LL tests on AMD cards yet, but anyway, looking at the CuLu benchmarks, I don't know why I (only me, probably) would run an LL test on a GPU, considering the speed is not a *huge* lot faster than a similar-level CPU, and I could run TF on my GPU instead... |
[QUOTE=kracker;343549]We don't know the performance of LL tests on AMD cards yet, but anyway, looking at the CuLu benchmarks, I don't know why I (only me, probably) would run an LL test on a GPU, considering the speed is not a *huge* lot faster than a similar-level CPU, and I could run TF on my GPU instead...[/QUOTE]
Aight. That's basically what I was saying, but better put. I'm glad someone understood. In short, why run 2x-faster LL if I can run 50x-faster TF? This is why I think it might be worth coming back to trial factor an extra bit if we get far enough ahead of the wave. I would rather do work today to clear out tomorrow's exponents than next year's. |
[QUOTE=TheMawn;343548]As I recall, that analysis is dependent on a GHz-Days-ish measure of how much work it is to trial factor vs LL-test.
...then it might make sense to go deeper with the TF than the analysis suggests.[/QUOTE]The oft-quoted [url=http://www.mersenne.ca/cudalucas.php?model=487]analysis[/url] measures TF vs L-L both on a GPU. Currently that analysis assumes mfaktc vs CUDALucas, and ignores the P-1 on GPU part of the equation. When I get CUDALucas on AMD benchmarks, and CUDAPm1 becomes somewhat stable, I will attempt to rebalance the analysis. |
[QUOTE=TheMawn;343507]I got a real kick out of how fast 65 to 66 went.
How does a person go about working on getting those exponents from 65 to 66? I don't see how it can be done through GPU72 because it will only let you factor to a minimum of 71.[/QUOTE] 2[SUP]74[/SUP]/2[SUP]66[/SUP] = 2[SUP]8[/SUP] [QUOTE=chalsall;343510]One doesn't. At least currently. GPU72's agenda is to help with the LL and DC (and now the "100M digits") wave-fronts. Thus, we carefully balance the available firepower with the available candidates, taking into consideration James' analysis as to how deep TFing makes sense. Currently we're taking everything available in the 60M and above range to 74 "bits". I'm hoping we can start going to 75 bits at 64M or 65M.[/QUOTE] As you may have noticed, Chris and I don't always agree on these matters. Around 8:00 UTC I was assigned a 64M expo to LL. Virgin territory. ETA 20th Sept (~2GHz Core2). TFed to 74 and P-1'ed well (although I could have done this myself). Yummy. 74 bits is one more than the sustainable level. Guess what I intend to do with it. [spoiler]Stick it up .........[/spoiler] David |
[QUOTE=TheMawn;343507]I suppose if trial factoring ever gets really, really far ahead of the wave, I may send my GPU to play with the big exponents for a while. As a test, I did 65 to 71 for something around 400 million and I got a real kick out of how fast 65 to 66 went.
How does a person go about working on getting those exponents from 65 to 66? I don't see how it can be done through GPU72 because it will only let you factor to a minimum of 71.[/QUOTE] As has been mentioned in prior posts, this may not be the best use of a GPU, but if you simply select the TF-LMH work type it will assign you TF-to-66 work in that range (currently 194M). If you really want 400M, for example, I believe you have to get it manually ... but I have to guess the big players in these higher ranges probably wrote something of their own to do this. Players like (not meant to be all-inclusive): monst, America64, Linded, cl0ck3r ... basically anyone who completes A LOT of TF results at once in those higher ranges according to: [url]http://www.mersenne.org/report_recent_cleared/[/url] |
I regret to say this, but I'll be taking a semi-hiatus from the project until further notice. Due to the cost of electricity these days, I can no longer afford to keep my computers on for as long as I used to. I'll still be doing GPU factoring, but my throughput will be noticeably reduced. I hope everyone understands.
|
[QUOTE=ixfd64;345104]I hope everyone understands.[/QUOTE]
Of course. Thank you for what you've done, and what you might continue to do. |
Agreed. Thanks for all you've done. :smile:
|
[QUOTE=petrw1;343614]As has been mentioned in prior posts, this may not be the best use of a GPU, but if you simply select the TF-LMH work type it will assign you TF-to-66 work in that range (currently 194M). If you really want 400M, for example, I believe you have to get it manually ... but I have to guess the big players in these higher ranges probably wrote something of their own to do this. Players like (not meant to be all-inclusive): monst, America64, Linded, cl0ck3r ... basically anyone who completes A LOT of TF results at once in those higher ranges according to: [URL]http://www.mersenne.org/report_recent_cleared/[/URL][/QUOTE]
Tbh, I was a little amused to hear myself called a "big player" when I'm only running a GTX 460 plus a couple of CPU cores. As for my method for doing bulk work, I:
1) create a report like this: [url]http://mersenne.org/report_factoring_effort/?exp_lo=666660000&exp_hi=666670000&bits_lo=0&bits_hi=999&txt=1&B1=Get+Data[/url]
2) select all, copy, then paste into a blank worktodo.txt (Notepad)
3) Find/Replace, in this case "6666" with "Factor=6666"
4) Find/Replace ",," with ",xx", with xx as the final bit depth desired
5) save, then run mfaktc. Rinse, lather, repeat.
I usually use mersenne.info's "Change" metric to pick a range with little or no recent activity. If I have a line like Factor=4601Factor=460123,65,66 I'll use F/R to replace xFactor= with x, for 9>=x>=0 |
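The copy-and-Find/Replace recipe above can also be scripted. A minimal Python sketch, assuming each pasted report line starts with "exponent,current_bits," (the exact report layout is inferred from the post, not an official spec). Working line by line also sidesteps the Factor=4601Factor= pitfall mentioned above, since "Factor=" is prepended once per line instead of by substring replacement:

```python
# Sketch: turn pasted factoring-effort report lines like "666660037,65,"
# into mfaktc worktodo entries "Factor=666660037,65,70".
# The input line format is an assumption based on the post above.

TARGET_BITS = 70

def report_to_worktodo(lines, target=TARGET_BITS):
    entries = []
    for line in lines:
        line = line.strip().rstrip(",")
        if not line:
            continue
        exponent, bits = line.split(",")[:2]
        if int(bits) >= target:   # skip anything already factored deep enough
            continue
        entries.append(f"Factor={exponent},{bits},{target}")
    return entries

sample = ["666660037,65,", "666660071,71,", ""]
for entry in report_to_worktodo(sample):
    print(entry)                  # Factor=666660037,65,70
```

Skipping exponents already at or beyond the target depth does the same job as excluding them when generating the report in the first place.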
If my .txt looks like this, does the second exponent just throw an error so the program moves on? That would be a decent way to try to factor everything to a specific depth and leave everything else behind.
666665983,68,70, 666666071,73,70, This is if I was trying to factor EVERYTHING to 70 but no higher. |
[QUOTE=TheMawn;345221]This is if I was trying to factor EVERYTHING to 70 but no higher.[/QUOTE]Yes, but it would make more sense to exclude exponents already factored to 2[sup]70[/sup] (or beyond) when you're gathering your data.
|
[QUOTE=James Heinrich;345222]Yes, but it would make more sense to exclude exponents already factored to 2[sup]70[/sup] (or beyond) when you're gathering your data.[/QUOTE]
Whoops, I completely did not see that feature. Glad someone is paying attention. |
We're out of LL tests from GPU72. Is that just a temporary shortage? TF, P-1 and DC seem to work ...
|
[QUOTE=Bdot;345247]We're out of LL tests from GPU72. Is that just a temporary shortage? TF, P-1 and DC seem to work ...[/QUOTE]
There seem to be 20 of various depths, ATM. That [I]is[/I] a pretty small pool. |
[QUOTE=Bdot;345247]We're out of LL tests from GPU72. Is that just a temporary shortage? TF, P-1 and DC seem to work ...[/QUOTE]
The last few LL requests I made with one of my PCs came from PrimeNet instead of GPU72. There could be an issue with G72 getting new LL work. |
[QUOTE=petrw1;345255]The last few LL requests I made with one of my PCs came from PrimeNet instead of GPU72. There could be an issue with G72 getting new LL work.[/QUOTE]
Yes... Something's going on. Part of it was an SPE on my part -- I was only transferring to "Anon" candidates below 52M. The 20 showing as available are in fact in a "Warn" state -- the original person assigned each candidate didn't "claim" it within 60 days, and "Spidy" recaptured it. I've brought in fifty candidates to temporarily satisfy requests; unfortunately they're all up in the 61M range. I'll monitor Spidy tonight to see what's going on, and to refill the queue with lower candidates. While I'm writing -- sorry for not being as attentive as I should be lately; I'm in the middle of two big contracts. Issues, questions, etc. are being queued, and will be dealt with as soon as possible. |
Well, I've been grabbing LL-TF assignments in batches of 100, which covers me for a couple of weeks. Should I be grabbing fewer, more often?
I didn't have any problems. |
AFAI understood, the issue reported concerned LL tests, not LL-TF.
|
[QUOTE=lycorn;345373]AFAI understood, the issue reported concerned LL tests, not LL-TF.[/QUOTE]
You understood correctly. This has been fixed -- as usual, it was a stupid human error. |
Powered by vBulletin® Version 3.8.11
Copyright ©2000 - 2021, Jelsoft Enterprises Ltd.