Just in time TF'ing...
So, thanks to those who are choosing "LL Depth" factoring for your Colab Instances.
Somewhat interestingly, Cat 4 dropped into the high end of 105M last night. So this means any TF'ing work to 77 there gets picked up by an LL'er within a few minutes. Just so everyone knows, Cat 4 needs about 100 assignments a day -- they always bite off more than they can chew... Most will be recycled, but some complete. And, also, they're in some ways P-1'ers for the Cat 2 and 3s. :smile:
Almost bought at least a GTX 1650, maybe a 1660, today. I restrained myself, at least for today.
[QUOTE=kladner;538331]Almost bought at least a GTX 1650, maybe1660, today. I restrained myself, at least for today.[/QUOTE]
Please don't even consider the plain 1650; there is such a small price difference to the 1650 Super that it doesn't make sense (over 40% more CUDA cores on the Super version). The exception is if you're limited by having no PCIe power connector, and thus a 75 W maximum power budget... 1660 vs. 1660 Super is different: there the difference is only the memory, which matters in games but not in TF.
[QUOTE=nomead;538342]Please don't even consider the plain 1650, there is such a small price difference to the 1650 Super that it doesn't make sense.[/QUOTE][url]https://www.mersenne.ca/mfaktc.php?filter=1650|1660[/url]
The performance difference is clear, but value comes down to what price you can actually get each card for.
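That perf-per-price comparison can be sketched in a few lines. The prices below are made up, and the GHzd/day throughput numbers are rough placeholders in the spirit of the mersenne.ca mfaktc tables, not exact values:

```python
# Hypothetical value comparison: TF throughput per dollar.
# Both the GHzd/day figures and the prices are illustrative only.
cards = {
    "GTX 1650":       {"ghzd_per_day": 450, "price_usd": 150},
    "GTX 1650 Super": {"ghzd_per_day": 650, "price_usd": 170},
}

# GHz-days of TF work per day, per dollar spent on the card
value_per_dollar = {
    name: c["ghzd_per_day"] / c["price_usd"] for name, c in cards.items()
}

for name, v in value_per_dollar.items():
    print(f"{name}: {v:.2f} GHzd/day per dollar")
```

With these (hypothetical) numbers the Super wins on value too, but a good sale price on either card can flip the result.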
Thanks very much for that tip. I was having a hard time finding CUDA core numbers on the packages I was looking at. Thanks also for explaining the distinction between 1660 types. At this point, I might be looking for a good sale price, as long as it's not from Amazon. I try to work around them as much as possible.
Edit: And thanks, James. I will peruse the ratings before I jump into anything.
[QUOTE=James Heinrich;538343][url]https://www.mersenne.ca/mfaktc.php?filter=1650|1660[/url]
The performance difference is clear, but it comes down to what price you can get each for as to what makes value sense.[/QUOTE] Some of the numbers there are not accurate, I think. For example, the GFLOPS for 1650 Super is way off. See [url]https://en.wikipedia.org/wiki/List_of_Nvidia_graphics_processing_units#GeForce_16_series[/url]
[QUOTE=axn;538348]Some of the numbers there are not accurate, I think. For example, the GFLOPS for 1650 Super is way off.[/QUOTE]Thanks for catching that, the 1650 was 10% too high and the 1650 Super was 30% too low. As always, please call out any errors you may spot in my data.
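For anyone wanting to sanity-check such numbers themselves: theoretical FP32 throughput is just CUDA cores × boost clock × 2 (one fused multiply-add per core per cycle). The core counts and boost clocks below are from NVIDIA's public specs; double-check them before relying on the result:

```python
# Theoretical FP32 GFLOPS = CUDA cores * boost clock (GHz) * 2 (FMA).
def fp32_gflops(cuda_cores, boost_mhz):
    return cuda_cores * (boost_mhz / 1000) * 2

gtx_1650 = fp32_gflops(896, 1665)         # plain GTX 1650
gtx_1650_super = fp32_gflops(1280, 1725)  # GTX 1650 Super

print(f"GTX 1650:       {gtx_1650:.0f} GFLOPS")
print(f"GTX 1650 Super: {gtx_1650_super:.0f} GFLOPS")
```

The core counts alone (1280 vs. 896) bear out the "over 40% more CUDA cores" point made above.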
[QUOTE=chalsall;538011]Yeah... It will be good to do a bit of CPU work on the side (and/or, when GPUs aren't available).[/QUOTE]
Just a quick update on this... Alpha testing is going well. Beta testing will start tomorrow. Please PM me if you're interested in being a beta tester. It will involve a tiny change to the Notebook code. P.S. I both love, and hate, projects like this... MASSIVE feature creep... :chalsall:
[QUOTE=chalsall;538371]Please PM me if you're interested in being a beta tester. It will involve a tiny change to the Notebook code.[/QUOTE]I would love to beta, but I am away from my evil lair for a few days.
GPU72_TF Notebook. Not just for GPUs anymore!!!
OK... So, after finding a stupid greedy regex bug which could result in checkpoint files being lost for (ironically) large candidates, this is now ready for gamma testing. The last step before moving this into full production.
For anyone who would like to try this out, edit your GPU72_TF Notebook code to have this line: [CODE]!wget -qO bootstrap.tgz https://www.gpu72.com/colab/bootstrap_cpu.tgz[/CODE] Basically, just add the "_cpu" string to the line. Then relaunch your Notebook, and it will do CPU work in parallel. Currently it's Cat 2 P-1'ing, but I plan to add other options soon, e.g. Cat 3 and DC P-1'ing.

Note that this only works for those whose Primenet Username the system knows. I'll add a form over the weekend to let people do this themselves -- currently it's manual DB work; PM me if you're eager to try this now.

A CPU is created on Primenet with the same name as the Instance the job is running under, and the results will be submitted to Primenet under it. The checkpoint files are generated and thrown back to the server every ten minutes, so on average only five minutes of work is lost when an Instance is killed.

It is safe to stop a currently running GPU72_TF Session, make this change, and then restart. As always, SPE and/or "hmmm..." things welcomed...
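The "ten-minute checkpoints, five minutes lost on average" arithmetic checks out: if an Instance is killed at a uniformly random moment within a checkpoint interval, the expected lost work is half the interval. A quick Monte Carlo sketch (illustrative only, not part of the GPU72 code):

```python
import random
import statistics

# If an Instance dies at a uniformly random point within a checkpoint
# interval, expected lost work is half the interval: here, ~5 minutes.
CHECKPOINT_MIN = 10

random.seed(42)  # reproducible run
losses = [random.uniform(0, CHECKPOINT_MIN) for _ in range(100_000)]
avg_loss = statistics.mean(losses)

print(f"average work lost: {avg_loss:.2f} minutes")
```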
I'm interested.
I'd like to do DC P-1 work. Will it allow me to provide my own worktodo lines?

Cool!
Wayne

[QUOTE=chalsall;538535]OK... So, after finding a stupid greedy regex bug which could result in checkpoint files being lost for (ironically) large candidates, this is now ready for gamma testing. The last step before moving this into full production.[/QUOTE]
Powered by vBulletin® Version 3.8.11
Copyright ©2000 - 2021, Jelsoft Enterprises Ltd.