Costs

Cost will vary widely depending on: the age, speed, and efficiency of the computing hardware used; local electrical rates, including any applicable taxes; whether the waste heat is a benefit for comfort heating, is vented to the outdoors at no additional cost, or constitutes a load that raises air conditioning needs and cost; and whether equipment is being purchased for the purpose (with some assumed depreciation rate) or was already purchased for other reasons, so that only the possibility of increased wear and tear is considered.

Some ballpark figures (all US$), mostly from my own fleet, for primality testing around 85M, per exponent tested. These include 4-year straight-line depreciation, to zero salvage/resale value, of hardware purchased used or new, plus electricity at US$0.11663/kWh, and assume neither a heating benefit nor a cooling penalty. (A minimal sketch of this cost model follows the figures below.)

GPU, PRP in gpuowl or LL in CUDALucas: around $1.07 (Radeon VII) and $2.29 (RX 480) to $3.23 for modern new AMD or NVIDIA GPUs, up to $4.75 to $9 for old used CUDA compute-capability-2.x GPUs;
CPU: $3.37 for an E5-2670 or E5-2690, up to $6.50 for an i7-7500U based laptop, $6.70 for an i7-8750H based laptop, $7.40 for an X5650 tower, $9.30 for an E5645 tower, $11.40 for an E5520, $19.50 for a Core 2 Duo, and even higher for 32-bit Intel processors. (Very old CPUs can be both too slow to finish within most assignments' expiration limits and cost hundreds of dollars per primality test at 85M, or $3000 to $5500 for a Pentium 133, which would also take about 45 YEARS!)
Price, timings, and wattage for a used Samsung S7 phone running Mlucas 18, provided by ewmayer, yielded around $8.60.

The electrical cost alone ranges from $0.81 (Radeon VII) or $1.71 (GTX 1080) to $8 (Quadro 5000) for the GPUs tested; new laptops $0.72; E5-26x0 $2.20; i3-370M $3.36; X5650 $6.20; E5645 $7.55; E5520 $8.12; Core 2 Duo $12; S7 phone $2.93.
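
For anyone wanting to reproduce or adapt these figures, here is a minimal sketch of the cost model described above (4-year straight-line depreciation plus electricity at a flat rate); the example inputs are illustrative placeholders, not measurements from my fleet.

Code:
# Minimal sketch of the cost model above: 4-year straight-line depreciation
# to zero salvage value, plus electricity at a flat rate. The sample inputs
# below are illustrative placeholders, not measured values.
HOURS_PER_YEAR = 24 * 365.25
DEPRECIATION_YEARS = 4.0

def cost_per_test(hardware_price, test_days, watts, rate_per_kwh):
    """US$ to complete one primality test on continuously running hardware."""
    test_hours = test_days * 24.0
    # Share of the hardware's depreciable life consumed by one test.
    depreciation = hardware_price * test_hours / (DEPRECIATION_YEARS * HOURS_PER_YEAR)
    electricity = (watts / 1000.0) * test_hours * rate_per_kwh
    return depreciation + electricity

# Hypothetical fast-GPU example: $700 card, one day per 85M test, 300 W.
print(f"${cost_per_test(700.0, 1.0, 300.0, 0.11663):.2f} per test")  # ~ $1.32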

Costs only matter if the software will run the desired operands successfully. There was a time when Preda reported good results on a (16GB) Radeon VII, but users were unable to run gpuowl P-1 successfully on GPUs on Windows. CUDAPm1 runs on a variety of NVIDIA hardware ranging from 1 to 11 GB, but is unable to do both stages on any GPU I have tried above p ~ 432,500,000. Prime95 on the FMA3-capable i7-8750H seems to be the best bet for high-p P-1; I have 901M running now.

For my natural gas heating, furnace specs, central AC specs, and utility rates, the heating benefit reduces the net electrical cost by 20.6%, while the cooling load increases it 36% and the non-heating-season sales tax increases it another 5.5%. (Sales tax is not applied to heating fuel or electricity during the heating season here.) These effects combine to make the marginal electrical cost 78% higher in the cooling season than in the heating season (1 + 0.36 + 0.055 = 1.415 times the base cost, versus 1 - 0.206 = 0.794 with the heating credit; 1.415 / 0.794 ≈ 1.78).
Therefore, some systems that are economic to run during the heating season are not when there is no heating benefit, and additional systems become uneconomic during the cooling season.
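
Treating the cooling penalty and sales tax as additive adjustments to the base electrical cost, a quick sketch of that arithmetic:

Code:
# Sketch of the seasonal cost arithmetic above; the cooling penalty and
# sales tax are treated as additive adjustments to base electrical cost.
HEATING_BENEFIT = 0.206   # heating season: waste heat offsets furnace fuel
COOLING_PENALTY = 0.36    # cooling season: added air conditioning load
SALES_TAX = 0.055         # applied outside the heating season here

heating_multiplier = 1.0 - HEATING_BENEFIT               # 0.794
cooling_multiplier = 1.0 + COOLING_PENALTY + SALES_TAX   # 1.415

print(f"{cooling_multiplier / heating_multiplier - 1.0:.0%} higher")  # -> 78% higher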

Using cloud computing is an interesting alternative. It's hard to beat free, as in free trials good for hundreds of hours. Otherwise, costs vary, but around $7 per 85M exponent is feasible at spot rates; that is lower than the electrical cost alone for some of my existing hardware. Some rough data and links related to cloud computing for GIMPS follow.

How-to guide for running LL tests on the Amazon EC2 cloud
https://www.mersenneforum.org/showpo...21&postcount=1
An Amazon EC2 instance with 36 cores, 144 GB RAM, and 2x900 GB SSD is $0.6841 per hour.
2017 cost per primality test at 80M: $6.21 (extrapolates to about $7.05/85M; scaling sketched below)
https://www.mersenneforum.org/showpo...6&postcount=23
2019 EC2 costs of ~$0.019 per core-hour work out to $6.40 to $9.70 for an 89M primality test (so ~$5.80 and up for 84M)
https://www.mersenneforum.org/showpo...37&postcount=2
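
The 80M-to-85M extrapolation above is consistent with total cost scaling roughly as p^2 times a slowly growing log factor (about p iterations, each costing roughly p log p of FFT work); a sketch assuming that scaling:

Code:
import math

# Sketch of exponent scaling for primality-test cost: about p iterations,
# with per-iteration FFT time growing roughly as p * log(p), so total
# cost scales roughly as p^2 * log(p).
def extrapolate_cost(cost, p_from, p_to):
    ratio = p_to / p_from
    return cost * ratio**2 * (math.log(p_to) / math.log(p_from))

print(f"${extrapolate_cost(6.21, 80e6, 85e6):.2f}")  # ~ $7.03, vs. ~$7.05 quoted
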
Google Colaboratory "Colab" (free) https://www.mersenneforum.org/showthread.php?t=24839

M344587487 contemplating providing a PRP testing service at around $5/85M
https://www.mersenneforum.org/showth...138#post512138

https://www.phoronix.com/scan.php?pa...acket-Roll-Out
32 ARM cores @ 3.3 GHz + 128 GB of RAM and 480 GB of SSD storage for $1/hour.
This worked out, per https://www.mersenneforum.org/showpo...9&postcount=23, to 30.73 ms/iter at 84M, an astonishingly costly $717 per 84M exponent (arithmetic sketched below).
Ernst Mayer estimates several instances rather than a single instance would produce better performance and cost/throughput.
Debian 9, Ubuntu 16.04 LTS, and Ubuntu 18.04 LTS are the current operating system options for this Ampere instance type.
Numerous instance types are listed here; note the discounts for reserved and spot instances. https://www.packet.com/cloud/servers/
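
The $717 figure above follows directly from the hourly rate and the measured iteration time; a sketch of that arithmetic, applicable to any metered instance:

Code:
# Per-exponent cost on a metered cloud instance: test time is about
# (ms per iteration / 1000) * p iterations seconds, billed by the hour.
def cloud_cost_per_test(ms_per_iter, exponent, dollars_per_hour):
    seconds = (ms_per_iter / 1000.0) * exponent
    return (seconds / 3600.0) * dollars_per_hour

print(f"${cloud_cost_per_test(30.73, 84_000_000, 1.00):.0f}")  # -> $717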

Google Compute Engine
https://www.mersenneforum.org/showpo...96&postcount=4
free trial
https://cloud.google.com/free/docs/gcp-free-tier

Microsoft Azure
https://www.mersenneforum.org/showthread.php?t=21440

https://www.atlantic.net/cloud-hosting/pricing/
https://www.hetzner.com/cloud
https://www.scaleway.com/pricing/
https://www.ovh.com/world/vps/
https://us.ovhcloud.com/products/ser...ucture-servers

Contrast with personal GPU cost, ~$2 and up per 85M exponent: https://www.mersenneforum.org/showpo...44&postcount=3
Judicious clock and voltage tweaking may improve those numbers. For an example of electrical power variation with clock, see https://www.mersenneforum.org/showpo...1&postcount=52
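
A rough sketch of why such tweaking can pay off, assuming dynamic power scales as frequency times voltage squared while throughput scales with frequency (so energy per test tracks voltage squared); the operating points are illustrative, not measured:

Code:
# Why undervolting/downclocking can cut cost per test: dynamic power scales
# roughly as f * V^2, throughput roughly as f, so energy per test ~ V^2.
# The example operating point below is illustrative, not a measurement.
def relative_energy_per_test(freq_ratio, volt_ratio):
    power = freq_ratio * volt_ratio**2   # P ~ f * V^2 (dynamic power only)
    time = 1.0 / freq_ratio              # test time ~ 1 / frequency
    return power * time                  # energy per test ~ V^2

# e.g. a 10% underclock paired with a 10% undervolt:
print(f"{relative_energy_per_test(0.9, 0.9):.0%} of stock energy")  # -> 81%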


Top of this reference thread: https://www.mersenneforum.org/showpo...89&postcount=1
Top of reference tree: https://www.mersenneforum.org/showpo...22&postcount=1
