mersenneforum.org  

mersenneforum.org > Great Internet Mersenne Prime Search > News

Old 2020-06-21, 15:55   #67
kriesel
 
 
"TF79LL86GIMPS96gpu17"
Mar 2017
US midwest

1098₁₆ Posts

Quote:
Originally Posted by PhilF
from almost anywhere on the globe.
It will be interesting to see how such a large constellation of satellites affects other space missions, and how the astronomers' concerns get addressed.
Old 2020-06-21, 19:01   #68
kruoli
 
 
"Oliver"
Sep 2017
Porta Westfalica, DE

2²×3²×7 Posts

I really hope they'll cut that down. I remember a German newspaper reporting on new satellites that are much less reflective, but when I looked it up again, I only found this.
Old 2020-06-22, 01:29   #69
preda
 
 
"Mihai Preda"
Apr 2015

10010000011₂ Posts

Quote:
Originally Posted by retina
Not everyone has unlimited bandwidth.

Some people pay by the megabyte.
Yes, I know; I have metered data myself. Not a problem though, I can still upload all the proofs I can produce.
Old 2020-06-23, 19:33   #70
Mark Rose
 
 
"/X\(‘-‘)/X\"
Jan 2013
Ͳօɾօղէօ

2×1,409 Posts

Regarding using AWS, I would implement something like this:

1. Have the PrimeNet server generate a signed URL for uploading to S3. Pass this to the client, and the client uploads straight to S3. Compose the URL from the exponent and some random number to allow multiple uploads for the same exponent (it will happen).
2. Have new upload events on S3 trigger a message via Simple Notification Service (SNS), which then publishes a message to a Simple Queue Service (SQS) queue.
3. Configure an EC2 Auto Scaling Group to scale based on the number of messages in the queue. Make it scale down aggressively to zero when there are no messages, but scale up slowly when there are many in the queue.
4. When the verifier running on the instance is finished, post a result to another SQS queue.
5. Use an AWS Lambda function to process events from that SQS queue and post them to PrimeNet. Decoupling this from the verifier means PrimeNet can go down without losing results and without keeping expensive verifier instances running while waiting for work.
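Step 1 could be sketched in Python using boto3's presigned-URL API. The bucket name, key scheme, and expiry below are illustrative assumptions, not anything PrimeNet has settled on:

```python
import secrets

def proof_key(exponent: int) -> str:
    # Hypothetical key scheme: exponent plus a random suffix, so that
    # duplicate uploads for the same exponent never collide.
    return f"proofs/{exponent}-{secrets.token_hex(8)}.proof"

def presigned_upload_url(bucket: str, exponent: int, expires_s: int = 3600) -> str:
    # generate_presigned_url is the real boto3 call; the client can then
    # PUT the proof file straight to S3 without holding AWS credentials.
    import boto3  # local import so proof_key works even without boto3 installed
    s3 = boto3.client("s3")
    return s3.generate_presigned_url(
        "put_object",
        Params={"Bucket": bucket, "Key": proof_key(exponent)},
        ExpiresIn=expires_s,
    )
```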

I expect much of the above could be implemented with simple Python or JS outside of the verifier.
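As one piece of that glue, the step-5 forwarder could be a short SQS-triggered Lambda handler. The PrimeNet endpoint URL and result format here are made up for illustration:

```python
import json
import urllib.request

PRIMENET_REPORT_URL = "https://example.invalid/primenet/report"  # hypothetical endpoint

def post_result(result: dict) -> None:
    # Raises on network failure; with an SQS trigger, an unhandled
    # exception returns the message to the queue for a later retry,
    # so a PrimeNet outage loses nothing.
    req = urllib.request.Request(
        PRIMENET_REPORT_URL,
        data=json.dumps(result).encode(),
        headers={"Content-Type": "application/json"},
    )
    urllib.request.urlopen(req)

def handler(event, context, post=post_result):
    # AWS Lambda entry point; an SQS trigger delivers records in
    # event["Records"], each with the message body as a string.
    for record in event["Records"]:
        post(json.loads(record["body"]))
```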

Uploads to S3 have free bandwidth but cost $0.000005 per PUT request, so 10,000 results per day would cost a couple of dollars a month. Transferring to the EC2 instances is free.

Storage costs get expensive over time. If each result is 100 MB, that's about 1 TB per day. That's a trivial amount of bandwidth for AWS to handle, but it will soon cost serious money: even using the Infrequent Access tier of S3 Intelligent-Tiering, each month of results would add roughly $400/month in recurring expenses. Even cheaper alternatives like Backblaze B2 would get expensive quickly. So holding on to the results long-term may not be feasible.
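The arithmetic behind those figures, with the per-GB price as an assumption (approximate S3 Intelligent-Tiering Infrequent Access pricing):

```python
results_per_day = 10_000
proof_mb = 100
ia_usd_per_gb_month = 0.0125  # assumed IA-tier price, $/GB-month

put_usd_per_month = results_per_day * 30 * 0.000005          # $1.50: "a couple dollars"
gb_added_per_month = results_per_day * proof_mb / 1000 * 30  # 30,000 GB (~1 TB/day)
usd_per_month_added = gb_added_per_month * ia_usd_per_gb_month

# Each month of retained results adds ~$375/month of recurring storage cost.
print(put_usd_per_month, usd_per_month_added)
```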

Last fiddled with by Mark Rose on 2020-06-23 at 19:44
Old 2020-06-23, 21:59   #71
chalsall
If I May
 
 
"Chris Halsall"
Sep 2002
Barbados

23DF₁₆ Posts

Quote:
Originally Posted by Mark Rose
Regarding using AWS, I would implement something like this:


Always listen to guys who've been there; done that.
Old 2020-06-23, 23:03   #72
Prime95
P90 years forever!
 
 
Aug 2002
Yeehaw, FL

7,043 Posts

Quote:
Originally Posted by chalsall


Always listen to guys who've been there; done that.
Indeed.

Pavel Atnashev has improved our original concept of the workflow (see post 46). We think the PrimeNet server can handle the now-reduced load, and we are confident that a malicious user cannot sign up for verification work and screw up the system.

One point I really like from Mark's post is asking the server for a secure URL to upload the proof file. Should we need to switch to an AWS or other solution because of CPU or bandwidth resources at the PrimeNet server, we can redirect uploads to the AWS or other server -- no need to get clients to upgrade prime95 / gpuowl / whatever.

The AWS workflow Mark gave looks nice, especially with the ability to take full advantage of cheap spot pricing. It's good to have a workable backup plan.

Last fiddled with by Prime95 on 2020-06-23 at 23:04
Old 2020-06-24, 02:19   #73
kriesel
 
 
"TF79LL86GIMPS96gpu17"
Mar 2017
US midwest

2³·3²·59 Posts

Quote:
Originally Posted by Mark Rose
So 10,000 results per day would cost a couple of dollars a month. Transferring to the EC2 instances is free.

Storage costs get expensive over time. If each result is 100 MB, that's about 1 TB per day. That's a trivial amount of bandwidth for AWS to handle, but it will soon cost serious money: even using the Infrequent Access tier of S3 Intelligent-Tiering, each month of results would add roughly $400/month in recurring expenses. Even cheaper alternatives like Backblaze B2 would get expensive quickly. So holding on to the results long-term may not be feasible.
Two questions on the rough costing.
1) George has stated an expected proof size of 150MB, and IIRC it depends on the exponent, so it will increase (slightly) as exponents do. Is 100MB used for simplicity, or for some other reason?

2) What is the basis for costing a daily volume of 10,000, when the past 6 months have run at 988/day or less and are trending downward (recent peak 729)? As an engineer I'm a believer in reserve capacity, but only to a moderate level. https://www.mersenne.org/primenet/graphs.php
After full deployment and conversion, and completion of the leftover DC work, and ignoring the few percent that goes into verification, the processing capacity that had been going to DC can go toward first-time PRP tests instead, but that will only increase the test rate (and therefore the proof rate) by ~20-30%.

Last fiddled with by kriesel on 2020-06-24 at 02:27
Old 2020-06-26, 06:11   #74
GalebG2
 
Jun 2020

7 Posts

Speaking of storage prices, is it just me, or has the technology stagnated (or at least progressed slowly) over the last couple of years? From what I see, there has been gigantic growth in RAM and CPU speed, but the growth in storage options has not been nearly as fast, and that's why we're now stuck with high data-storage fees.
Old 2020-07-04, 03:10   #75
Prime95
P90 years forever!
 
 
Aug 2002
Yeehaw, FL

7,043 Posts

Prime95 can now generate proofs. It can't upload or verify them yet, but it is a start.

Some interesting data and a decision to be made.

For a 100Mbit number, the optimal proof power is 10, requiring the proof generator and verifier to do 182K squarings. Temporary disk space is 12.8GB. Proof file size is 138MB.

The power=9 proof is almost as good at 239K squarings, 6.4GB of disk space, and a 125MB proof file. The difference of 57K squarings is only 0.06% of the cost of a full PRP test.

What about power=8? 413K squarings, 3.2GB of disk space, and a 113MB proof file. The 231K extra squarings are 0.23% of a PRP test.


So the question is: what should prime95's default setting be? The optimize-to-the-max in me says the default should be power=9. The minimal-impact-on-the-average-user in me says go with a power=8 default.

I think the biggest impact on the average user is the disk space consumed. Power=8 will result in fewer proofs lost to disk-full errors. The cost is just 0.17% of a PRP test.

I think the preferences dialog box needs a "Max disk space each worker can use" setting. From that I can derive the best power setting, but most users will never change this preference, so the question just morphs into what the default for the max GB of disk should be.



NOTE: As exponents increase, squarings, disk space, and proof size increase roughly linearly.
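The figures above fit a simple pattern for a p-bit exponent: one interim residue is p/8 bytes (~12.5 MB at 100M bits), the temporary files hold 2^power residues, and the proof file holds power+1 of them. A sketch of that relationship, including deriving the largest power that fits a disk cap; this is my inference from the quoted numbers, not prime95's actual code:

```python
import math

def residue_mb(exponent_bits: int) -> float:
    return exponent_bits / 8 / 1_000_000  # one interim residue, decimal MB

def temp_disk_gb(exponent_bits: int, power: int) -> float:
    return 2**power * residue_mb(exponent_bits) / 1000  # 2^power residues

def proof_file_mb(exponent_bits: int, power: int) -> float:
    return (power + 1) * residue_mb(exponent_bits)  # power+1 residues

def best_power(exponent_bits: int, max_disk_gb: float) -> int:
    # Largest power whose temporary files fit under the worker's disk cap.
    return int(math.floor(math.log2(max_disk_gb * 1000 / residue_mb(exponent_bits))))

# 100M-bit number: power=10 -> 12.8 GB temp disk, ~137.5 MB proof;
# a 6.5 GB disk cap would select power=9.
```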

Last fiddled with by Prime95 on 2020-07-04 at 04:10 Reason: Changed NOTE text
Old 2020-07-04, 03:21   #76
retina
Undefined
 
 
"The unspeakable one"
Jun 2006
My evil lair

2×3²×313 Posts

My preference is for much less disk space and more compute.

Some of my crunching machines don't even have disks; they are just network-booted into memory.

I think everybody has been optimising for compute, not space usage, so disk space won't be in such abundance for many people.
Old 2020-07-04, 03:30   #77
Prime95
P90 years forever!
 
 
Aug 2002
Yeehaw, FL

7,043 Posts

Quote:
Originally Posted by retina View Post
Some of my crunching machine don't even have disks, they are just network booted into memory.
I understand. My "dream machine" builds of 5 years ago have 32GB disks feeding 5 (formerly 7) CPUs.


This forum has received and complied with 0 (zero) government requests for information.

Permission is granted to copy, distribute and/or modify this document under the terms of the GNU Free Documentation License, Version 1.2 or any later version published by the Free Software Foundation.
A copy of the license is included in the FAQ.