Batch jobs using AWS Batch
Posted by GP2, 2016-12-06

In the old days, looking for Mersenne primes meant writing your program on punch cards and submitting a batch job. The next day, or whenever, you'd go pick up your printout and find out that M4423 and M4253 are prime. These days computers have an interactive interface, but you don't really need that interactivity at all for the kind of number crunching we do.

Apparently Amazon is reviving the batch job concept by introducing AWS Batch. It's still in preview, so information is limited, but it sounds like you just specify how much memory you need and how many cores, and the rest is handled automatically.
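To give a rough idea of what "just specify memory and cores" looks like, here is a sketch of an AWS Batch job definition. The `containerProperties` fields (`vcpus`, `memory` in MiB) are the real shape of the API, but the image name, command, and numbers are hypothetical placeholders, and the preview API may differ:

```json
{
  "jobDefinitionName": "ll-test",
  "type": "container",
  "containerProperties": {
    "image": "myrepo/mprime:latest",
    "vcpus": 4,
    "memory": 512,
    "command": ["mprime", "-t"]
  }
}
```

Once a definition like this is registered, individual jobs referencing it would be queued and AWS would find (or launch) capacity to run them.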

Currently, there's often a mismatch between what virtual machine instances offer and what a calculation needs. LL testing needs as many CPUs as possible but very little memory; an ambitious P−1 or ECM stage 2 calculation might need the exact opposite. You can mix and match different types of work to squeeze the most out of an expensive instance with multiple cores and lots of memory, but it's easier if Amazon handles all that for you, scheduling everyone's big and small jobs in some near-optimal way so that you never have to start and stop cloud instances yourself. By using resources more efficiently, and by being flexible about completion time, the price might be lower too.
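To make the mismatch concrete, here is a small sketch of per-workload resource requests in the shape AWS Batch's `containerOverrides` expects (`vcpus`, plus `memory` in MiB). The job names and numbers are illustrative guesses, not benchmarks, and `submit_job` itself is not called here:

```python
# Per-workload resource requests. The figures are hypothetical:
# LL testing is CPU-bound with a tiny working set, while P-1/ECM
# stage 2 can be a single core plus gigabytes of temporaries.
WORK_TYPES = {
    "ll-test":   {"vcpus": 4, "memory": 512},
    "p1-stage2": {"vcpus": 1, "memory": 16384},
}

def container_overrides(work_type):
    """Return a containerOverrides-style dict for the given work type,
    suitable for passing to a Batch submit-job call."""
    req = WORK_TYPES[work_type]
    return {"vcpus": req["vcpus"], "memory": req["memory"]}

# A single 16-vCPU / 30 GiB instance could then host, say, three LL
# tests alongside one stage-2 job; with AWS Batch, that bin packing
# becomes the scheduler's problem rather than yours.
```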

It's hard to say more, since signing up for the preview may require a non-disclosure agreement, but it looks promising. Among other things, it could become trivial to purchase, say, a dozen LL tests in some defined range for a fixed price.