#1
Aug 2020
79*6581e-4;3*2539e-3
23² Posts
I was looking for cloud computing providers and found Serverspace, which is relatively cheap.
16 cores advertised as "Xeon Gold 2nd gen 3.1 GHz" with 16 GB RAM and 50 GB SSD go for $0.164 / hour or $118.4 / month. Compared to AWS that seems very low. If I got it right, a similar instance would be a1.metal, going for $0.257 / hour or $187.61 / month. Does anyone have experience with such lesser-known providers? Are the cores really fully available all the time? Do they object to 100% utilization of all cores 24/7? Any other recommendations are welcome. It should offer Linux with full SSH access. And I know it's usually cheaper to just have a computer at home, I'm just not sure I'd like the heat and noise, so I'm exploring options.
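As a quick sanity check on those figures: monthly cost is just the hourly rate times the hours billed per month, and providers differ on whether a "month" is 720 or 730 hours, which explains small discrepancies. A minimal sketch (the rates are copied from above; the 730-hour convention is an assumption, not something either provider states here):

```python
# Back-of-the-envelope: hourly rate -> monthly cost.
# 730 h/month is the common AWS convention (24 * 365 / 12); Serverspace's
# quoted $118.4/month implies slightly fewer billed hours than that.
rates = {
    "Serverspace 16 vCPU": 0.164,  # $/hour, as advertised
    "AWS a1.metal": 0.257,         # $/hour, on-demand
}

for name, hourly in rates.items():
    print(f"{name}: ${hourly * 730:.2f} per 730-hour month")
```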
#2
"TF79LL86GIMPS96gpu17"
Mar 2017
US midwest
38 Posts

Quote:
#3 |
Aug 2020
79*6581e-4;3*2539e-3
23² Posts
I talked to their support and they said that under prolonged continuous full use of a core they will throttle the usage. So I pay for 4 vCPUs per hour but don't actually get 4, only maybe 2 or whatever they deem OK.
Is that standard for virtual servers? Is it the same with AWS, for example?
#4
If I May
"Chris Halsall"
Sep 2002
Barbados
2³·3·439 Posts

Quote:
Although... Some empirical data... I /have/ found that M$ Azure's throughput fluctuates considerably under mprime usage. My model is that they don't provision vCores in pairs like other cloud providers do. Thus, I don't use them. If I don't have both threads of a real core, I just don't trust it. I can't model the throughput deterministically, and there are all sorts of possible attack vectors when doing paranoid-level work.
#5 |
Aug 2020
79*6581e-4;3*2539e-3
529₁₀ Posts
OK, so Serverspace is too cheap to be true. I wonder under which circumstances it would make sense to use them. Why would someone book 16 vCPUs and then be OK with only being able to utilize them for short periods or being throttled?
#6 |
Aug 2020
79*6581e-4;3*2539e-3
23² Posts
I talked to Linode and Kamatera and they both said that 100% utilization at all times is against their TOS for shared CPUs. Linode has dedicated CPUs that go for $240/mo ($0.36/h) for 16 cores/32 GB.
If I'm not wrong, it would be cheaper to go with AWS spot instances. Before I dive into it, is it possible to continue the computation after the spot instance is stopped? Does it matter how long it was interrupted?
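For what it's worth, current spot prices can be checked programmatically before committing to anything. A minimal boto3 sketch, assuming AWS credentials are already configured; the region, the a1.metal instance type and the 730-hours-per-month figure are illustrative choices, not recommendations:

```python
# Query recent EC2 spot prices and project a rough monthly cost.
# Requires boto3 and configured AWS credentials.
import boto3

ec2 = boto3.client("ec2", region_name="us-east-1")

resp = ec2.describe_spot_price_history(
    InstanceTypes=["a1.metal"],
    ProductDescriptions=["Linux/UNIX"],
    MaxResults=20,
)

for entry in resp["SpotPriceHistory"]:
    hourly = float(entry["SpotPrice"])
    # ~730 hours in an average month (24 * 365 / 12)
    print(f'{entry["AvailabilityZone"]}: ${hourly:.4f}/h (~${hourly * 730:.2f}/month)')
```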
#7
If I May
"Chris Halsall"
Sep 2002
Barbados
24450₈ Posts

Quote:
It makes no difference how long each instance is interrupted, beyond the normal Primenet expiry rules. Good luck. Reach out if you have any questions.

Last fiddled with by chalsall on 2022-02-12 at 15:25 Reason: s/time of/type of/;
#8 |
"/X\(‘-‘)/X\"
Jan 2013
5611₈ Posts
Not all use cases for a VPS require full CPU all the time. Like my backup VPS: it sits idle almost all the time, except when I'm pumping 1 Gbps to it over WireGuard, which saturates its 4 vCPUs at about 75%. I'm happy to pay less rather than pay for reserved CPUs, as it's really the storage I care about.
#9
"/X\(‘-‘)/X\"
Jan 2013
101110001001₂ Posts

Quote:
#10 |
Aug 2020
79*6581e-4;3*2539e-3
23² Posts
OK, thanks. I still don't really understand the whole spot-instance scenario. While it's running, can I just access it via SSH the same as a normal VPS? So it would be like a VPS that gets shut down every now and then and automatically reboots after a while?
So every time a termination notice is sent, I need a script to run that tidies everything up? And integrating S3 storage doesn't seem too easy; I found a blog post that used a lot of manual configuration without explaining every detail. Is there an easier way, or a step-by-step tutorial to mount S3 storage on boot? I'll also want to use it for general-purpose work like CADO or LLR. In the case of CADO it's probably best to have the server on a local machine (or a cheap 1-core VPS) and the spot instance as a client? But to get a feeling for how to use them, mprime seems like a better first try.

Last fiddled with by bur on 2022-02-14 at 06:54 Reason: The board software inserts line breaks occasionally...
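One way to handle the termination notice without mounting S3 at all is to poll the instance metadata service for the spot interruption notice and, when it appears, copy the mprime save files to S3 with boto3. A rough sketch under a few assumptions: the bucket name and work directory are made up, mprime's periodic checkpoint files are assumed to be on disk, and the metadata call uses the simple IMDSv1 style (IMDSv2 would need a session token first):

```python
# Poll for a spot interruption notice, then upload state files to S3.
# Bucket, paths and polling interval are illustrative, not a recommendation.
import time
import urllib.request
import urllib.error
from pathlib import Path

import boto3

# IMDSv1-style call; if the instance enforces IMDSv2, fetch a token first.
METADATA_URL = "http://169.254.169.254/latest/meta-data/spot/instance-action"
WORK_DIR = Path("/home/ubuntu/mprime")   # hypothetical mprime directory
BUCKET = "my-gimps-checkpoints"          # hypothetical bucket name


def interruption_pending() -> bool:
    """The endpoint returns 404 until AWS schedules an interruption."""
    try:
        with urllib.request.urlopen(METADATA_URL, timeout=2) as resp:
            return resp.status == 200
    except (urllib.error.HTTPError, urllib.error.URLError):
        return False


def upload_state() -> None:
    """Copy every file in the work directory to S3."""
    s3 = boto3.client("s3")
    for path in WORK_DIR.iterdir():
        if path.is_file():
            s3.upload_file(str(path), BUCKET, f"mprime/{path.name}")


if __name__ == "__main__":
    while not interruption_pending():
        time.sleep(5)   # AWS gives roughly a two-minute warning
    upload_state()
```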
#11
"Ed Hall"
Dec 2009
Adirondack Mtns
3²×7×73 Posts

Quote:
How I Create a Colab Session to Run a CADO-NFS Client for a Remote Server

Unfortunately, I have no idea how to set up the server, local or cloud.