I had assigned a sieving job to a worker (worker 11), but before the sieve finished, the worker went offline. However, the number remained assigned to that worker, although others were free.
Edit: It then resumed and finished, but that is an avoidable problem. |
[QUOTE=10metreh;167562]I had assigned a sieving job to a worker (worker 11), but before the sieve finished, the worker went offline. However, the number remained assigned to that worker, although others were free.
Edit: It then resumed and finished, but that is an avoidable problem.[/QUOTE] It just happened again with worker 11, this time for longer. |
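The stuck-assignment problem above could be handled server-side with a heartbeat timeout: if a worker stops checking in, its job goes back into the open queue for someone else to pick up. A minimal sketch of that idea, assuming a simple in-memory scheduler (all class and method names here are illustrative, not factordb's actual code):

```python
import time

TIMEOUT = 300  # seconds without a heartbeat before a worker counts as offline

class Scheduler:
    def __init__(self):
        self.queue = []      # unassigned jobs
        self.assigned = {}   # job -> worker_id
        self.last_seen = {}  # worker_id -> timestamp of last heartbeat

    def heartbeat(self, worker_id):
        self.last_seen[worker_id] = time.time()

    def assign(self, worker_id):
        # hand the next queued job to this worker
        self.heartbeat(worker_id)
        if self.queue:
            job = self.queue.pop(0)
            self.assigned[job] = worker_id
            return job
        return None

    def requeue_stale(self):
        # jobs held by silent workers go back to the open queue
        now = time.time()
        for job, worker in list(self.assigned.items()):
            if now - self.last_seen.get(worker, 0) > TIMEOUT:
                del self.assigned[job]
                self.queue.append(job)
```

Run periodically, `requeue_stale()` would free worker 11's sieve job for the idle workers instead of leaving it pinned until the original worker comes back.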
I'm quite new to Linux, but the worker was very easy to set up.
I notice that the worker status reports the composite 1 digit smaller than it actually is. Any plans for a Windows version? |
[quote=smh;168050]I notice that the worker status reports the composite 1 digit smaller than it actually is.[/quote]
I bet the code was just int(log(n)). An easy mistake to make. There's (or possibly there was) the same problem in factMsieve.pl: the "prototype def-par.txt line" always reports the size 1 digit smaller than it should be. |
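The off-by-one is easy to demonstrate: `int(log10(n))` gives the digit count minus one, and even `int(log10(n)) + 1` can misbehave for very large integers because of floating-point rounding. A small illustration (a guess at the kind of bug involved, not the actual factordb code):

```python
from math import log10

def digits_buggy(n):
    # int(log10(n)) is always one short: log10(100) == 2.0, but 100 has 3 digits
    return int(log10(n))

def digits_correct(n):
    # exact for any positive integer, no floating point involved
    return len(str(n))
```

For big numbers `len(str(n))` (or a binary-search on powers of ten) is the safe choice; `log10` of a several-hundred-digit integer loses precision near exact powers of 10.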
I tried to upload the 100320 sequence to the database, but it keeps going awry at step 3712. First, one of the factorizations had every factor listed twice. The database then realized something was wrong and "repaired" it; now it gives GMP errors.
|
As there are quite a number of aliquot sequences in the database: Does the database check for confluences?
|
Yes and no.
Yes, because the database stores numbers and their factors, so in the case of a confluence it will already have the factors of the next n steps in a sequence and will report them. From the user's point of view, those steps will appear to have been found instantly, for free.

No, because the database doesn't know that a confluence has just occurred, and doesn't report it. For example, 306 has a confluence with 276, but the database doesn't know that; it reports 306 as open with 1635 steps. It would be nice if the database classified it as "Sequence ended, reason: confluence with 276", but that may not be easy to add. |
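One way a database could detect confluences is to record which sequence first produced each value, then flag a merge when another sequence hits a value it doesn't own. A toy sketch of that idea (the schema and function names are illustrative, not factordb's actual design; the divisor sum uses plain trial division, which is fine for small demo values):

```python
def sigma_minus_n(n):
    # sum of proper divisors of n, by trial division
    total = 1 if n > 1 else 0
    d = 2
    while d * d <= n:
        if n % d == 0:
            total += d
            if d != n // d:
                total += n // d
        d += 1
    return total

def iterate(start, steps, seen):
    """Walk an aliquot sequence, recording each value's owner in `seen`.
    Report a confluence when a value already belongs to another sequence."""
    n = start
    for _ in range(steps):
        if n in seen and seen[n] != start:
            return f"confluence with {seen[n]} at {n}"
        seen.setdefault(n, start)
        n = sigma_minus_n(n)
        if n == 0:
            return "terminated"
    return "open"
```

With this bookkeeping, walking 276 first and then 306 immediately reports the merge, since both sequences reach 396 after one step.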
Hi Syd,
I noticed that a whole boatload of primality proof jobs has started to pile up in the work queue (probably due to someone submitting a bunch of .elf files all at once), which gave me an idea: would it be possible to extend the remote worker client to other work types, such as TF and primality proofs? This could be implemented with command line switches: by default the client runs only ECM/sieve/P-1/P+1, but with a -t switch it also does trial factoring, and with -p it does primality proofs. I know it's somewhat rare for huge numbers of primality proof jobs to pile up like this, but it would definitely be helpful when they do.

Secondly, since the two TF levels have only one worker apiece, when those workers are running largish ECM jobs with very high limits, a TF job can be forced to wait until the worker finishes its current curve (which can take more than a minute at B1=3M). Allowing remote workers to help with these work types would alleviate that too.

Max :smile: |
[quote=mdettweiler;168074]Hi Syd,
I noticed that a whole boatload of primality proof jobs have started to pile up in the work queue (probably due to someone submitting a bunch of .elf files all at once), which gave me an idea: would it be possible to extend the remote worker client to other work types, such as TF and primality proof? Such a feature could possibly be implemented with a command line switch--say, by default it runs only ECM/sieve/P-1/P+1, but if you add a -t switch it can do trial factoring also, and with -p it can do primality proofs. I know it's somewhat rare that huge amounts of primality proof jobs pile up like this, but it would definitely be helpful when situations like this do occur. Secondly, since the two respective TF levels only have one worker apiece, if the workers are doing largish ECM very high limits jobs then they tend to sometimes force a TF job to wait a while until that particular worker finishes its current curve (which could take more than a minute at B1=3M). Allowing remote workers to help out with these worktypes could alleviate that somewhat. Max :smile:[/quote] Another load are now coming in - and the queue is getting bigger every second (25000 right now). It's all primality tests, and I don't know where they're coming from unless it's Frank putting on loads of .elf files. Another option would be to report that you've proved a number prime. |
[QUOTE=10metreh;168117]Another load is now coming in, and the queue is getting bigger every second (25000 right now). It's all primality tests, and I don't know where they're coming from unless it's Frank uploading loads of .elf files. Another option would be to let the worker report that it has proved a number prime.[/QUOTE]
It's just me dumping a few elf files in. Don't worry. It's only taking up one worker, and it'll clear shortly. :smile: |
[QUOTE=10metreh;168117]Another load is now coming in, and the queue is getting bigger every second (25000 right now). It's all primality tests, and I don't know where they're coming from unless it's Frank uploading loads of .elf files. Another option would be to let the worker report that it has proved a number prime.[/QUOTE]Don't look at me. I have every intention of carving out a dedicated niche for an aliquot server, so I'm not going to load Syd's DB down with a bunch of duplicate stuff...
|