[quote=wblipp;169777]Are you aware that Will Edgington already provides this service in the file factoredM.txt?
[URL="http://www.garlic.com/%7Ewedgingt/mersenne.html"]http://www.garlic.com/~wedgingt/mersenne.html[/URL] William[/quote] Yes, but that would require extra effort in the form of cross-lookups. Syd's database is much easier to work with in this case.
No sieve jobs for remote workers?
Hi Markus,
I just noticed something odd about the workers. At the moment I am running the remote worker client on two cores to help clean out a somewhat large backlog of jobs currently in the worker queue. As of this writing there are 11 ECM very high limits jobs and 11 normal-priority jobs (mostly sieve). With all these normal-priority jobs in the queue, I would expect all of the ECM/sieve workers to be completely tied up with the normal-priority sieve tasks. Instead, what I am seeing is this:

- The three local ECM workers are all doing sieve tasks
- The two remote ECM workers (namely my clients) are doing ECM very high limits

Is there some "feature" in the TCP server that's keeping the remote clients from being assigned sieve tasks? :wink: Based on past observation this would seem not to be the case; if memory serves, I've seen your Q6700 remote workers be assigned sieve jobs before.

Also, one other thing you may not be aware of: workers #11 and #12 (both Athlon XPs) seem to be clones of each other, doing duplicate work. Is there something misconfigured in the settings that is causing one of them (presumably #12, since it's not getting any work of its own) to read from the wrong work queue?

Max :smile:
Hey,
Is there a reason why you allow composites in the 70-90 digit range to be processed using sieving and ECM to high limits? These numbers take at most ~4-5 hours to factor using SIQS. It would make more sense to only allow larger composites to be run using ECM high limits. I take this back if by "sieve" you mean using something like msieve to factor the number.
[quote=antiroach;170099]Hey,
Is there a reason why you allow composites in the 70-90 digit range to be processed using sieving and ECM to high limits? These numbers take at most ~4-5 hours to factor using SIQS. It would make more sense to only allow larger composites to be run using ECM high limits. I take this back if by "sieve" you mean using something like msieve to factor the number.[/quote] I think that sieve only appears up to 85 digits. I believe it runs either msieve without -e or yafu with siqs(). I think it doesn't run ECM.
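For context, the size dispatch this reply describes can be sketched in a few lines. The 85-digit cutoff and the msieve/yafu siqs() commands are quoted from the posts; the function names and harness below are hypothetical, not the server's actual code:

```python
import subprocess

def digits(n: int) -> int:
    return len(str(n))

def choose_method(n: int) -> str:
    """Dispatch as the reply describes: sieve jobs only appear up to
    85 digits (SIQS via msieve or yafu), while larger composites go
    to ECM high limits first.  The cutoff is quoted from the post
    above and may be approximate; this dispatcher is illustrative."""
    return "siqs" if digits(n) <= 85 else "ecm"

def run_sieve_job(n: int) -> str:
    """Shell out to msieve without the -e flag, as the reply says,
    so msieve picks SIQS for numbers of this size.  Requires an
    msieve binary on PATH; shown for illustration only."""
    result = subprocess.run(["msieve", "-v", str(n)],
                            capture_output=True, text=True)
    return result.stdout
```

A C92 would thus go to ECM first, which matches the worker assignments described elsewhere in the thread.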
So I wanted to try being a worker. After running the script for a few minutes, I've noticed a behavior similar to the one described in post #174. The server would assign me work on a C92: 120 curves at B1=90k. It appeared to complete the work (it took maybe 4-5 minutes) and then it would just sit there. If I restarted the client I would get some different work, and eventually I would get that C92 again at the same exact B1. Once again it appeared to complete the work and then just sat there. The client never exited as described in post #174. I'm going to hold off on running the client until such issues are resolved.

On a side note, it would be pretty cool if the client output the number of ECM curves remaining to be done at the given B1. By the way, great work on the database!
[quote=10metreh;167596]It just happened again with worker 11, this time for longer.[/quote]
Now they are shown as "No response for ...", so you at least know whether they are still working.

[quote=smh;168050]I'm quite new to Linux, but the worker was very easy to set up. I notice that the worker status reports the composite 1 digit smaller than it actually is. Any plans for a Windows version?[/quote] I use mpz_sizeinbase, but unfortunately I decreased the result by 1. Maybe the source code already compiles on Windows with some changes? I have no idea how to compile on Windows, so perhaps somebody could try it?

[quote=frmky;168056]I tried to upload the 100320 sequence to the database, but it keeps going awry in step 3712. First, one of the factorizations had all factors listed twice. The database then realized something was wrong and "repaired" it. Now, it gives GMP errors.[/quote] That's a big bug that needs attention soon. If you submit lots of factors from several instances in parallel, these things happen. However, after some "repairing" and re-requesting it, it gets back to a working state.

[quote=mdettweiler;168074]Hi Syd, I noticed that a whole boatload of primality proof jobs have started to pile up in the work queue (probably due to someone submitting a bunch of .elf files all at once), which gave me an idea: would it be possible to extend the remote worker client to other work types, such as TF and primality proof? Such a feature could possibly be implemented with a command line switch--say, by default it runs only ECM/sieve/P-1/P+1, but if you add a -t switch it can do trial factoring also, and with -p it can do primality proofs. I know it's somewhat rare that huge amounts of primality proof jobs pile up like this, but it would definitely be helpful when situations like this do occur.

Secondly, since the two respective TF levels only have one worker apiece, if the workers are doing largish ECM very high limits jobs then they tend to sometimes force a TF job to wait a while until that particular worker finishes its current curve (which could take more than a minute at B1=3M). Allowing remote workers to help out with these work types could alleviate that somewhat. Max :smile:[/quote] The limit here is the database server itself: at about 50 "changes" per second (say, new factors or Prp->P), it is at full load. The primality proof worker already takes the database close to that, so another one would give only a minimal improvement.

[quote=10metreh;168117]Another option would be to report that you've proved a number prime.[/quote] Anyone could submit composites as primes; that's the same problem as with reported ECM efforts - not a good idea in my opinion.

[quote=schickel;168123]Just out of curiosity, what are the current stats on the DB: size, prime count, etc....?[/quote] Time to get some statistics:

[code]select type, count(type) from factors group by type;
+------+-------------+
| type | count(type) |
+------+-------------+
| ?    |     5320744 |
| P    |     5715029 |
| Prp  |       18444 |
| C    |     3788124 |
| CF   |    10380969 |
| FF   |     5548848 |
+------+-------------+[/code]

Note that a composite with n factors includes n primes + (n-1) * "CF".

[quote=CRGreathouse;169539]I have a question. Does the database query itself when doing p+1 and p-1 testing? That is, if there's a hard composite 1 away from a number to be factored, but it's already been factored in the database, is that used?[/quote] No, it's not used.

[quote=J.F.;169775]Nevermind, I think I can work around the problem with the format I just found on [URL]http://factorization.ath.cx/search.php?query=Mx&v=x&x=200&EC=1&E=1&Prp=1&P=1&C=1&FF=1&CF=1&of=T&pp=50[/URL][/quote] There is a function I use internally; maybe that's what you want: [url]http://factorization.ath.cx/search.php?simple=12345678[/url] It returns just the factors - no exponents, no HTML.
[quote=mdettweiler;170066]Hi Markus, I just noticed something odd about the workers. At the moment I am running the remote worker client on two cores to help clean out a somewhat large backlog of jobs currently in the worker queue. As of this writing there are 11 ECM very high limits jobs and 11 normal-priority jobs (mostly sieve). With all these normal-priority jobs in the queue, I would expect all of the ECM/sieve workers to be completely tied up with the normal-priority sieve tasks. Instead, what I am seeing is this: -The three local ECM workers are all doing sieve tasks -The two remote ECM workers (namely my clients) are doing ECM very high limits Is there some "feature" in the TCP server that's keeping the remote clients from being assigned sieve tasks? :wink: Based on past observation this would seem not to be the case; if memory serves, I've seen your Q6700 remote workers be assigned sieve jobs before.[/quote] The worker clients can do sieving work, but the server does not assign it yet. There is no reason behind this; it will be changed later on.

[quote=mdettweiler;170066]Also, one other thing you may not be aware of: workers #11 and #12 (both Athlon XPs) seem to be clones of each other, doing duplicate work. Is there something misconfigured in the settings that is causing one of them (presumably #12, since it's not getting any work of its own) to read from the wrong work queue? Max :smile:[/quote] That's an experiment: the work is split up between the cores. However, it's not working as expected.

[quote=antiroach;170099]Hey, Is there a reason why you allow composites in the 70-90 digit range to be processed using sieving and ECM to high limits? These numbers take at most ~4-5 hours to factor using SIQS. It would make more sense to only allow larger composites to be run using ECM high limits. I take this back if by "sieve" you mean using something like msieve to factor the number.[/quote] It starts "msieve -v".

[quote=antiroach;170133]So I wanted to try being a worker. After running the script for a few minutes, I've noticed a behavior similar to the one described in post #174. The server would assign me work on a C92: 120 curves at B1=90k. It appeared to complete the work (it took maybe 4-5 minutes) and then it would just sit there. If I restarted the client I would get some different work, and eventually I would get that C92 again at the same exact B1. Once again it appeared to complete the work and then just sat there. The client never exited as described in post #174. I'm going to hold off on running the client until such issues are resolved. On a side note, it would be pretty cool if the client output the number of ECM curves remaining to be done at the given B1.[/quote] It seems you used the old version of the worker. The new one is here: [url]http://factorization.ath.cx/worker.tar.bz2[/url] It's running quite stably at the moment.

[quote=antiroach;170133]By the way, great work on the database![/quote] Thank you :smile: I tried to make the database recognize sequence merges, but it's much more complex than I expected. Hope to finish it tomorrow!
First of all, GREAT job on the database! It works very smoothly, and I can't believe how much data is stored on your site.
I noticed the database claiming a lie, though: [url]http://factorization.ath.cx/search.php?id=23314886[/url] is called a "CF" when it's clearly "FF". I verified that the factors displayed divide the number, that their product is the number, and that they are individually prime (verified with Pari's APRCL and Pocklington-Lehmer*). How did this come to be, and how can it be 'reset'?

* [2 5 1] [3 2 1] [11 2 1] [264402913 2 1] [1824334609690109676199 2 [2, 3, 1; 3, 2, 1; 13, 2, 1; 37, 2, 1; 71, 2, 1; 474143, 2, 1]]
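The three checks described in this post (each factor is prime, and the product of the prime powers equals the number) are easy to script. A minimal sketch, using deterministic Miller-Rabin as a stand-in for Pari's APRCL (the 12-witness set below is known to be sufficient for n < 3.3*10^24; the helper names are mine, not the database's):

```python
def is_prime(n: int) -> bool:
    """Deterministic Miller-Rabin, valid for n < ~3.3e24 with this
    fixed witness set.  A stand-in for a real primality proof."""
    if n < 2:
        return False
    witnesses = (2, 3, 5, 7, 11, 13, 17, 19, 23, 29, 31, 37)
    for p in witnesses:
        if n % p == 0:
            return n == p
    d, s = n - 1, 0
    while d % 2 == 0:
        d //= 2
        s += 1
    for a in witnesses:
        x = pow(a, d, n)
        if x in (1, n - 1):
            continue
        for _ in range(s - 1):
            x = pow(x, 2, n)
            if x == n - 1:
                break
        else:
            return False
    return True

def verify_factorization(n: int, factors: dict[int, int]) -> bool:
    """Check what the post checks: every listed factor is prime and
    the product of prime powers (factor -> exponent) equals n."""
    prod = 1
    for p, e in factors.items():
        if not is_prime(p):
            return False
        prod *= p ** e
    return prod == n
```

A factorization that passes both checks is genuinely "FF"; a composite sneaking into the factor list, or a wrong exponent, makes the function return False.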
It's now "FF". Is that you who's been putting boatloads of ~C95s into the VHL queue? (I spotted a few that looked like SNFS candidates among them as well.)
I've been putting a lot of 70- to 100-digit (SNFS-possible) and 100- to 130-digit (not SNFS) composites in HL. I mostly avoid VHL; the harder numbers I just factor on my own machine.
Please could whoever is running their own worker at the moment change its name from "acer" to whatever its processor is? A brand name like "Dell" does not say anything about a computer's speed, etc.
[QUOTE=10metreh;171176]Please could whoever is running their own worker ATM change its name from "acer" to whatever its processor is. "Dell" does not say anything about my computer's speed etc.[/QUOTE]
I think whoever's running the worker [SIZE="1"](and it's not me, btw)[/SIZE] has the right to name it whatever they want. :smile: