strange thing happening in factordb
ok, it seems that someone is feeding factordb with tens of thousands of 41-digit composites... a few hours ago, it went from 15,000 41-digit numbers down to 0, and now there are about 30k new 41-digit composites... Syd, please stop this. I'm using the yafu script to help diminish that number, but that won't be enough.
|
I saw that too, methinks there is an error somewhere in the elves; all these numbers are of the same form, and when clicking on "more info" they all show as factors of 217104448554459483532131214896581817097980524333470831788895962466810110829472107
which is in fact = 3331 · 4746402517<10> · 931955472496582424848090961<27> · 14734464651098101010101014734464646510981<41> All the 41-digit numbers that appear there are just consecutive odd numbers counted up from the C41 above. Smells fishy. edit: they are down to 13k from 22k "pieces" in the last 10 minutes. I put my "yoyo" on them too, so they will be gone in 10-15 minutes. |
While the p10 and p27 look reasonably natural, the p41 is quite obviously artificial. Very suspicious.
|
Look [url=http://www.mersenneforum.org/showpost.php?p=282846&postcount=547]here[/url] and [url=http://www.mersenneforum.org/showpost.php?p=282847&postcount=548]here[/url] and you see what's (who's) going on there. It's not the first time.
|
4800 left, I stopped my "yoyo". Anyhow, the server was handling them much faster than it could transfer them to me; I did about 100 and most of them ended up as "factors already known", so it was futile.
I wish we could get rid of C100 and higher at this speed... :smile: edit: Now, I take this opportunity to remind Syd about displaying the last two digits of numbers with more than 50 digits... If he ever read [URL="http://www.mersenneforum.org/showpost.php?p=280505&postcount=1291"]this post of mine[/URL], that update would be very-very-very useful for aliquots. Pretty-pretty-please! 2800 left |
[QUOTE=kar_bon;282890]Look [URL="http://www.mersenneforum.org/showpost.php?p=282846&postcount=547"]here[/URL] and [URL="http://www.mersenneforum.org/showpost.php?p=282847&postcount=548"]here[/URL] and you see what's (who's) going on there. It's not the first time.[/QUOTE]
Idiot! Now he has started with 40 digits. Can anyone block his IP? Or better would be implementation of the "user accounts" part, which seems to be an orphan on the page, and only allow registered users to create IDs? We would have nothing to lose... A reasonable policy could then be enforced on user accounts. Anyhow, there is only a small bunch of people who are serious about this activity and they report most of the factors... the DB will lose nothing by forbidding anonymous reports. edit: always 50000 numbers, now already down to 45k; he has a script for it. |
[QUOTE=LaurV;282892]edit: always 50000 numbers, now already down to 45k, he has a script for it.[/QUOTE]
Only Syd can block the IP. The two posts I gave were originally in the [url=http://www.mersenneforum.org/showthread.php?goto=newpost&t=13636]PARI's commands[/url] thread, so perhaps he uses that (see also some posts above those two). |
Looking [url=http://www.mersenneforum.org/showpost.php?p=273465&postcount=8]here[/url] he's doing it like [url=http://factorization.ath.cx/index.php?query=1111111111111111111111111111110000000731*%281111111111111111111111111111110000000731%2Bk%29&use=k&k=1&VP=on&VC=on&EV=on&PR=on&FF=on&PRP=on&CF=on&U=on&C=on&perpage=20&format=1&sent=Show]this[/url]:
1111111111111111111111111111110000000731*(1111111111111111111111111111110000000731+k) for odd k-values. In the next pages, all numbers are factored (79 digits). |
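By construction these submissions are trivial: every entry is N·(N+k) for odd k, so the C40 base N divides each of them with no factoring effort at all. A minimal Python sketch of the pattern (the value of N is taken from the query URL above):

```python
# The flooded entries have the form N*(N+k) for odd k, where N is the
# 40-digit base from the query above. Each one therefore has N as a
# trivial, already-known factor -- no real factoring work is involved.
N = 1111111111111111111111111111110000000731

def flooded(k_max):
    """Yield the composites N*(N+k) for each odd k up to k_max."""
    for k in range(1, k_max + 1, 2):
        yield N * (N + k)

for m in flooded(9):
    assert m % N == 0   # the base is always a factor, by construction
```

Since N has 40 digits, every such product has 79 digits, matching the fully factored 79-digit numbers seen on the next pages.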
Is it down?
|
Still down.
"Down for ~2 hours (full backup, optimizing tables, ...)" |
It's back up, but there is a problem:
[URL]http://www.factordb.com/sequences.php?se=1&eff=2&aq=211122518&action=last20&fr=0&to=100[/URL] Why aren't the primes <300 digits being proven? Usually they go from PRP to P in a matter of seconds. Also, why aren't there any workers? [URL]http://www.factordb.com/status.php[/URL] Oh my! The problem is worse than I thought: [URL]http://www.factordb.com/listtype.php?t=1&mindig=1&perpage=1000&start=0[/URL] Why haven't these been proven yet? |
If you look at the status page, you will see there are zero workers connected. It's probably related to the database having been down for a while - the workers haven't been restarted yet.
I'm more surprised by the PRPs from 300 to 800 digits. These seem to have a wide range of forms, but all have a creation date of today. |
There are 7 workers connected now and all the PRP's <300 digits are now proven. :)
|
and here he goes again, this time with 35-digit composites....
|
[QUOTE=firejuggler;282969]and here he goes again, this time with 35 digits composite....[/QUOTE]
False start. He has settled in at 61 digits now. |
It is apparent to any reasonable person that factoring all (say) 41-digit numbers will saturate and shut down the database (or a database of any size). It amounts to a DoS attack, and such a user should be banned from factorDB for a significant [URL="http://en.wikipedia.org/wiki/Denial-of-service_attack#Legality"]period of time[/URL].
|
[QUOTE=Batalov;282983]It amounts to a DoS attack ....[/QUOTE]
Hmm, and kar_bon appears to have found some useful documentation. |
So much disk space and bandwidth wasted - what a pity.
I tried to stop this several times and blocked quite a lot of IPs; most of them seemed to be proxy or Tor exit node IPs. I don't know what to do about this at the moment! |
[QUOTE=Syd;283041]So much disk space and bandwidth wasted - what a pity.
I tried to stop this several times and blocked quite a lot ip´s, most of them seemed to be proxy or tor exit node ip´s. I dont know what to do about this at the moment![/QUOTE]Put a Captcha on anything that tries to write to the database? |
[QUOTE=Syd;283041]So much disk space and bandwidth wasted - what a pity.
I tried to stop this several times and blocked quite a lot ip´s, most of them seemed to be proxy or tor exit node ip´s. I dont know what to do about this at the moment![/QUOTE] Require a login to write to the database? Then bad behavior can be stopped by deleting the user's account. Re: Xilman Put the captcha in the user creation process... then at least you only have to do it once and folks with automated (but legitimate) hooks into the database (bchaffin's workers?) won't be affected. |
[QUOTE=Syd;283041]I dont know what to do about this at the moment![/QUOTE]
If I may make some suggestions: - Make each user submit an e-mail address to which a confirmation e-mail is sent. Only once the user confirms the secret (I use an MD5 string of (truly) random data) are they given access. (Warning: this can be used to create annoying e-mails to third parties; limit one sign-up attempt per IP.) - Use "CAPTCHAs" on both the sign-up page and the confirmation page. - Establish a "trust" level for each user. Only after a user has proven they are legitimate (by submitting valid and valuable data) are they able to submit large amounts of data. Unfortunately, this kind of behaviour is common on the "Wild Wobbly Web".... :sad: |
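The "limit one sign-up attempt per IP" and daily-quota ideas floated in this thread amount to a simple per-IP counter that resets on a time window. A hypothetical Python sketch of such a quota (the names, the 200/day limit, and the reset policy are illustrative, not anything factordb actually implements):

```python
import time
from collections import defaultdict

# Hypothetical per-IP daily quota, as suggested in the thread: each
# address may create at most MAX_IDS_PER_DAY new IDs before refusal.
MAX_IDS_PER_DAY = 200
WINDOW = 24 * 60 * 60  # seconds in a day

_usage = defaultdict(lambda: [0, 0.0])  # ip -> [count, window_start]

def allow_new_id(ip, now=None):
    """Return True if this IP may create another ID in the current window."""
    now = time.time() if now is None else now
    count, start = _usage[ip]
    if now - start >= WINDOW:      # window expired: start a fresh one
        _usage[ip] = [1, now]
        return True
    if count < MAX_IDS_PER_DAY:
        _usage[ip][0] += 1
        return True
    return False                   # quota exhausted for this window
```

A real deployment would persist the counters and contend with the proxy/Tor-exit problem Syd mentions later in the thread, but the policy itself is this simple.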
force a log-in to write anything in the DB, and put a low limit on how many entries you can 'write' in the DB, something like 200 IDs per day.
edit: oh well, I'm late; follow bsquared's and chalsall's suggestions! |
or Laurv's suggestion :P
[QUOTE=LaurV;282892]... implementation of the "user accounts" part, which seems to be an orphan on the [factorDB] page, and only allow registered users to create [limited numbers of] ID's [per day]? <snip> A reasonable policy could then be enforced on user accounts.[/QUOTE] |
Is any of this why the db will no longer accept my primo certificates, zipped or singular?
How would any of the above suggestions affect the submission of factors via Aliqueit? Thanks for all the work keeping the db running. |
While factordb is not a part of mersenneforum, I consider behaviour like this - crapflooding a database that so many of our members use - a serious enough offense to warrant a permanent ban from the forum. Goodbye, cmd.
|
Additional primo certificate trouble:
I also cannot d/l new numbers to work. Two separate machines that were tested won't open the zips: (first machine) [code] Archive: ../Math/primo/1345/primo_batch_1345.zip [../Math/primo/1345/primo_batch_1345.zip] End-of-central-directory signature not found. Either this file is not a zipfile, or it constitutes one disk of a multi-part archive. In the latter case the central directory and zipfile comment will be found on the last disk(s) of this archive. zipinfo: cannot find zipfile directory in one of ../Math/primo/1345/primo_batch_1345.zip or ../Math/primo/1345/primo_batch_1345.zip.zip, and cannot find ../Math/primo/1345/primo_batch_1345.zip.ZIP, period. [/code]and (second machine) [code] 7-Zip (A) [64] 9.20 Copyright (c) 1999-2010 Igor Pavlov 2010-11-18 p7zip Version 9.20 (locale=en_US.UTF-8,Utf16=on,HugeFiles=on,2 CPUs) Error: ../Downloads/primo_batch_1345.zip: Can not open file as archive Errors: 1 [/code]P.S. I was logged in for the previous trouble with uploading certificates, but not for these downloading attempts. Thanks for all the work you put into keeping the db running. |
I also can't download batches of primo work.
Also, can I suggest increasing the automatic proving bound to something higher than 300 once the current backlog is done. Last night I increased it to 380 with maybe an hour's processing. All the original work is done up to past 1300 digits and just needs maintaining. Once I can download input files again, I will put some effort into raising it further. |
I suspect the massive pointless inputs are being generated mostly through the "Factor Tables" pages. There were several posts about doing this a few months back, including a comment that it was addictive. If I'm correct, perhaps the impact could be limited by not creating new IDs from these pages - or perhaps limiting these pages to logged-in and blockable users.
|
I wonder if the Primo problems may be connected to the large number of PRPs between 300 and 800 digits that showed up when the database returned from downtime? I suspect table checking turned up some problems - perhaps corrupted certificates, or perhaps they got overlooked originally.
|
PRP zips are back to normal
|
Thanks everyone!
I just tracked down the error with the certificate files. The disk was filled up with logfiles! I only keep the last week's worth, but that was too much this time, so there was no space left for zipping/unzipping the files. My intention with this database was to keep it open to everyone, so I'll allow just a few new IDs per IP without anything extra; above that: some little captcha, or: log in. I've already raised the limits for a lot of accounts. |
Thanks again Syd, for the time you spend on this
|
Started the first 300 prps
|
All's fine here. Uploaded and downloaded some certificate work.
Thanks, Syd. Appreciate all your efforts. |
[QUOTE=akruppa;283052]While factordb is not a part of mersenneforum, I consider behaviour like this - crapflooding a database that so many of our members use - a serious enough offense to warrant a permanent ban from the forum. Goodbye, cmd.[/QUOTE]
If C40s can wreck it, I wonder what all the C126+'s I've got in my current sequence are doing. I've been working this sequence for 37 lines and all are C126 or greater. |
Your C126s are not being handled automatically, and there are only dozens of them rather than tens of thousands.
|
[QUOTE=fivemack;283104]Your C126s are not being handled automatically, and there are only dozens of them rather than tens of thousands.[/QUOTE]
It feels like it from how long they take sometimes. If I kept all my data collection stuff properly, it takes almost 400 hours to crack a low C127 by what I have written down, but I haven't kept records of timings for all lines. |
[QUOTE=science_man_88;283105]feels like it from how long they take some times if I kept all my data collection stuff properly it takes almost 400 hours to crack a low C127 by what I have written down but I haven't kept records of timings for all lines.[/QUOTE]
I don't know now, because I calculated that I've been doing this sequence barely 400 hours but have got 37 lines mostly completed. But I recorded the total time values the files gave me, so I may be confused. |
[QUOTE=Syd;283041]I dont know what to do about this at the moment![/QUOTE]
Call your ISP and tell them you are under a denial-of-service attack. They should have the ability to detect and stop this sort of thing before you even notice it. Also, while this may not immediately help the problem, make the code used by the database public. It's much easier to suggest improvements if people can see how things are implemented currently. |
[QUOTE=Random Poster;283140]Also, while this may not immediately help the problem, make the code used by the database public. It's much easier to suggest improvements if people can see how thing are implemented currently.[/QUOTE]Careful, that's a double-edged sword.
It's much easier to find ways to cause damage if people can see how things are implemented currently. |
[QUOTE=xilman;283142]Careful, that's a double-edged sword.
It's much easier to find ways to cause damage if people can see how things are implemented currently.[/QUOTE] Not if things are implemented properly; if you are afraid to let people see your code, you shouldn't expose it to the Internet in the first place. Besides, the database is in severe need of independent code review; just yesterday I saw a composite of the form 9^x-2^x without any known factors, which is ridiculous since 7 is both an algebraic factor and a single-digit prime. Since the database apparently uses yafu, it should at least let yafu do its usual trial division on every entered number. |
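The algebraic point here: a − b always divides a^x − b^x, because a ≡ b (mod a − b), so a^x ≡ b^x (mod a − b). In particular 9 − 2 = 7 divides every number of the form 9^x − 2^x, and a quick check confirms it:

```python
# a - b divides a^x - b^x for every x >= 1, since a ≡ b (mod a-b)
# implies a^x ≡ b^x (mod a-b). In particular 9 - 2 = 7 divides
# 9^x - 2^x, so a "no known factors" entry of that form really is
# a database oversight, as the post argues.
def has_algebraic_factor(a, b, x):
    """Check that a - b divides a**x - b**x."""
    return (a**x - b**x) % (a - b) == 0

assert all(has_algebraic_factor(9, 2, x) for x in range(1, 50))
```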
[QUOTE=Random Poster;283146]Not if things are implemented properly; if you are afraid to let people see your code, you shouldn't expose it to the Internet in the first place.[/QUOTE]True, but the critical word is the first "if" in your statement above.
If the code was implemented properly, we wouldn't be having this discussion in the first place. |
All PRPs proved up to 1300 digits. I'm pretty certain it shouldn't take more than an hour or two of processing a day to keep up most of the time.
|
For factoring the database composites automatically, there's a Perl script available. Maybe someone could update it to use primo.
[code]#!/bin/perl
use warnings;
use strict;
use LWP::Simple;

while (1) {
    print "get composites\n";
    my $rand   = int(rand(50)) + 13;
    my $mindig = 90;
    my $contents = get("http://factorization.ath.cx/listtype.php?t=3&mindig=$mindig&perpage=1&start=$rand&download=1");
    if (!defined $contents or $contents =~ /[a-z]/) {
        print "Error, no composites fetched\n";
        sleep(60);
        next;    # don't try to split an error page or an undefined response
    }
    my @composites = split(/\s/, $contents);
    foreach my $composite (@composites) {
        print "Factoring " . length($composite) . " digits: $composite\n";
        my @results;
        open(YAFU, "yafu siqs($composite) -p -v -threads 2|") or die "Couldn't start yafu!";
        while (<YAFU>) {
            print "$_";
            chomp;
            if (/^[CP].*? = (\d+)/) {
                push(@results, $1);
                print "*****\n";
            }
        }
        close(YAFU);
        if (scalar(@results) > 0) {
            print "===========================================================================\n";
            print "report factors\n";
            my $url = "http://factorization.ath.cx/report.php?report=" . $composite . "%3D" . join('*', @results);
            $contents = get($url);
            my $nofactors     = ($contents =~ s/Does not divide//g);
            my $already_known = ($contents =~ s/Factor already known//g);
            my $added         = ($contents =~ s/Factor added//g);
            my $small         = ($contents =~ s/Small factor//g);
            print "\tNew factors added: " . ($added ? $added : 0) . "\n";
            print "\tFactors already known: " . ($already_known ? $already_known : 0) . "\n";
            print "\tSmall factors: " . ($small ? $small : 0) . "\n";
            print "\tErrors (does not divide): " . ($nofactors ? $nofactors : 0) . "\n";
            print "===========================================================================\n";
        } else {
            print "Error, no result found\n";
            sleep(60);
        }
    }
}[/code] |
I think it can't be changed to use primo. As far as I know, primo isn't available as a command line tool.
yoyo |
[QUOTE=yoyo;283205]Think it can't be changed to use primo. As I know primo isn't available as command line tool.
yoyo[/QUOTE] Maybe we could write to primo author explaining our objective and telling about factordb.com... |
[QUOTE=pinhodecarlos;283209]Maybe we could write to primo author explaining our objective and telling about factordb.com...[/QUOTE]I don't know about that. He has discontinued development on the Windows version, and pulled the Large Integer package he had available for Delphi/C++ Builder. He seems very down on MS for some reason (could be based on the headline on his [URL="http://www.ellipsa.eu/"]front page[/URL]....)
He is still working on the Linux version, however, and it will be able to blow the Windows version out of the water, since it has multi-core capability built in... |
Although Ellipsa (the 64-bit Linux version of primo) isn't command line friendly, I have done a lot of the 1000-1355 digit certificates with it on a couple of my machines over the last few weeks. The fact that you can load a batch of certificates at once allows for less manual work. Granted, it's not doing everything itself, but all I do is d/l a batch of *.in files from the db, unzip, start Ellipsa, build, load and come back later to zip and report. If I want to let it run through tomorrow, I d/l more *.in files. If I want to check it in a few hours, I do a single size digit group. Right now I have one machine doing 1354 digits and the other doing 1355 digits. I'll u/l them in a few hours. Then I'll probably set one machine to work overnight.
|
[QUOTE=EdH;283232]Although Ellipsa (the 64-bit linux version of primo), isn't command line friendly, I have done a lot of the 1000-1355 digit certificates with it on a couple of my machines over the last few weeks. The fact that you can load a batch of certificates at once allows for less manual work. Granted, it's not doing everything itself, but all I do is d/l a batch of *.in files from the db, unzip, start Ellipsa, build, load and come back later to zip and report. If I want to let it run through tomorrow, I d/l more *.in files. If I want to check it in a few hours, I do a single size digit group. Right now I have one machine doing 1354 digits and the other doing 1355 digits. I'll u/l them in a few hours. Then I'll probably set one machine to work overnight.[/QUOTE]
I just used a similar technique to kill the tail up to 1300 digits. Easy to do. It might be an idea to create a thread containing info on who is doing what so there is no overlapping. EdH is doing certificates at ~1350 and I am currently keeping the tail behind him cleared. There also isn't a central place where we can clearly see who is factoring composites at which digit levels currently. |
[QUOTE=EdH;283232]Although Ellipsa (the 64-bit linux version of primo), isn't command line friendly, I have done a lot of the 1000-1355 digit certificates with it on a couple of my machines over the last few weeks. The fact that you can load a batch of certificates at once allows for less manual work. Granted, it's not doing everything itself, but all I do is d/l a batch of *.in files from the db, unzip, start Ellipsa, build, load and come back later to zip and report. If I want to let it run through tomorrow, I d/l more *.in files. If I want to check it in a few hours, I do a single size digit group. Right now I have one machine doing 1354 digits and the other doing 1355 digits. I'll u/l them in a few hours. Then I'll probably set one machine to work overnight.[/QUOTE]The same thing works for the GUI version, except that you would run up against its ceiling sooner or later.
Too bad there's not a way to look up certs by user; I did some the last time we went through this, but I don't remember how high I went. |
[QUOTE=henryzz;283234]I just did a similar technique to kill the tail upto 1300 digits. Easy to do. Might be an idea to create a thread containing info on who is doing what so there is no overlapping. EdH is doing certificates at ~1350 and I am keeping the tail behind him gone currently. There also isn't a central place where we can clearly see who is factoring composites at what digit levels currently.[/QUOTE]
We could always set up a thread, if needed, for digit reservations. No one has been duplicating more than a couple here and there for all the ones I've done in the last few weeks, though. If more people get involved, we may need to start that thread and reservations. Or, a second more far-reaching thought, which would have to be left to Syd, would be a way to lock composites and primo batches for a short time, so as not to duplicate what is handed out. Since each number is tagged with an ID, it might be possible to issue a number or batch, lock it against reissue for a short time to allow for processing, and then either complete it if results are received, or free it up for someone else if results are not received within the lock time. I'm not sure how difficult something like that would be to implement, or if this is even of interest at this time. I have several machines that default to yoyo's script, so they're working composites until I assign them something else. Unfortunately, I'm also running into this too often: [code] unable to allocate 1003439448 bytes for range 0 to 0 Error, no result found [/code]In this case, no result would be returned, but the lock would have to run out before the composite could be reassigned. |
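The lock-against-reissue idea described above is essentially a reservation table with an expiry time per ID. A hypothetical Python sketch of that bookkeeping (not factordb's actual code; class and parameter names are made up for illustration):

```python
import time

class ReservationTable:
    """Hand out composite IDs and block reissue until a timeout,
    roughly as suggested above (hypothetical, not factordb code)."""

    def __init__(self, lock_seconds=3600):
        self.lock_seconds = lock_seconds
        self.locks = {}  # composite id -> lock expiry timestamp

    def reserve(self, comp_id, now=None):
        """Return True if the ID was free (or its lock had expired)."""
        now = time.time() if now is None else now
        expiry = self.locks.get(comp_id)
        if expiry is not None and now < expiry:
            return False             # still locked for another worker
        self.locks[comp_id] = now + self.lock_seconds
        return True

    def complete(self, comp_id):
        """Result received: drop the lock so the table stays small."""
        self.locks.pop(comp_id, None)
```

A worker that dies (like the failed-allocation case above) simply never reports, and the ID becomes reservable again once its lock runs out.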
[QUOTE=schickel;283252]Too bad there's not a way to look up certs by user[/QUOTE]
Not by user but a list like [URL="http://factordb.com/certoverview.php?digits=1152&perpage=100&skip=0"]this[/URL] is available. Start on the Status page and click on the number of certs, then go from there. |
[QUOTE=RichD;283255]Not by user but a list like [URL="http://factordb.com/certoverview.php?digits=1152&perpage=100&skip=0"]this[/URL] is available.
Start on the Status page and click on the number of certs, then go from there.[/QUOTE]Yeah, I knew about the list, but sitting and trolling through a bunch of pages to find your name is pretty boring. Unless you can find yourself at a [URL="http://factordb.com/certoverview.php?digits=2975&perpage=100&skip=0"]landmark[/URL]....so I guess that answers how high the latest Primo can go: at least 2976 digits. |
[QUOTE=schickel;283259]....so I guess that answers how high the latest Primo can go: at least 2976 digits.[/QUOTE]
According to the [URL="http://ellipsa.eu/public/primo/top20.html"]Primo Top-20[/URL] page, the highest so far is 12903 decimal digits. (sounds like a nearby zip code...) edit: I'm actually thinking of trying something large, but not quite that big. 7 months is a bit of a stretch for my attention span... |
[QUOTE=EdH;283265]According to the [URL="http://ellipsa.eu/public/primo/top20.html"]Primo Top-20[/URL] page, the highest so far is 12903 decimal digits. (sounds like a nearby zip code...)
edit: I'm actually thinking of trying something large, but not quite that big. 7 months is a bit of a stretch for my attention span...[/QUOTE]And that was with splitting the task among other computers and recombining the pieces [I]by hand[/I]. So I guess my comment should be more along the lines of "3000 digits without special effort and in a reasonable time". I can provide some timings if you like (after my current NFS job finishes). |
The new multicore linux version should do that sort of number in a few months without much effort.
There is not much chance of us reaching that sort of level any time soon though. 3000 digits might be a nice place to aim for. |
On one of my machines the 135x digit numbers are averaging less than 9 minutes, per. On the other one, they're averaging less than 14 minutes. Of course that says nothing of the processors/memory in use. My first machine is AMD Athlon 64 3GHz dual core, 2G mem and the second one is Pentium 64 2.5GHz dual core, 2G mem. If it is of interest, the certificates that the db provides have the time it took to run them. But, without machine info, I suppose it's just an interesting bit of trivia.
@henryzz: I had wondered why no new 1000-13xx numbers were appearing behind me.:smile: |
The latest garbage flooding seems to be 1000-digit numbers. There are a few hundred new PRPs in the high 900s of digits. Clicking "more information", these all appear to have come from 1000-digit composites that differ only in a few digits near the middle of the number.
|
Today's flood is 10^n-4198862272127.
|
In the last 12 hours the number of unfactored composites through 120 digits has jumped from 160K to 250K. Many of these are below 70 digits, where the factordb is automatically factoring them, but they have arrived much faster than factoring can keep up. Sampling these, it looks like they are all in the range of Brent composites (a^n +/- 1, a and n < 10000). The Odd Perfect Number search has been interested in the minus cases where a and n are both prime, but these seem to be the entire Brent range.
One possibility is that somebody has decided to load Brent's entire table, and these are the incompletely factored numbers. Another possibility is that somebody is busy running factor tables to generate all of these. Does anybody know? |
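For context, "Brent composites" here are the numbers a^n ± 1 with a and n below 10000 from Richard Brent's factor tables. A small sketch that builds a few minus-side examples and strips their small factors by trial division (the bound and the sample bases are illustrative); as the post notes, most of these pick up small factors immediately, not least because a − 1 always divides a^n − 1 algebraically:

```python
def small_factors(m, bound=1000):
    """Strip prime factors below `bound` from m by trial division."""
    found = []
    d = 2
    while d < bound and d * d <= m:
        while m % d == 0:
            found.append(d)
            m //= d
        d += 1
    return found, m  # (small factors found, remaining cofactor)

# A few minus-side Brent-form numbers a^n - 1 with prime exponents.
for a in (2, 3, 5):
    for n in (11, 13):
        facs, rest = small_factors(a**n - 1)
        # a - 1 divides a^n - 1 algebraically, for every n
        assert (a**n - 1) % (a - 1) == 0
```

This is only a toy: the database's workers do real work (ECM, SIQS) on the cofactors that survive trial division.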
[QUOTE=wblipp;285061]In the last 12 hours the number of unfactored composites through 120 digits has jumped from 160K to 250K. Many of these are below 70 digits, where the factordb is automatically factoring them, but they have arrived much faster than factoring can keep up. Sampling these, it looks like they are all in the range of Brent composites (a^n +/- 1, a and n < 10000). The Odd Perfect Number search has been interested in the minus cases where a and n are both prime, but these seem to be the entire Brent range.
One possibility is that somebody has decided to load Brent's entire table, and these are the incompletely factored numbers. Another possibility is that somebody is busy running factor tables to generate all of these. Does anybody know?[/QUOTE] Yesterday I passed on to Syd my t20 effort for all Brent composites from base 1001 to base 9999. They were also passed on to Prof. Brent, but have yet to appear in his factors.gz file. Mystery solved? |
It's over 278K now. I started a worker for 67 digits, but composites are added much faster than I can factor them.
They all have small factors, so I guess somebody is generating the tables instead of uploading the factorizations. Refactoring the small composites is probably fine, but for the bigger composites it would be nice if someone could upload the known factorizations instead of refactoring them all. |
[QUOTE=smh;285064]It's over 278K now. I started worker for 67 digits, but composites are added much faster than i can facor them.
They all have small factors so I guess somebody is generating the tables instead of uploading the factorizations. Refactoring the small composites is probably, but for the bigger composites it would be nice if someone can upload the know factorizations instead of refactoring them all.[/QUOTE] There might be two reasons for the small factors. One is that I just passed on my Brent factors (see my post above) and they are all greater than 10^9 as per his requirement. And also, I did not pass on any algebraic factors. Maybe there are some small algebraic factors present? |
[QUOTE=jcrombie;285063]Yesterday I passed on to Syd my t20 effort for all Brent composites from base 1001 to base 9999. They were also passed on to Prof. Brent, but have yet to appear in his factors.gz file. Mystery solved?[/QUOTE]
Thanks. It's a relief to know this surge represents the integration of extensive factoring external to the factordb, unlike some other events that were mindless drains on the factordb's resources for the amusement of seeing the resources get clogged. |
I encourage people to run "the" yafu perl script
[code]#!/bin/perl
use warnings;
use strict;
use LWP::Simple;

while (1) {
    print "get composites\n";
    my $rand = int(rand(1000));
    my $contents = get("http://factorization.ath.cx/listtype.php?t=3&mindig=60&maxdig=80&perpage=1&start=$rand&download=1");
    if (!defined $contents or $contents =~ /[a-z]/) {
        print "Error, no composites fetched\n";
        sleep(60);
        next;    # skip splitting an error page or an undefined response
    }
    my @composites = split(/\s/, $contents);
    foreach my $composite (@composites) {
        print "Factoring " . length($composite) . " digits: $composite\n";
        my @results;
        open(YAFU, "yafu factor($composite) |") or die "Couldn't start yafu!";
        while (<YAFU>) {
            print "$_";
            chomp;
            if (/^[CP].*? = (\d+)/) {
                push(@results, $1);
                print "*****\n";
            }
        }
        close(YAFU);
        if (scalar(@results) > 0) {
            print "===========================================================================\n";
            print "report factors\n";
            my $url = "http://factorization.ath.cx/report.php?report=" . $composite . "%3D" . join('*', @results);
            $contents = get($url);
            my $nofactors     = ($contents =~ s/Does not divide//g);
            my $already_known = ($contents =~ s/Factor already known//g);
            my $added         = ($contents =~ s/Factor added//g);
            my $small         = ($contents =~ s/Small factor//g);
            print "\tNew factors added: " . ($added ? $added : 0) . "\n";
            print "\tFactors already known: " . ($already_known ? $already_known : 0) . "\n";
            print "\tSmall factors: " . ($small ? $small : 0) . "\n";
            print "\tErrors (does not divide): " . ($nofactors ? $nofactors : 0) . "\n";
            print "===========================================================================\n";
        } else {
            print "Error, no result found\n";
            sleep(60);
        }
    }
}[/code] |
[QUOTE=wblipp;285068]Thanks. It's a relief to know this surge represents the integration of extensive factoring external to the factordb, unlike some other events that were mindless drains on the factordb's resources for the amusement of seeing the resources get clogged.[/QUOTE]
Well, the idea was to just pass on my Brent factors directly to Syd instead of him getting them from Prof. Brent. Syd (aka Markus) is the one actually putting them in. Yes, I would also hate to see anything bad happen to this great resource! Jonathan |
[QUOTE=firejuggler;285073]I encourage prople to run "the" yafu perl script
... [/QUOTE] I will swap several machines back over to this... |
I'm trying to clean all below 67 digits.
|
[QUOTE=pinhodecarlos;285096]I'm trying to clean all bellow 67 digits.[/QUOTE]
I just noticed that all my machines are set to a minimum of 70 digits. Should I move that down to 68 or just let them run where they are? edit: There are over 1000 at 68, so I'll lower my minimum. |
Everything less than 70 digits is factored by Syd's worker, so you should factor composites with 70 or more digits.
yoyo |
[QUOTE=yoyo;285100]Everything less than 70 digits is factored by Syd's worker. So you should factor composits with 70 or more digits.
yoyo[/QUOTE] So, does it not help the db to factor some of these, or will there just be too many collisions? I kind of figured that at this point the db has a lot of much smaller composites, so it's probably working on those. Should we not worry about this type of buildup and just let the db handle it on its own? Thanks... |
I think if you factor composites below 70 digits it will lead to collisions, meaning the DB will already have the composite factored by the time you upload it.
The workers of the DB factor composites below 70 digits, so there is already computing power on that range and we should focus on a different one. yoyo |
[QUOTE=yoyo;285106]I think if you factor composites below 70 digits it will lead to collisions. Means the DB has a composite already factored if you upload it.
The workers of the DB factor composites below 70 digits. So there is computing power on this range and we should focus on a different range. yoyo[/QUOTE] I'm showing about a 50% collision rate at 68 digits, so either the db or someone else is working them. I'll move back to 70, then. Thanks. |
I've got one core running on the low-to-mid C70s but it won't be there all day.
(73-75 range) |
[QUOTE=RichD;285120]I've got one core running on the low-to-mid C70s but it won't be there all day.
(73-75 range)[/QUOTE] I've got 7 machines running composites (all at 71 digits) ATM, but I don't know for how long. I have these machines defaulted to yoyo's script until I decide what else to do with them, based on my ever-changing interest and short attention span... |
OK, switching to C75 to leave plenty of buffer room for [B]EdH[/B] workers.
(75-76 range) |
I've thrown one core on to help out (starting at C76).
|
[QUOTE=RichD;285125]OK, switching to C75 to leave plenty of buffer room for [B]EdH[/B] workers.
(75-76 range)[/QUOTE] Sorry! I thought I had enough room. There are about 1300 composites from 70 through 72. I didn't mean to crowd you up... |
I have 4 cores doing ECM for C70-C80
|
[QUOTE=EdH;285135]Sorry! I thought I had enough room. There are about 1300 composites from 70 through 72. I didn't mean to crowd you up...[/QUOTE]
Not a problem at all. I was thinking you had at least a dozen workers (cores) to my one. I thought I was going to be away for several hours and didn't want to return to find my lonely worker was stepping on your toes. In hindsight, I should have gone to C79 (and work down). (I think there is still a BOINC project working in the C80's and above. This should keep them busy for a few days.) :-) |
I've been seeing a number of "factor already known" messages for my C76s, so I've upped it to C78 now.
Jeff. |
I have done some PRP certificates and uploaded them.
|
[QUOTE=Jeff Gilchrist;285148]I've been seeing a number of factor already known messages ...[/QUOTE]
I believe that message is a little misleading. What it really means is that the prime factor is already known to the database, most likely as a divisor of another composite. I think all primes below p19-p20 are known. As bigger factors (primes) are found, it is less likely they are already known (to the database). Too many primes, too little time. :-) |
[QUOTE=RichD;285140]Not a problem at all. I was thinking you had at least a dozen workers (cores) to my one. I thought I was going to be away for several hours and didn't want to return to find my lonely worker was stepping on your toes.
In hindsight, I should have gone to C79 (and work down). (I think there is still a BOINC project working in the C80's and above. This should keep them busy for a few days.) :-)[/QUOTE] I do have 8 cores across 7 machines and I didn't realize how fast they would fall. There are less than 400 composites left from 70 through 72 digits, so my machines are grabbing some 73s, now. There must be quite a few workers helping out... I will be cutting back shortly. As I mentioned before, these machines default to yoyo's script when they aren't otherwise tasked. I am about to retask them... |
I was seeing 74 digit composites showing up, so I have swapped the machines to something else...
|
My lonely core is also off-line, but I think we made a significant contribution.
Others more than me. :-) |
[QUOTE=RichD;285187]My lonely core is also off-line, but I think we made a significant contribution.
Others more than me. :-)[/QUOTE] All contributions count! All my 24/7 machines, except for two dual cores, are steam-driven P4s. And one dual core is so ancient that, although it's 64-bit, it won't run a 64-bit OS. The other is a friend's that I'm stress testing. |
Looks like things are starting to clear up. If you ask for a composite above 70 digits, the lowest it is handing out now is C81.
Although I think I found a small bug, if you ask for max digits 80, it still hands out C81 instead of returning nothing. Look at: [url]http://factorization.ath.cx/listtype.php?t=3&mindig=70&maxdig=80&perpage=20&start=0[/url] |
[QUOTE=Jeff Gilchrist;285337]Although I think I found a small bug, if you ask for max digits 80, it still hands out C81 instead of returning nothing.
Look at: [URL]http://factorization.ath.cx/listtype.php?t=3&mindig=70&maxdig=80&perpage=20&start=0[/URL][/QUOTE] Why do you expect the database to recognize a "maxdig" parameter? |
[QUOTE=Random Poster;285377]Why do you expect the database to recognize a "maxdig" parameter?[/QUOTE]Probably because of [URL="http://www.mersenneforum.org/showthread.php?p=230641&highlight=maxdig#post230641"]these[/URL] posts...
@Jeff, the maxdig parameter is ignored with that query. It is only used with the "getrandom" style queries which return ids for composites in a specific range. |
[QUOTE=schickel;285382]@Jeff, the maxdig parameter is ignored with that query. It is only used with the "getrandom" style queries which return ids for composites in a specific range.[/QUOTE]
Yes, someone posted a script that had the parameter in it, so I incorrectly assumed that it would work. If it is not supposed to work in that context, then there is no bug after all. Jeff. |
Yikes!!
Someone has dumped a bunch of numbers near a googol in the composite queue. I'll add a worker (and maybe a second a little later). |
They're going pretty fast. Must be more than just my worker running.
I'll drop out once it gets into the C80s. |
lost links?
I've recently noticed many n^n+(n+1)^(n+1) numbers have been removed (unknown) in the database. When I query them, they are re-added with small factors flagged with a red asterisk. I have checked up to n=400. I also added the known factors to n=60. When inquiring about the origin of the factors, it states "Before November 4, 2018, 12:20 am". I have a table where I can add the known factors up to n=80K, but that would be tedious.
Additionally, I recently added all the OPN numbers from the checkfacts.txt file (2.5GB) from [url=http://www.lirmm.fr/~ochem/opn/]this page[/url] over the past two years. I now question how many really "stuck". |
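For reference, the form in question is easy to regenerate locally; a minimal sketch (the helper names here are mine, not part of any factordb tooling, and the trial division is purely illustrative):

```python
def w(n):
    """Return n^n + (n+1)^(n+1), the form discussed above."""
    return n**n + (n + 1)**(n + 1)

def small_factor(m, bound=10000):
    """Trial-divide to find a small factor below bound, if any."""
    d = 2
    while d < bound:
        if m % d == 0:
            return d
        d += 1
    return None

# First few values: w(1) = 5, w(2) = 31, w(3) = 283,
# w(4) = 3381 = 3 * 7^2 * 23 (so some terms do have small factors)
```

Spot-checking a few terms this way makes it easy to verify whether the small factors the db re-attaches are at least arithmetically correct.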
[QUOTE=RichD;531659]I've recently noticed many n^n+(n+1)^(n+1) numbers have been removed (unknown) in the database. [/QUOTE]
Looks like factordb has been wiped clean?! No numbers of the form 2^n-1 or 2^n+1. :xyzzy: |
[QUOTE=axn;531666]Looks like factordb has been wiped clean?! No numbers of the form 2^n-1 or 2^n+1. :xyzzy:[/QUOTE]
Also n^32+1 .... |
Someone is adding thousands of numbers of the forms k*I20000+1 and k*I20000-1. There are almost 8000 of them in the Unknown list.
|