Looking for work that needs done
I'm going to work on extending sequences below 1M to 100 digits, and to a cofactor of at least 95 digits. Rumor has it that Wieb Bosma already has results for under 500K, so I'll start above that. I'm just posting here to avoid any duplication of effort; if anyone else is doing the same thing let me know and we can coordinate. Also, of course, let me know if I'm stepping on any reservations, but I'm assuming that anything <100 digits or with a small cofactor is not reserved by anyone.
For starters, I'll extend sequences in the 500k-600k range. [size="1"][B][Ed. note: Since we've wandered far afield, I broke this out into a separate thread....][/B][/size]
[QUOTE=bchaffin;239508]For starters, I'll extend sequences in the 500k-600k range.[/QUOTE]
Those should all be to 100 digits from [URL="http://www.mersenneforum.org/showthread.php?t=12212"]Subproject 2[/URL], (same for 700k-900k, projects 3 and 5) but you should still be able to find some small cofactors to extend. I don't know of any other conflicts with your plan, but I don't keep up on all that too much.
[QUOTE=Mini-Geek;239510]Those should all be to 100 digits from [URL="http://www.mersenneforum.org/showthread.php?t=12212"]Subproject 2[/URL], (same for 700k-900k, projects 3 and 5) but you should still be able to find some small cofactors to extend. I don't know of any other conflicts with your plan, but I don't keep up on all that too much.[/QUOTE]
Huh... I see over 200 sequences from 500k-600k which are at less than 100 digits. For example (seq/term/digits):
[CODE]577828  638   63
525260  636   64
563844  637   64
559008  1793  71
532812  533   91[/CODE]
So did these just get lost somehow? Obviously if the original work is sitting around somewhere then I don't want to waste time recreating it.
[QUOTE=bchaffin;239515]577828 638 63[/QUOTE]
This one merges with 486732 immediately. These sequences are small enough that the workers at the DB are running them right now (or at least I know that I've been watching this one and it's been pushing up a bit as I watch and refresh it). I think 486732 is in a range done by Wieb Bosma with result files not available yet. I'd expect most or all of the others you posted have similar stories, whether that's easy to detect or not (the old DB listed merges anywhere in the sequence, but now we just have to see if the sequence goes below its start).

Edit: [CODE]577828  638   63  merges with 486732 at index 1
525260  636   64  merges with 486732 at index 2
563844  637   64  merges with 471540 at index 3
559008  1793  71  merges with 490700 at index 319
532812  533   91  (not sure)
486732 index 2 = 471540 index 1 = 899340 = 2^2 · 3 · 5 · 13 · 1153[/CODE]
Edit 2: Just for curiosity's sake, I'm running 471540 a little bit and uploading to the DB. I'll stop at about 70-80 digits. The downdriver run went pretty low but eventually turned into 2^2*7, which it's still got at 67 digits.

Edit 3: stopped at 71 digits.
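The merge check described above (with the old DB's whole-sequence merge listing gone, you watch whether a sequence drops below its starting value) can be sketched in a few lines of Python. This is only an illustrative sketch, not the DB's actual code; it uses naive trial division for the divisor sum, which is fine for the small terms discussed here but hopeless at 90+ digits:

```python
def aliquot_step(n):
    """Return s(n) = sigma(n) - n, the sum of proper divisors of n (assumes n > 1)."""
    total = 1  # 1 divides every n > 1
    i = 2
    while i * i <= n:
        if n % i == 0:
            total += i
            if i != n // i:
                total += n // i  # count the paired divisor once
        i += 1
    return total

def drops_below_start(seed, max_steps=100):
    """Walk the aliquot sequence from seed; return (index, value) of the
    first term below the seed (a candidate merge with a smaller sequence),
    or None if the sequence terminates or stays above the seed."""
    n = seed
    for idx in range(1, max_steps + 1):
        n = aliquot_step(n)
        if n <= 1:       # sequence terminated at a prime -> 1
            return None
        if n < seed:
            return idx, n
    return None

# Matches the aliqueit output quoted later in the thread:
# 556228 -> 422904 at index 1, i.e. a candidate merge with 422904.
print(drops_below_start(556228))  # (1, 422904)
```

Finding that a term falls below the seed only flags a *candidate* merge; one still has to check which smaller sequence actually contains that value.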
[QUOTE=Mini-Geek;239517]
I'd expect most or all of the others you posted have similar stories, whether that's easy to detect or not (the old DB listed merges anywhere in the sequence, but now we just have to see if the sequence goes below its start). [/QUOTE] Yes, good point, I forgot to screen for merges when posting those examples. But I don't think that goes for everything. Here are three more (seq/term/digits/DB id):
[CODE]556228  755   93  1100000000252204350
563880  1193  93  1100000000252202419
565804  893   93  1100000000252207305[/CODE]
I scanned the DB and recorded the digit size and DB index of the final term for all unfinished sequences. Then I discarded sequences which end at the same DB index, keeping only the first (smallest) seed for each. So these sequences (and 231 others in 500k-600k) do not merge with any smaller sequence that is already in the DB. Everything <500k is at 70 digits or more, so it seems like further merges would be rare.
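The screening step described above (group open sequences by the DB id of their final term, keep only the smallest seed per group) might look like the sketch below. The `(seed, digits, db_id)` tuples would come from scanning the DB; the fourth record here is hypothetical, added just to show a duplicate being discarded:

```python
def screen_merges(records):
    """records: iterable of (seed, digits, final_term_db_id) tuples.
    For each distinct final-term DB id, keep only the smallest seed;
    any other sequence ending at the same term merges with it."""
    best = {}
    for seed, digits, db_id in records:
        if db_id not in best or seed < best[db_id][0]:
            best[db_id] = (seed, digits)
    return sorted((seed, digits, db_id)
                  for db_id, (seed, digits) in best.items())

records = [
    (556228, 93, 1100000000252204350),
    (563880, 93, 1100000000252202419),
    (565804, 93, 1100000000252207305),
    (600001, 93, 1100000000252204350),  # hypothetical: same final term as 556228
]
survivors = screen_merges(records)
# 600001 is discarded because it ends at the same DB id as the smaller 556228
```

As the thread goes on to show, this only works against a static snapshot: if the DB workers extend a sequence between the time its final term is recorded and the time a later sequence is scanned, the shared term is missed.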
[QUOTE=bchaffin;239529]Yes, good point, I forgot to screen for merges when posting those examples. But I don't think that goes for everything. Here are three more:
556228  755   93  1100000000252204350
563880  1193  93  1100000000252202419
565804  893   93  1100000000252207305
I scanned the DB and recorded the digit size and DB index of the final term for all unfinished sequences. Then I discarded sequences which end at the same DB index, keeping only the first (smallest) seed for each. So these sequences (and 231 others in 500k-600k) do not merge with any smaller sequence that is already in the DB. Everything <500k is at 70 digits or more, so it seems like further merges would be rare.[/QUOTE]Where did you get the original list of sequences that you scanned? All the open-ended sequences in the DB that we talk about are based on the work done by Wolfgang Creyaufmueller. He originally worked all sequences under 1M up to increasing heights. If you check out his [URL="http://www.aliquot.de/aliquote.htm"]page[/URL], he has all his work posted for download. After the DB came online, everything was uploaded by members here. kar_bon also has a status [URL="http://rieselprime.de/"]page[/URL] for everything that is more current (but not quite up-to-the-minute) than Wolfgang's.

You might also want to check out your script, since it may not be quite doing what you expect. I fired up aliqueit.exe to check on these and found this with "detect merges" turned on (I had it turned off, so it ran off several hundred lines before I looked in the file and noticed it):[code]value = 556228 (6 digits)
 0 . c6 = 556228 = 2^2 * 241 * 577
 1 . c6 = 422904 = 2^3 * 3 * 67 * 263
Sequence merges with earlier sequence 422904.[/code]
Also, check [URL="http://factordb.com/sequences.php?se=1&eff=2&aq=346848&action=last20&fr=0&to=100"]here[/URL] and [URL="http://factordb.com/sequences.php?se=1&eff=2&aq=563880&action=last20&fr=0&to=100"]here[/URL]. And lastly:[code]value = 565804 (6 digits)
 0 . c6 = 565804 = 2^2 * 37 * 3823
 1 . c6 = 451380 = 2^2 * 3 * 5 * 7523
Sequence merges with earlier sequence 451380.[/code]
If you download Wolfgang's C9C30 file, that's the best way to check for merges until Syd adds that back in to the DB code. It is a text file containing the 9- and 30-digit numbers reached by all open sequences.
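The C9C30 check suggested above reduces to a set lookup: load the known checkpoint values once, then test each new term against them. The file format assumed here (one decimal value per line) is a guess for illustration; adjust the parser to the actual file:

```python
def load_checkpoints(path):
    """Load known checkpoint values (e.g. the 9- and 30-digit terms
    reached by open sequences), one decimal value per line (assumed format)."""
    with open(path) as f:
        return {int(line) for line in f if line.strip().isdigit()}

def first_merge(terms, checkpoints):
    """Return (index, value) of the first term that hits a known
    checkpoint value, or None if no term matches."""
    for idx, value in enumerate(terms):
        if value in checkpoints:
            return idx, value
    return None

# Toy example using values quoted in this thread:
checkpoints = {899340, 451380}
print(first_merge([565804, 451380], checkpoints))  # (1, 451380)
```

Unlike the endpoint comparison, this catches a merge anywhere along the sequence, which is what the old DB used to report.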
[QUOTE=schickel;239563]You might also want to check out you script, since it may not be quite doing what you expect. [/QUOTE]
Argh! Stupid me... I'm treating the DB like it's a static snapshot. I query something in the 400k range, and that generates a new term with a smallish composite cofactor, and then the workers (possibly me!) factor it and by the time I get to 500k the sequence has been extended. And since I was looking only at the endpoint, I missed the merge. Thanks for the references and for pointing out my bug. I'll do some more research before deciding where to apply my cores.
[QUOTE=bchaffin;239581]Thanks for the references and for pointing out my bug. I'll do some more research before deciding where to apply my cores.[/QUOTE]How are you set up? Are you working sequences locally and uploading them to the DB, or are you using the DB workers to extend things?
If you have Windows machines available, you can use [URL="http://www.mersenneforum.org/showthread.php?t=13365&highlight=aliwin"]AliWin[/URL] which automates [URL="http://www.mersenneforum.org/showthread.php?t=11618&highlight=aliwin"]Aliqueit[/URL]. With AliWin you can feed it a list of sequences which will be downloaded and worked to a specific length. Aliqueit can also submit factors automatically to the DB to simplify submitting work.
[QUOTE=schickel;239613]How are you set up? Are you working sequences locally and uploading them to the DB, or are you using the DB workers to extend things?
If you have Windows machines available, you can use [URL="http://www.mersenneforum.org/showthread.php?t=13365&highlight=aliwin"]AliWin[/URL] which automates [URL="http://www.mersenneforum.org/showthread.php?t=11618&highlight=aliwin"]Aliqueit[/URL]. With AliWin you can feed it a list of sequences which will be downloaded and worked to a specific length. Aliqueit can also submit factors automatically to the DB to simplify submitting work.[/QUOTE] Thanks for the separate thread. I'm on Linux, but I've already automated lots of aliquot-related things -- I have a wide variety of scanning/tracking/factoring scripts and customized aliqueit features. (I've gotten quite addicted to this problem!) I've been using those to, for example, automatically extend sequences >1M to 100 digits, or a little more if there's no driver. (I've also been devoting quite a lot of horsepower, when it's not working on seq 4788, to factoring smallish composites from the DB, which is what I meant when I said it may well have been me that was extending sequences in the background and foiling my merge detection.) But if the experts around here have suggestions of useful aliquot work to pursue, that would be helpful. Although recently I've had time to spend on the forums, that isn't always the case, so if there are things which can be done without a lot of coordination -- like DB workers or extending short sequences over some range -- I can start them off and leave them going when work gets busy again. |
[QUOTE=bchaffin;239622]But if the experts around here have suggestions of useful aliquot work to pursue, that would be helpful. Although recently I've had time to spend on the forums, that isn't always the case, so if there are things which can be done without a lot of coordination -- like DB workers or extending short sequences over some range -- I can start them off and leave them going when work gets busy again.[/QUOTE]Well, the work that would fill the biggest hole would be to fill in the work that Wieb Bosma either has done or may be doing. He was working the entire range between 250k and 400k. Both kar_bon and I have emailed him about our project and the DB. I got no response and kar_bon only got a reply back that he would be willing to share his data "sometime in the future". So far, nothing.
Having pulled the three ranges, 200-300k is all done, 300-400k is about 80% done, and 400-500k is about 50% done (with done being defined as at 100 or more digits in the DB). RobertS is doing some of the work, but I'll have to search a little more to find out what range and which direction he is working. I can also post files containing the results of my survey if you like, just let me know.
[QUOTE=schickel;239625]Both kar_bon and I have emailed him about our project and the DB. I got no response and kar_bon only got a reply back that he would be willing to share his data "sometime in the future". So far, nothing.
[/QUOTE] Correct, no answer until today.

I also have not fully updated my Summary pages with all the data from the threads here like I did a long time ago (contributors, reservation/completion dates, small primes + exponents). That would take a lot of time, since it can only be done manually. Creating those Summary pages automatically could be done, but without names and dates.

What I can do: download every seq < 1M from the factorDB and scan it (determine max index, length of last term, perhaps small primes + exponents), but all this is still time-consuming. A full download of, for example, the range 0<n<100000 takes about 2-3 hours, so more than a day for everything < 1M including a sequence check. I haven't yet tested getting only the last line and extracting the info from that.

If those data (names, dates) are not so important, I could create a script to get only raw data like the above. Nonetheless I will list names/dates for terminations/merges if available. I've just updated the latest merge in my pages, and data for Seq 4788 too. No new stats or record pages yet.
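The raw scan described above (max index plus the size of the last term) reduces to parsing the final line of each downloaded sequence file. The line format assumed below -- `index . value = factorization`, in the style of the aliqueit output quoted earlier in this thread -- is an assumption; adapt the parser to whatever the DB download actually returns:

```python
def scan_sequence_file(lines):
    """Given the text lines of one downloaded aliquot sequence, return
    (max_index, digits_of_last_term), or None if no term lines are found.
    Assumes lines shaped like '1 .  422904 = 2^3 * 3 * 67 * 263'."""
    last = None
    for line in lines:
        line = line.strip()
        if line and line[0].isdigit() and '.' in line:
            last = line  # remember the latest term line seen
    if last is None:
        return None
    index_part, rest = last.split('.', 1)
    value = rest.split('=', 1)[0].strip()  # the term itself, as a string
    return int(index_part.strip()), len(value)

demo = [
    "0 .  556228 = 2^2 * 241 * 577",
    "1 .  422904 = 2^3 * 3 * 67 * 263",
]
print(scan_sequence_file(demo))  # (1, 6)
```

Reading only the last line per sequence, as proposed, would avoid downloading whole files and should cut the scan time considerably.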