What's next?
[QUOTE=unconnected;261387]Added ~2000 indexes to 191400.
191400: i3163, size 115, [COLOR=Red]2^2*7[/COLOR]

So subproject #9 is completed. What's next?[/QUOTE]

RobertS has taken 200k-300k to 110 digits and is now working his way through the 300k range. The database workers are also bringing sequences up, so another 110-digit subproject may not be necessary. Here are the options:

1. A new 110-digit subproject for a range above 350k. This work will be done anyway (by RobertS etc.); doing it as a project would just get it done slightly earlier.

2. A 120-digit subproject starting from 10k. I'm not sure how many of these sequences are already at 120 digits, but it isn't all of them. This would have the disadvantage of leaving old machines without any "easy" work.

3. Something involving sequences above 1M. These have been randomly hit by database workers and are thus already mostly above 100 digits (I think). I don't like the idea of pushing the upper bound of the project any higher, except possibly for cycle searching.

4. Thinking of that, maybe it's time for a change: the record for the longest aliquot cycle has stood at 28 terms for over 90 years, and I somehow feel that some modifications to aliqueit, plus a project to take lots of sequences to a low height, could break that record.

5. A combination of the above.

What does everyone think?
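For reference, the mechanics behind option 4 are simple to sketch: an aliquot sequence iterates s(n) = sigma(n) - n, and a cycle search just runs each start until a value repeats or the sequence terminates or escapes a height limit. A minimal Python sketch (my own helper names, not aliqueit's actual code; the 28-term record cycle mentioned above is the one containing 14316):

```python
def aliquot_sum(n):
    """s(n) = sigma(n) - n, computed by trial division (fine for small n)."""
    if n <= 1:
        return 0
    total, d = 1, 2          # 1 divides everything; n itself is excluded
    while d * d <= n:
        if n % d == 0:
            total += d
            if d != n // d:  # avoid double-counting a square divisor
                total += n // d
        d += 1
    return total

def find_cycle(start, max_steps=100):
    """Iterate the aliquot map; return the cycle length if a value repeats,
    or None if the sequence terminates or exceeds max_steps."""
    seen = {}
    n = start
    for i in range(max_steps):
        if n <= 1:
            return None          # hit a prime then 1: sequence terminated
        if n in seen:
            return i - seen[n]   # length of the cycle we fell into
        seen[n] = i
        n = aliquot_sum(n)
    return None

print(find_cycle(220))    # 2  (the amicable pair 220/284)
print(find_cycle(14316))  # 28 (the record sociable cycle)
```

In practice a real search would use proper factoring rather than trial division once the terms grow, but the control flow is the same.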
[QUOTE=10metreh;261413] The database workers are also bringing sequences up. [/QUOTE]
How do you join them?
[QUOTE=em99010pepe;261414]How do you join them?[/QUOTE]
There is a system of internal database workers which factor low composites from the database, prove primes etc. Ben (bchaffin) also runs some scripts on the database which find the lowest composites (I'm not sure whether that's just aliquot composites or all of them - Ben?) and factor them. As a result of both of these, aliquot sequences get pushed up and downdrivers occasionally appear.

Ben [url=http://www.mersenneforum.org/showthread.php?p=259101#post259101]said[/url] a few weeks ago that he would release his scripts soon, but he doesn't seem to have got around to it yet.
[QUOTE=10metreh;261415]There is a system of internal database workers which factor low composites from the database, prove primes etc. Ben (bchaffin) also runs some scripts on the database which find the lowest composites (I'm not sure whether that's just aliquot composites or all of them - Ben?) and factor them. As a result of both of these, aliquot sequences get pushed up and downdrivers occasionally appear. Ben [URL="http://www.mersenneforum.org/showthread.php?p=259101#post259101"]said[/URL] a few weeks ago that he would release his scripts soon, but he doesn't seem to have got around to it yet.[/QUOTE]
The DB itself will factor anything <70 digits, and also runs a little ECM on larger numbers, so you occasionally see "spontaneous decay" of even largish composites. There have been several scripts posted in the forums which act as generic DB workers, picking up composites >70 digits and factoring them -- obviously many of these are not from aliquot sequences. (I can probably find you a pointer to the scripts if you want.)

I ran my own version of generic DB workers for a long time, but recently wrote some tools which specifically factor composites from aliquot sequences. I just posted those scripts [URL="http://www.mersenneforum.org/showthread.php?p=261580#post261580"]here[/URL]. They work in order of smallest composite, not smallest number of digits, but of course in general all the sequences get pushed upward.

As for what's next: I would vote against another 110-digit subproject, since I think that among RobertS, Andi_HB, myself, and all the other folks contributing to the DB and quietly working unreserved sequences, this will happen fairly soon on its own. So I'd be in favor of either a 120-digit subproject or a search for a longer cycle. Maybe my vote shouldn't count for much, since I'll probably continue to spend most of my effort on my workers, but something shiny and new like a cycle search might lure a couple of machines away. :smile:
[QUOTE=10metreh;261413]...
4. Thinking of that, maybe it's time for a change - the record for longest aliquot cycle has stood at 28 terms for over 90 years and I somehow feel that some modifications to aliqueit and a project to take lots of sequences to a low height could break that record. ...[/QUOTE]

I think AliWin would already work for this by simply setting [I][B]detect_merge = false[/B][/I] in the aliqueit.ini file. Right now you would have to check the aliqueit.log manually to see why it stopped, but I'll add an automatic notification for cycle detection in the near future.

What would be a good range and "low" height to work with? I'll do some testing...
A combination of 2 and 4 seems to me like the best choice. I would like something that doesn't require GNFS, because that is suboptimal on Windows.
[QUOTE=EdH;261597]I think AliWin would already work for this by simply setting [I][B]detect_merge = false[/B][/I] in the aliqueit.ini file. Right now, you would have to check the aliqueit.log manually to see why it stopped, but I'll add an automatic notification for cycle detection in the near future...[/QUOTE]
I added the cycle notification, and AliWin seems to work fine with testing around 10[SUP]20[/SUP] and 10[SUP]15[/SUP]. However, either the db runs these numbers when I inquire (prior to handing me the .elf), or they have all already been run to 30-40 digits. Maybe someone has already looked for cycles?
[QUOTE=EdH;261620]I added the cycle notification and AliWin seems to work fine with testing around 10[SUP]20[/SUP] and 10[SUP]15[/SUP]. However, either the db runs these numbers when I inquire (prior to handing me the .elf) or they are all already run to 30-40 digits. Maybe someone has already looked for cycles?[/QUOTE]
I think it must be the former -- the DB can't contain all sequences up to 10[SUP]15[/SUP] run to 30-40 digits; that would be petabytes of storage.

That's something to think about, actually. If we plan to scan tens of millions of sequences to a small size, we should consider the impact on the DB of uploading all the results. Maybe it would be better not to, or only to upload ones which are "interesting" by some definition.
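The storage claim is easy to sanity-check with some purely illustrative assumptions: say each stored sequence averages about 30 lines (terms to 30-40 digits) at about 50 bytes per line. A back-of-envelope sketch:

```python
# Back-of-envelope: storing every aliquot sequence with a start below 10**15.
# The per-sequence numbers are hypothetical round figures, not DB measurements.
starts = 10**15        # one sequence per starting value
bytes_per_line = 50    # index, term, factorization -- rough guess
lines_per_seq = 30     # sequences run to roughly 30-40 digits

total_bytes = starts * bytes_per_line * lines_per_seq
print(total_bytes / 10**15, "PB")  # -> 1500.0 PB
```

Even generous compression or deduplication of merged sequences doesn't bring data on that scale into range, which supports only uploading results that are "interesting" by some definition.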
Agree with henryzz on 2 & 4.
I think 2) would be useful, though trying to get all entries to 120 digits generically would seem difficult, as NFS starts taking a long while at that size.

For 4), one route would be to cull the numbers first. Even a smallish limit like 100 million will take out all but 2%-3% of the numbers. Note that this is better than just looking for merges with [I]detect_merge[/I], because you don't include values of another smaller sequence. However, there is the issue of how to present the numbers; perhaps files covering a 10k range each? Whether such a scheme is useful depends on the maximum height, I think: if we go to 60 or 80 digits, culling the numbers first makes sense, while for a limit like 30 digits it might be faster to just do them all.
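The culling idea above can be sketched as a cheap pre-filter: iterate each candidate and drop it as soon as any term exceeds the bound. A rough Python version (the helper names and parameter defaults are mine, purely for illustration):

```python
def aliquot_sum(n):
    """s(n) = sigma(n) - n, by trial division."""
    if n <= 1:
        return 0
    total, d = 1, 2
    while d * d <= n:
        if n % d == 0:
            total += d
            if d != n // d:
                total += n // d
        d += 1
    return total

def survives(start, bound=10**8, height=30):
    """True if the first `height` aliquot terms all stay below `bound`
    (or the sequence terminates first); False once any term exceeds it."""
    n = start
    for _ in range(height):
        n = aliquot_sum(n)
        if n <= 1:
            return True    # terminated: nothing left to cull
        if n > bound:
            return False   # escaped the bound -- cull this start
    return True

# e.g. keep only the survivors from a small block of starting values
block = [n for n in range(10**6, 10**6 + 100) if survives(n, bound=10**7)]
```

With trial division this is only practical for small bounds; a real run at 10^8 over millions of starts would want sieved divisor sums or fast factoring, but the filtering logic stays the same.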
[QUOTE=bchaffin;261631]I think it must be the former -- the DB can't contain all sequences up to 10[SUP]15[/SUP] to 30-40 digits, that would be petabytes of storage.
That's something to think about, actually. If we plan to scan tens of millions of sequences to a small size, we should consider the impact on the DB of uploading all the results. Maybe it would be better not to, or to only upload ones which are "interesting" by some definition.[/QUOTE]

I agree with bchaffin that the db must be working the sequence when it is queried. I tried 10^30+74. It took a few seconds to display; the message at the bottom of the page actually said:
[code]
factordb.com - 18,470 queries to generate this page (21.78 seconds)
[/code]
The db showed size 41, i114, 2 * 3 * 5^2 * c39.

Further, the db is advancing the sequence now, because the c39 became an unfactored composite and all the subsequent composites are now being added to the list and factored. They're all at the top due to their small size. I guess I unwittingly set about 100 new sequences in motion simply by querying the db. I will not involve the db in any more of these searches.
And AliWin did find a sequence that ends in a cycle (8128), verified by the db. :smile:
[URL="http://www.factordb.com/sequences.php?se=1&eff=2&aq=1000000000000168&action=last20&fr=0&to=100"]1000000000000168[/URL] Not a several-term cycle, but a cycle nonetheless...

Edit: But wait! There's another one! AliWin just turned up this one in close proximity: [URL="http://www.factordb.com/sequences.php?se=1&eff=2&aq=1000000000000196&action=last20&fr=0&to=100"]1000000000000196[/URL] It ends in the amicable pair 1184/1210...

This raises the question: is there somewhere that cycles should be catalogued, other than letting the db have them?