A Desperate appeal! (by Richard K. Guy)... deadline is September 30, 2016
This recently appeared on Wilfrid Keller's Fermat factoring status page [url]http://www.prothsearch.net/fermat.html[/url] :
[QUOTE]A Desperate appeal!! by Richard K. Guy

At the time of my 80th birthday, when John Conway and I were writing The Book of Numbers, John bet that there would be no further complete factorizations of Fermat numbers in the next 20 years. I bet him $20.00 that there would be. Now that my 100th birthday is only 3 years away, I'm beginning to get a bit worried! I'll even go so far as to split the $20.00 with anyone who completely factors F12 or F13 or any larger Fermat number!!

(Note: The actual deadline is September 30, 2016; please compare UPINT, 3rd edition, page 16.)[/QUOTE] So first of all, Happy Birthday to Professor Guy! Second of all, this looks like an easy chance for someone to pick up $10... All that is needed is to complete the factorization of one of the Fermat numbers. The ECM status page shows current effort on F12 through F14 at the 60-digit level, but of course, even if such a factor is found, the odds are still against the cofactor being prime, a few percent at best. P-1 is also a possible factoring method; SNFS is certainly beyond current capabilities. Beyond ECM and P-1, are there any other feasible factoring methods? I would like to help him out if possible. 
If I prove F33 prime, would that count as a complete factorization?:smile:

Eh... yes?
(of course, this was just a wild guess; how can we be sure?! After all, some of the prime numbers don't have any known factors [URL="http://mersenneforum.org/showthread.php?p=222764#post222764"]less than 30 bits[/URL]...) 
What's the preferred way to do ECM for F12? Prime95 stage 1 and GMP-ECM stage 2? It looks like the 260M level is almost complete, and this thread ought to help with that.
I think those folks with sufficient memory should consider 850M curves instead; it looks like those might need 1GB with Prime95, or ??? with GMP-ECM stage 2. 
[QUOTE=Prime95;354695]If I prove F33 prime, would that count as a complete factorization?:smile:[/QUOTE]
Certainly - can you get it done within three years? 
So here's a factoring method I've been kicking around in my head for a while, trying to figure out some way of making it more efficient. It appears to have complexity on the order of n[SUP]1/4[/SUP], so my estimates of the probability of it actually working on F12 are on the order of a googol to one, but it still is kind of neat, and might be well suited to efficient implementation on some sort of graphics card.
Euler observed that if one has two different representations of an integer as a sum of two squares, such as N = a[SUP]2[/SUP] + b[SUP]2[/SUP] = c[SUP]2[/SUP] + d[SUP]2[/SUP], then you can recover nontrivial factors of N by taking the GCD of N with a*c +/- b*d. In general, an odd integer N only has such representations when all factors congruent to 3 mod 4 occur only to even powers, and even then, it is difficult to find one such representation, let alone two. But Fermat numbers > F0 have all factors congruent to 1 mod 4, and one such representation already exists by their definition. So for example, F5 = (2[SUP]16[/SUP])[SUP]2[/SUP] + 1[SUP]2[/SUP], but if you know that it is also 62264[SUP]2[/SUP] + 20449[SUP]2[/SUP], this information is sufficient to find the factors.

We can search by some sort of variation of Fermat's method: check to see if F5 - n[SUP]2[/SUP] is a perfect square for all n starting with 65536 and decreasing n by 1 each time. Most candidates can be eliminated as non-squares by modular considerations. For example, any perfect square must be congruent to 0 or 1 mod 3; also congruent to 0, 1, or 4 mod 5; and congruent to 0, 1, 2, or 4 mod 7. Each prime will eliminate about half of the remaining candidates, on average, so sieving will leave a very small number of candidates that must be tested by seeing if they are equal to their integer square roots squared. This sieving process can be table-driven, and offers the possibility of searching a large number of candidates very quickly. Unfortunately, the search space is still huge, and the region searched efficiently, corresponding to n being fairly close to 65536, will still be a minuscule fraction of the entire search space for larger Fermat numbers.

So we enhance the search as follows: consider the set of all primes congruent to 1 mod 4, and include 2 as well. Now multiply F5 by various distinct products of these primes and look for representations as sums of squares. 
So we can work with 2*F5, or 5*F5, eventually getting to 10*F5 = 2*5*F5. The integer square root of 10*F5 is 207243, so we take this as our initial value of n and decrease by 1. This time, we don't have to search too far to discover that 10*F5 - 207241[SUP]2[/SUP] is a perfect square, so that 10*F5 = 207241[SUP]2[/SUP] + 917[SUP]2[/SUP]. Several representations of 10*F5 will already be known because of the known representations 5 = 2[SUP]2[/SUP] + 1[SUP]2[/SUP], 2 = 1[SUP]2[/SUP] + 1[SUP]2[/SUP], and F5 = 65536[SUP]2[/SUP] + 1[SUP]2[/SUP], but it will be observed that we have added a previously unknown representation, which is sufficient to crack F5 into its factors.

So what is the advantage of using these multipliers? For one thing, if N is a product of k distinct non-repeated prime factors, N will have 2[SUP]k-1[/SUP] representations as a sum of squares, so many factors in N give us more solutions to potentially find. F12 has 6 known prime factors and 1 known composite factor, so F12 has 64 known representations of the form a[SUP]2[/SUP] + b[SUP]2[/SUP] (where we assume a > b > 0), but given that the composite factor is known not to be a prime power, F12 must have at least 128 such representations. Any discovery of one more representation beyond the 64 known ones will give further information about the factors of F12. We can increase the number of representations enormously by multiplying F12 by a product of a large number of primes congruent to 1 mod 4, but unfortunately, each additional factor also increases the size of the search space. But there are many multipliers that can be used to search, and one might try these multipliers in some sort of systematic manner.

I'll have more to say later, but hopefully this at least describes the germ of the idea. It would be interesting to hear any comments. 
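For concreteness, here is a minimal Python sketch (my own illustration, not part of the original post) of Euler's factor-recovery step on F5, using the two representations given above:

```python
from math import gcd

# Recover the factors of F5 from its two sum-of-squares representations:
#   F5 = 65536^2 + 1^2 = 62264^2 + 20449^2
N = 2**32 + 1                   # F5 = 4294967297
a, b = 65536, 1
c, d = 62264, 20449
assert a*a + b*b == N and c*c + d*d == N

# (a*c + b*d)*(a*c - b*d) = a^2*c^2 - b^2*d^2 ≡ 0 (mod N), since
# a^2 ≡ -b^2 and c^2 ≡ -d^2 (mod N); so the gcd of N with one of the
# two combinations exposes a proper factor.
f = gcd(N, a*c - b*d)
if f in (1, N):
    f = gcd(N, a*c + b*d)
print(f, N // f)                # 641 6700417
```

The same gcd step works for any odd N admitting two essentially different representations as a sum of two squares.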
[QUOTE=philmoore;354727]So here's a factoring method I've been kicking around in my head for a while, trying to figure out some way of making it more efficient.
[/QUOTE] (1) You can't make it more efficient. (2) The method is well known and was used extensively by D.H. Lehmer with his photoelectric sieve, DLS-127 and DLS-157. [QUOTE] It appears to have complexity on the order of n[SUP]1/4[/SUP] so my estimates of the probability of it actually working on F12 are on the order of a googol to one, but it still is kind of neat, and might be well suited to efficient implementation on some sort of graphics card. [/QUOTE] It is pointless for numbers that we are doing today. 
I'm at work so can't read the whole thing, but this seems to be the paper by Lehmer:
[URL="http://www.ams.org/journals/mcom/1974-28-126/S0025-5718-1974-0342458-5/S0025-5718-1974-0342458-5.pdf"]http://www.ams.org/journals/mcom/1974-28-126/S0025-5718-1974-0342458-5/S0025-5718-1974-0342458-5.pdf[/URL] 
[quote]I'll even go so far as to split the $20.00 with anyone who
completely factors F12 or F13 or any larger Fermat number!![/quote] This is not a good deal. Guy will be $40 better off, not $20, if a factor is found. So he's only offering 25% of what the factor is worth to him, to the person who actually does the work. 
[QUOTE=Mr. P1;354779]This is not a good deal. Guy will be $40 better off, not $20, if a factor is found. So he's only offering 25% of what the factor is worth to him, to the person who actually does the work.[/QUOTE]
True, but who in their right mind would be motivated by the $ aspect of this? :unsure: The cost (in electricity consumption) of a complete factorization would dwarf the reward, whether it is $10 or $20. Or am I just not getting the joke? 
[QUOTE=bsquared;354780]Or am I just not getting the joke?[/QUOTE]I believe so.

Don't worry, Ben, you are not alone.
A significant portion of the people who run GIMPS in high hopes of getting the EFF's money one day don't get the EFF's joke. ;) 
The internet is a poor medium for deadpan.

[QUOTE=philmoore;354723]Certainly - can you get it done within three years?[/QUOTE]
:rofl::rofl::rofl: 
As a possible point of interest, [URL="http://www.mersenneforum.org/showpost.php?p=128485&postcount=10"]here[/URL] is a thread from several years ago discussing P-1 testing of F31 and extrapolations to F33.

The good news is that the estimated time to do a Pepin test on F33 is down from several millennia over a decade ago to maybe only a century and a half on current hardware! Keep those improvements coming...

[QUOTE=philmoore;354827]The good news is that the estimated time to do a Pepin test on F33 is down from several millennia over a decade ago to maybe only a century and a half on current hardware! Keep those improvements coming...[/QUOTE]
On how many cores? 
[QUOTE=henryzz;354829]On how many cores?[/QUOTE]
I was thinking of some sort of GPU implementation. My estimate is crude - I was just extrapolating from the reported fact that one of the newer Titans could LL a 100-million-digit candidate in about 2.5 months. 
[QUOTE=ATH;354760]I'm at work so can't read the whole thing, but this seems to be the paper by Lehmer:
[URL="http://www.ams.org/journals/mcom/1974-28-126/S0025-5718-1974-0342458-5/S0025-5718-1974-0342458-5.pdf"]http://www.ams.org/journals/mcom/1974-28-126/S0025-5718-1974-0342458-5/S0025-5718-1974-0342458-5.pdf[/URL][/QUOTE] Thanks for the link - I had read about the Delay Line Sieve from Hugh Williams' book, but hadn't actually read this paper. Looking it over reminded me that one can also use the quadratic form x[SUP]2[/SUP] + 2*y[SUP]2[/SUP] as well when looking for factors of Fermat numbers. Primes congruent to 1 or 3 mod 8 have such representations. 
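As a quick check of the x[SUP]2[/SUP] + 2*y[SUP]2[/SUP] form mentioned above, here is a brute-force Python sketch (my own illustration; the example prime 641, the small factor of F5, is 1 mod 8 and so must have such a representation):

```python
from math import isqrt

# Brute-force search for a representation p = x^2 + 2*y^2, which exists
# for every prime p congruent to 1 or 3 mod 8.
def rep_x2_2y2(p):
    for y in range(isqrt(p // 2) + 1):
        r = p - 2 * y * y
        x = isqrt(r)
        if x * x == r:
            return x, y
    return None

# 641 divides F5 and 641 ≡ 1 (mod 8):
print(rep_x2_2y2(641))   # (21, 10): 441 + 2*100 = 641
```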
[QUOTE=bsquared;354794]The internet is a poor medium for deadpan.[/QUOTE]
You're talking about Mr. P1, right? I laughed. 
[QUOTE=bsquared;354794]The internet is a poor medium for deadpan.[/QUOTE]
You seem to handle it well. 
[QUOTE=philmoore;354727]So here's a factoring method I've been kicking around in my head for a while, trying to figure out some way of making it more efficient. It appears to have complexity on the order of n[SUP]1/4[/SUP], so my estimates of the probability of it actually working on F12 are on the order of a googol to one, but it still is kind of neat, and might be well suited to efficient implementation on some sort of graphics card.
Euler observed that if one has two different representations of an integer as a sum of two squares, such as N = a[SUP]2[/SUP] + b[SUP]2[/SUP] = c[SUP]2[/SUP] + d[SUP]2[/SUP], then you can recover nontrivial factors of N by taking the GCD of N with a*c +/- b*d. In general, an odd integer N only has such representations when all factors congruent to 3 mod 4 occur only to even powers, and even then, it is difficult to find one such representation, let alone two. But Fermat numbers > F0 have all factors congruent to 1 mod 4, and one such representation already exists by their definition. So for example, F5 = (2[SUP]16[/SUP])[SUP]2[/SUP] + 1[SUP]2[/SUP], but if you know that it is also 62264[SUP]2[/SUP] + 20449[SUP]2[/SUP], this information is sufficient to find the factors. We can search by some sort of variation of Fermat's method: check to see if F5 - n[SUP]2[/SUP] is a perfect square for all n starting with 65536 and decreasing n by 1 each time. Most candidates can be eliminated as non-squares by modular considerations. For example, any perfect square must be congruent to 0 or 1 mod 3; also congruent to 0, 1, or 4 mod 5; and congruent to 0, 1, 2, or 4 mod 7. Each prime will eliminate about half of the remaining candidates, on average, so sieving will leave a very small number of candidates that must be tested by seeing if they are equal to their integer square roots squared. This sieving process can be table-driven, and offers the possibility of searching a large number of candidates very quickly. Unfortunately, the search space is still huge, and the region searched efficiently, corresponding to n being fairly close to 65536, will still be a minuscule fraction of the entire search space for larger Fermat numbers. So we enhance the search as follows: consider the set of all primes congruent to 1 mod 4, and include 2 as well. Now multiply F5 by various distinct products of these primes and look for representations as sums of squares. 
So we can work with 2*F5, or 5*F5, eventually getting to 10*F5 = 2*5*F5. The integer square root of 10*F5 is 207243, so we take this as our initial value of n and decrease by 1. This time, we don't have to search too far to discover that 10*F5 - 207241[SUP]2[/SUP] is a perfect square, so that 10*F5 = 207241[SUP]2[/SUP] + 917[SUP]2[/SUP]. Several representations of 10*F5 will already be known because of the known representations 5 = 2[SUP]2[/SUP] + 1[SUP]2[/SUP], 2 = 1[SUP]2[/SUP] + 1[SUP]2[/SUP], and F5 = 65536[SUP]2[/SUP] + 1[SUP]2[/SUP], but it will be observed that we have added a previously unknown representation, which is sufficient to crack F5 into its factors. So what is the advantage of using these multipliers? For one thing, if N is a product of k distinct non-repeated prime factors, N will have 2[SUP]k-1[/SUP] representations as a sum of squares, so many factors in N give us more solutions to potentially find. F12 has 6 known prime factors and 1 known composite factor, so F12 has 64 known representations of the form a[SUP]2[/SUP] + b[SUP]2[/SUP] (where we assume a > b > 0), but given that the composite factor is known not to be a prime power, F12 must have at least 128 such representations. Any discovery of one more representation beyond the 64 known ones will give further information about the factors of F12. We can increase the number of representations enormously by multiplying F12 by a product of a large number of primes congruent to 1 mod 4, but unfortunately, each additional factor also increases the size of the search space. But there are many multipliers that can be used to search, and one might try these multipliers in some sort of systematic manner. I'll have more to say later, but hopefully this at least describes the germ of the idea. It would be interesting to hear any comments.[/QUOTE] So, toying with this idea further... how hard would it be to make a script to do a false sieve for this? 
I imagine (no experience with programming beyond BASIC) that one could create a script to find all factors of numbers of the form F(X) - (sqrt(F(X)-1) - n) below, say, 1M, then either a) write the n value to postsieve.txt if all factors occur to an even power, then increase the n value by 1 and continue, or b) increase the n value by one and continue without writing.

@RDS: please don't bother lecturing me about the fact that this is not a feasible effort. If you care to explain how high the n value would be expected to reach before success as a function of X or F(X), however, feel free; I would enjoy seeing how that would be approximated.

Caution: F(X) is assumed to be the Xth Fermat number. 
[QUOTE=CRGreathouse;354846]You're talking about Mr. P1, right? I laughed.[/QUOTE]
:smile:. 
You all seem like you are sure that Mr. Guy will die in 3 years... :smile:
Who the heck cares about factors of Fermat numbers? I wish that none would be found till 2016, so that Mr. Guy loses the bet but personally pays the 20 bucks with his own hand after the expiration in 2016... 
[QUOTE=LaurV;354907]You all seem like you are sure that Mr. Guy will die in 3 years... :smile:[/QUOTE]
:no: I cannot believe I've just read that. Rude? Illogical? Both? 
the tyranny of evil men
Always wanted to find this speech, but it is too obscure for people to know. And it's a shame. A hundred times more people probably think in terms of, say, "Five Easy Pieces", you know, or Jules' "I'm-trying-real-hard-to-be-the-shepherd" from Pulp Fiction. (And by all means, go ahead thinking that we are the tyranny of evil men, complete with a Mr. 9mm in your face while reading from the Bible. "[URL="http://www.youtube.com/watch?v=vMN5uQhFRo"]But that sh!t ain't the truth either[/URL]." "Go.")
But here's an explanation that really matters, in my opinion: [YOUTUBE]bDtMrVsglD0[/YOUTUBE] 
[QUOTE=Batalov;354914]A hundred times more people probably think in terms of, say, "Five Easy Pieces"[/QUOTE]
Do you mean the Houston Comets? 
[QUOTE=c10ck3r;354861]So, toying with this idea further... how hard would it be to make a script to do a false sieve for this? I imagine (no experience with programming beyond BASIC) that one could create a script to find all factors of numbers of the form F(X) - (sqrt(F(X)-1) - n) below, say, 1M, then either a) write the n value to postsieve.txt if all factors occur to an even power, then increase the n value by 1 and continue, or b) increase the n value by one and continue without writing.
@RDS please, don't bother lecturing me about the fact that this is not a feasible effort. If you care to explain how high the n value would be expected to reach before success as a function of X or F(X), however, feel free, I would enjoy seeing how that would be approximated. Caution: F(X) is assumed to be the Xth Fermat number.[/QUOTE] Can someone at least confirm that the above equation would be right for finding these types of numbers (sums of squares for Fermat numbers)? 
[QUOTE=c10ck3r;354861]So, toying with this idea further... how hard would it be to make a script to do a false sieve for this? I imagine (no experience with programming beyond BASIC) that one could create a script to find all factors of numbers of the form F(X) - (sqrt(F(X)-1) - n) below, say, 1M, then either a) write the n value to postsieve.txt if all factors occur to an even power, then increase the n value by 1 and continue, or b) increase the n value by one and continue without writing.
Caution: F(X) is assumed to be the Xth Fermat number.[/QUOTE] We actually want to look at the expression F(X) - (sqrt(F(X)-1) - n)[SUP]2[/SUP] (note the exponent) and determine if it is a perfect square rather than find its factors. The value for n = 0 is trivially the square 1, so we start at n = 1 and increase it. In general, we can eliminate non-squares by sieving. Any square must be congruent to 0, 1, or 4 mod 8, for example, so any values of F(X) - (sqrt(F(X)-1) - n)[SUP]2[/SUP] which are congruent to 2, 3, 5, 6, or 7 mod 8 need not be further considered. This immediately eliminates all values of n congruent to 2 mod 4. Next we consider that any square must be congruent to 0 or 1 mod 3. This eliminates all values of n congruent to 1 mod 3. (Or we could say congruent to 0, 1, 4, or 7 mod 9 to eliminate more values of n.)

In general, we set up a table for each prime p that we use for sieving that allows us to infer from n's congruence class mod p whether the above expression is a non-square or possibly a square. After we sieve with a reasonable number of different p's (Lehmer used all p's less than or equal to 127 in his Delay Line Sieve, later increased to 157), we expect that the few remaining n values are good candidates for generating perfect squares in the above expression. We can then test those candidates for each remaining n by taking the integer square root of F(X) - (sqrt(F(X)-1) - n)[SUP]2[/SUP], squaring that, and seeing if it is equal to the original expression.

I have coded such a sieve in C, but to optimize it, the sieving routines should be done in assembly language. It would be a good learning exercise, though, to code it in a higher-level language. The integer square root function is only called occasionally, and the C routine in GMP or MPIR would probably work just fine. Or you could just write the candidate n values to a list (there shouldn't be very many) and check whether the resulting expression is a square using PARI or some other computational package. 
Try it with F5. You are sieving values of F5 - (65536 - n)[SUP]2[/SUP] = 1 + 131072*n - n[SUP]2[/SUP] by all small primes p to eliminate all values of n for which this expression is not a perfect square mod p. If you set up an array to check all n values from 1 to, say, 20,000, you should find that the only value of n which gives you a perfect square will be 3272 = 65536 - 62264.

There is more to my method than this: I suggested working in general on trying to find sums-of-squares representations of N, where N could be a Fermat number, or a composite factor of a Fermat number, or either of these two possibilities multiplied by a product of distinct prime factors which all have representations as sums of squares. Or N could be any other number whose factors are all equal to 1 mod 4. The possibility of trying many multipliers takes this from being a deterministic method of order n[SUP]1/2[/SUP] to a probabilistic method of order n[SUP]1/4[/SUP]. The method of sieving was around for centuries before the Lehmers used an electronic device to automate and speed up the sieving. In general, we sieve on N - (sqrt(N) - n)[SUP]2[/SUP], where sqrt(N) is the integer truncation of the square root of N. 
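The F5 exercise above can be sketched in a few lines of Python (a toy version of my own, not philmoore's actual C implementation): sieve n = 1..20,000 with per-prime residue tables, then square-test the survivors.

```python
from math import isqrt

# Sieve F5 - (65536 - n)^2 for n = 1..20000 with small primes, then
# square-test the few candidates that survive.
F5 = 2**32 + 1
LIMIT = 20000
SIEVE_PRIMES = [3, 5, 7, 11, 13, 17, 19, 23, 29, 31]

def value(n):
    # F5 - (65536 - n)^2 = 1 + 131072*n - n^2
    return F5 - (65536 - n) ** 2

survivors = [True] * (LIMIT + 1)
for p in SIEVE_PRIMES:
    squares_mod_p = {x * x % p for x in range(p)}
    # value(n) mod p depends only on n mod p, so one table per prime suffices
    allowed = [value(r) % p in squares_mod_p for r in range(p)]
    for n in range(1, LIMIT + 1):
        if not allowed[n % p]:
            survivors[n] = False

hits = [n for n in range(1, LIMIT + 1)
        if survivors[n] and isqrt(value(n)) ** 2 == value(n)]
print(hits)   # [3272], recovering F5 = 62264^2 + 20449^2
```

A real implementation would sieve with many more primes and test survivors only occasionally, exactly as described above; the table-per-prime structure is the part that maps naturally onto hardware sieves or GPUs.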
Hum. Factoring a 1133-digit number or primality testing a 2.58+ billion-digit number? Which will require more CPU time, assuming an "average" number of factors and difficulty for the 1133-digit number? My guess is that the factorization would require more CPU time in the long run, assuming that it was "average" with today's knowledge and technology.
BUT... 2 factors (pun intended) weigh in favor of doing the factorization:

1. It can be distributed across 1000s of computers. To the best of my knowledge, the primality test cannot. (At least not yet anyway.)
2. We might get lucky and factor it quickly. There would be no equivalent probability of "luckiness" for the primality test.

It would be interesting to see a project set up specifically to fully factor F12. Even with many computers running it, I think it would be unlikely to complete its task in 3 years. It would become the equivalent of a "Riesel Sieve" or Seventeen or Bust type project where the goal is a finite one but will likely take decades to finish. Personally, I find huge finite projects to be much more interesting than open-ended ones that have no purpose other than to just find primes until they get tired of searching for them. 
third factor: (in favor of factorization)
3. You may finish the primality test and find out that F33 is composite, therefore you are back at the starting point (edit: which would - most probably - be the case, because the general belief/heuristic is that all Fermat numbers are composite except the first 5). 
[QUOTE=gd_barnes;355031]It would be interesting to see a project set up specifically to fully factor F12.[/QUOTE]
Or make it a simple 3year challenge: "Save Richard Guy $20" project. Hmm, not catchy. How 'bout the "save Richard Guy from bankruptcy" project. 
[QUOTE=Prime95;355035] "Save Richard Guy $20" [/QUOTE]
What do you mean by that? If I were the lucky son of a gun to factor some Fm, then I [U]WILL[/U] take his $10, and I will [U]expressly[/U] ask him to send me a signed check... After 50 years that will be worth a fortune! :w00t: I will make my grandchildren happy and they won't need to work for all their lives... 
Clearly, Richard Guy should have bet no less than $1000. Who does he think he is, offering to split his original friendly bet of $20? I pass by 20-dollar bills every day without even bothering to pick them up. Chump change!

[QUOTE=Jayder;355040]Clearly, Richard Guy should have bet no less than $1000. Who does he think he is, offering to split his original friendly bet of $20? I pass by 20-dollar bills every day without even bothering to pick them up. Chump change![/QUOTE]Would a small amount from Erdős or Knuth have been chump change too?

[QUOTE=only_human;355042]Would a small amount from Erdős or Knuth have been chump change too?[/QUOTE]
If I had a $10 cheque (why is this underlined in red?) from either of those two, first I would put it in a frame over my bed, to look at it every evening and morning. Second, I would put it on eBay for bidding, starting from, say, half a million. Third, I would watch the bids going to 6 or 7 digits... Then fourth, I would say "sorry guys, it is not for sale, it was a joke"... (well, no character, as someone just tried to tell me... hehe :razz:) (edit: fifth: I would tell my boss "funk you! even Erdos/Knuth paid me 10 bucks!" :rofl:) 
[QUOTE=Prime95;355035]"Save Richard Guy $20" project.[/QUOTE]
:explode: 
It is anecdotal that Knuth has a fixed price for valid errata for his books and that most of his checks (for whatever amount, $2.56 or something?) are never cached. One must have quite a limited imagination to think that people actually work to get [I]that money[/I], literally. For some people it would have paid for 2 or say 3 minutes of their daytime job.

[QUOTE=Batalov;355054]It is anecdotal that Knuth has a fixed price for valid errata for his books and that most of his checks (for whatever amount, $2.56 or something?) are never cached. One must have quite limited imagination that people actually work to get [I]that money[/I], literally. For some people it would have paid for 2 or say 3 minutes of their daytime job.[/QUOTE]Knuth always used to pay $2.56 for each reported bug and 32¢ for each accepted enhancement. He gave up doing so some years back because so many people posted images of his checks and fraudsters tried to use the bank details.
My check for $2.88 from DEK is framed and hanging on the wall near the reproductions of the RSA-129 and RSA-140 counterparts. Paul (Actually, most checks from Knuth are cached AFAIK. Very few were cashed :smile: ) 
Of course, Serge. That is why I posed my question because I was trying to understand if the poster really thought that commensurate pay was intended. Humor falls flat so often on the attenuated media.
Richard Guy, of multiple editions of Unsolved Problems in Number Theory, would be a particularly good example of the rewards not intended to be drudge-work remuneration. Here is a math problem that offered a goose as a reward. [URL="http://mathoverflow.net/questions/66084/open-problems-with-monetary-rewards"]Open problems with monetary rewards[/URL] [noparse][mathoverflow.net][/noparse] [QUOTE]You may also be interested in the following picture of Mazur awarding Per Enflo a live goose as promised for the solution of the approximation problem. [url]http://en.wikipedia.org/wiki/File:MazurGes.jpg[/url][/QUOTE] [url]http://en.wikipedia.org/wiki/Per_Enflo[/url] [QUOTE]In 1972, Per Enflo constructed a separable Banach space that lacks the approximation property and a Schauder basis. In 1972, Mazur awarded a live goose to Enflo in a ceremony at the Stefan Banach Center in Warsaw; the "goose reward" ceremony was broadcast throughout Poland.[/QUOTE] [url]http://en.wikipedia.org/wiki/Scottish_cafe[/url] 
If it wasn't clear, I was being sarcastic. Actually, I was poking fun at the post before mine.

[URL="http://xkcd.com/816/"]how to get rich[/URL]...

[QUOTE=LaurV;355066][URL="http://xkcd.com/816/"]how to get rich[/URL]...[/QUOTE]
Would a moderator please move this thread to the crank subforum? It has become devoid of information content. 
[QUOTE=philmoore;355011]We actually want to look at the expression F(X) - (sqrt(F(X)-1) - n)[SUP]2[/SUP] (note the exponent) and determine if it is a perfect square rather than find its factors. The value for n = 0 is trivially the square 1, so we start at n = 1 and increase it. In general, we can eliminate non-squares by sieving. Any square must be congruent to 0, 1, or 4 mod 8, for example, so any values of F(X) - (sqrt(F(X)-1) - n)[SUP]2[/SUP] which are congruent to 2, 3, 5, 6, or 7 mod 8 need not be further considered. This immediately eliminates all values of n congruent to 2 mod 4. Next we consider that any square must be congruent to 0 or 1 mod 3. This eliminates all values of n congruent to 1 mod 3. (Or we could say congruent to 0, 1, 4, or 7 mod 9 to eliminate more values of n.) In general, we set up a table for each prime p that we use for sieving that allows us to infer from n's congruence class mod p whether the above expression is a non-square or possibly a square. After we sieve with a reasonable number of different p's (Lehmer used all p's less than or equal to 127 in his Delay Line Sieve, later increased to 157), we expect that the few remaining n values are good candidates for generating perfect squares in the above expression. We can then test those candidates for each remaining n by taking the integer square root of F(X) - (sqrt(F(X)-1) - n)[SUP]2[/SUP], squaring that, and seeing if it is equal to the original expression.
I have coded such a sieve in C, but to optimize it, the sieving routines should be done in assembly language. It would be a good learning exercise, though, to code it in a higher-level language. The integer square root function is only called occasionally, and the C routine in GMP or MPIR would probably work just fine. Or you could just write the candidate n values to a list (there shouldn't be very many) and check whether the resulting expression is a square using PARI or some other computational package. Try it with F5. You are sieving values of F5 - (65536 - n)[SUP]2[/SUP] = 1 + 131072*n - n[SUP]2[/SUP] by all small primes p to eliminate all values of n for which this expression is not a perfect square mod p. If you set up an array to check all n values from 1 to, say, 20,000, you should find that the only value of n which gives you a perfect square will be 3272 = 65536 - 62264. There is more to my method than this: I suggested working in general on trying to find sums-of-squares representations of N, where N could be a Fermat number, or a composite factor of a Fermat number, or either of these two possibilities multiplied by a product of distinct prime factors which all have representations as sums of squares. Or N could be any other number whose factors are all equal to 1 mod 4. The possibility of trying many multipliers takes this from being a deterministic method of order n[SUP]1/2[/SUP] to a probabilistic method of order n[SUP]1/4[/SUP]. The method of sieving was around for centuries before the Lehmers used an electronic device to automate and speed up the sieving. In general, we sieve on N - (sqrt(N) - n)[SUP]2[/SUP], where sqrt(N) is the integer truncation of the square root of N.[/QUOTE] This method is totally dominated by faster methods. It is pointless. 
Offtopic.
Mr. Silverman, R.D., can you update your profile at Scopus? Thank you in advance, Carlos 
Back to the original request, how hard would it be to test the cofactors of F25, F26, etc for primality? If any are prime then the Fermat number would be fully factored.
Chris 
They are likely already tested. It is far from hard - a few days' work. Run them and send the residues to W. Keller. (W. Keller for some reason is resistant to adding extra asterisks to some tested cofactors, above a certain limit set by him. Maybe he wants to have double- and triple-checked residues.)

[QUOTE=R.D. Silverman;355074]This method is totally dominated by faster methods. It is pointless.[/QUOTE]
I suggested this as a learning exercise. Learning is never pointless, IMHO. 
[QUOTE=pinhodecarlos;355079]Offtopic.
Mr. Silverman, R.D., can you update your profile at Scopus? Thank you in advance, Carlos[/QUOTE] My email address has been updated. 
[QUOTE=chris2be8;355094]Back to the original request, how hard would it be to test the cofactors of F25, F26, etc for primality? If any are prime then the Fermat number would be fully factored.
Chris[/QUOTE] F25, F26, and F27 have been tested. The cofactors are composite. 
[QUOTE=chris2be8;355094]Back to the original request, how hard would it be to test the cofactors of F25, F26, etc for primality? If any are prime then the Fermat number would be fully factored.
Chris[/QUOTE] Someone correct me if I'm wrong but even if the cofactors were PRP, they could not be proven prime, which means that the Fermat number would not be "proven" as fully factored. I think that for Mr. Guy to win his bet, the only (un)reasonable chance is to fully factor one of F12/F13/F14. 
Although I wouldn't put any money on it myself, Guy could still win his bet if someone manages to build a quantum computer with enough qubits to run Shor's algorithm on F12. This, of course, also assumes that they could get the result and publish it before government goons swept in, shouted "national security", and confiscated their hardware.

[QUOTE=PBMcL;355237]Although I wouldn't put any money on it myself, Guy could still win his bet if someone manages to build a quantum computer with enough qubits to run Shor's algorithm on F12. This, of course, also assumes that they could get the result and publish it before government goons swept in, shouted "national security", and confiscated their hardware.[/QUOTE]
I think you took too many quaaludes last night. 
[QUOTE=gd_barnes;355197]Someone correct me if I'm wrong but even if the cofactors were PRP, they could not be proven prime, which means that the Fermat number would not be "proven" as fully factored.
I think that for Mr. Guy to win his bet, the only (un)reasonable chance is to fully factor one of F12/F13/F14.[/QUOTE] You are correct about the larger cofactors, even the cofactor of F17 is over 39000 digits, far beyond anything proven so far by ECPP. But I wouldn't consider the chance of completely factoring F12/F13/F14 to be completely unreasonable. If ECM turned up a factor of 65 or 70 digits of F12 by ECM, and no smaller factor had been missed, we would expect the cofactor to have around an 11% chance of being prime. For F13, the probability drops to about half that, and half that again for F14. Unfortunately, after searching to 60 digits, we can only expect about a 1 in 7 chance of finding another factor searching to 70 digits, so maybe we can consider our overall chance of success with F12 to be around 1.5%. 
Primality proofs were for some stretches of time divorced from factorizations, e.g. in the Cunningham project. It is likely that most people would agree that the factorizations of 2^13347311+1 and 2^13372531+1 are complete (some may not): they both are = 3 · "large PRP".
People are understandably much more strict when a primality proof is based on a PRP as a helper, e.g. the KP proof of (8·10^278117-1)/9 was definitely incomplete (before prp7334 was certified with Primo), though most would agree that the N-1 cofactor factorization was complete: Φ[SUB]13905[/SUB](10) = 83431 · 333721 · prp7334. 
[QUOTE=R.D. Silverman;355252]I think you took too many quaaludes last night.[/QUOTE]
I'm curious, Bob. What are you referring to? The quantum computer part (yes, exceedingly unlikely in the next 3 years) or the govt. goons part (breaking RSA might be something they'd want to keep to themselves.) Please elaborate. 
[QUOTE=PBMcL;355264]I'm curious, Bob. What are you referring to? The quantum computer part (yes, exceedingly unlikely in the next 3 years) or the govt. goons part (breaking RSA might be something they'd want to keep to themselves.) Please elaborate.[/QUOTE]
Both ideas are delusional. And you didn't discuss the gov't keeping an RSA break secret; you did discuss their seizing a quantum computer (this assumes that it was developed in the public domain). (1) If someone builds such a computer, the fact of its existence will not be secret. It will be in the public domain. And the Bernstein vs. State Dept. case makes it clear that the government can't suppress publishing. The mere fact that such a computer exists will kill RSA all by itself. (2) The government cannot seize private property except as a result of a criminal proceeding or under eminent domain. Neither applies. 
[QUOTE=R.D. Silverman;355270](2) The government can not seize private property except as a result of a criminal proceeding or under eminent domain. Neither applies.[/QUOTE]
Are you [I][U]absolutely[/U][/I] sure on these claims? 
(obliquely relevant to the few last posts)
In case anyone was watching: [URL="http://www.cbs.com/shows/elementary/video/EC87253785AE1FD4C2EF7A8CD60980F2/elementarysolveforx/"]yesterday's Elementary[/URL] made me chuckle a few times. It had to do with some mathematicians being randomly killed... Holmes got it! "They were getting close to solving the P vs NP problem." /gasp/ I will not give away the (rather obvious) conclusion.
It almost follows the scenario of the last few posts. (or not. See it. If anything, you will be amused.) 
[QUOTE=R.D. Silverman;355270](2) The government can not seize private property except as a result of
a criminal proceeding or under eminent domain. Neither applies.[/QUOTE] :missingteeth: They've done more for less. I don't know what cover story would be used, I don't know what possibly-secret interpretation of law would cover for it, but I'm quite sure that this would be attempted in the unlikely chance that someone in the 'outside world' made [i]ex nihilo[/i] a cryptographically-useful quantum computer. 
Okay, I have a "related" question insomuch as it pertains to the attempt to factor Fermat numbers: is using the go command in PFGW the best way to test for divisibility after a Proth prime has been found? If not, what is? Thanks!

Also, does anyone know how far F12, F13, and F14 have been P-1 tested, and/or does anyone have the .bu files for the deepest known test?

I noticed that PrimeNet gives ECMF jobs with B1=8e8 for F12 (at least when asked via Manual Testing). However, the statistics on [URL="http://mersenne.org/report_ECM/"]http://mersenne.org/report_ECM/[/URL] only have the curve count up to 26e7. Although I am not planning on devoting too many resources to this, I am curious to know how much ECM has been poured into F12 via PrimeNet. Is there a way to get the information (easily) from the server?

Try [url]http://mersenne.org/report_ECM/index_new.php[/url]
I'm not sure if the 360,000 curves required is correct. 
[QUOTE=Prime95;365060]Try [url]http://mersenne.org/report_ECM/index_new.php[/url]
I'm not sure if the 360,000 curves required is correct.[/QUOTE] Thank you! I see there were way more curves at B1=8e8 than I had expected. 360,000 seems a bit high for B1=8e8, but then again the memory requirement for something like B1=2e9 might be too high for the current average computer running ECM on these. I noticed that running stage 2 on mprime with B1=85e7 uses 551MB of memory (when mprime is allowed to have 4GB). I have not yet tried B1=2e9, nor running stage 2 on GMP-ECM. 
[QUOTE=rajula;365062]I see there were way more curves at B1=8e8 than I had expected.[/QUOTE]
Remember, the ECM report is a bit odd: the 800M column includes the work done in the 260M column. That is, the 112,000 curves at 260M are roughly equivalent to 36,000 curves at 800M. When the 260M column is marked "Done", the 800M column starts counting from 36,000. 
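A back-of-the-envelope check of that conversion, assuming (per the rule of thumb mentioned in this thread) that total work scales roughly with B1 times the curve count; this is only a first-order approximation, not the server's exact formula:

```python
# Convert completed curves at B1 = 260M into their rough 800M equivalent
done_260m = 112_000
equiv_800m = round(done_260m * 260_000_000 / 800_000_000)
print(equiv_800m)  # 36400, close to the ~36,000 curves credited at 800M
```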
[QUOTE=Prime95;365064] When the 260M column is marked "Done", the 800M column starts counting from 36,000.[/QUOTE]
This I did not know (or had forgotten). It explains the unexpectedly high count. 
Reviving an old thread from 2013, since the deadline is now almost exactly a month away.
From the first post in this thread: [QUOTE]A Desperate appeal!! by Richard K. Guy At the time of my 80th birthday, when John Conway and I were writing The Book of Numbers, John bet that there would be no further complete factorizations of Fermat numbers in the next 20 years. I bet him $20.00 that there would be. Now that my 100th birthday is only 3 years away, I'm beginning to get a bit worried! I'll even go so far as to split the $20.00 with anyone who completely factors F12 or F13 or any larger Fermat number!! (Note: The actual deadline is September 30, 2016; please compare UPINT, 3rd edition, page 16.)[/QUOTE] Nearly three years have passed, and it looks like Professor Guy's $20 is doomed. I think there are two methods to try to find factors of Fermat numbers. One is sort of the equivalent of trial factoring (searching ranges of N and k). The [URL="http://www.mersenneforum.org/forumdisplay.php?f=133"]FermatSearch subforum[/URL] has details for reserving ranges. I'm not sure what software is used for that. The other method is ECM testing, using mprime/Prime95. F12 is now at the stage where ECM testing is looking for factors in the range of about 65 digits (about 215 bits). Each curve takes about 3.1 hours on a 2.3 GHz Haswell architecture. F13 is now at the stage where ECM testing is looking for factors in the range of about 60 digits (about 200 bits). Each curve takes about 1.7 hours on a 2.3 GHz Haswell architecture. Finding a new factor would be interesting in its own right, albeit rather improbable since you might have to run tens of thousands of curves or even hundreds of thousands before you hit a factor at this level, if one even exists. And even then, saving Professor Guy from financial ruin would require the cofactor to be prime as well. 
Nevertheless, in case anyone wants to make a token effort, here are some lines you could add to your worktodo.txt (this requires stopping the program first; or, to avoid having to do that, just create a worktodo.add file, which the program will automatically read and append to worktodo.txt the next time it does a disk write). There are 10 curves for F12 (2[SUP]4096[/SUP]+1) and 18 curves for F13 (2[SUP]8192[/SUP]+1), so maybe 30 hours of CPU time for each on a modern Haswell machine. F12 needs 535MB of memory in stage two, and F13 needs 579MB.
[CODE]
ECM2=1,2,4096,1,800000000,80000000000,10,"114689,26017793,63766529,190274191361,1256132134125569,568630647535356955169033410940867804839360742060818433"
ECM2=1,2,8192,1,260000000,26000000000,18,"2710954639361,2663848877152141313,3603109844542291969,319546020820551643220672513"
[/CODE]
Here is the [URL="http://www.mersenne.org/report_ecm/?txt=0&ecm_lo=1000&ecm_hi=2000&ecmnof_lo=1000&ecmnof_hi=2000"]ECM status report[/URL] from PrimeNet. 
[QUOTE=GP2;440967]Reviving an old thread from 2013, since the deadline is now almost exactly a month away.
Here is the [URL="http://www.mersenne.org/report_ecm/?txt=0&ecm_lo=1000&ecm_hi=2000&ecmnof_lo=1000&ecmnof_hi=2000"]ECM status report[/URL] from PrimeNet.[/QUOTE] I'm in :smile: 
Luigi, can you keep the desperate appeal alive by reposting it on fermatsearch.org ?
Now, both W.Keller's site and prothsearch joined the choir invisible. Shuffled off their mortal coil, ran down the curtain. Their metabolic processes are now 'istory! 
[QUOTE=Batalov;440996]Luigi, can you keep the desperate appeal alive by reposting it on fermatsearch.org ?
Now, both W.Keller's site and prothsearch joined the choir invisible. Shuffled off their mortal coil, ran down the curtain. Their metabolic processes are now 'istory![/QUOTE] Done! :smile: 
Maybe contact yoyo. What would be the ecm standalone client flags?
EDIT: I want to help, either using Prime95 or using ecm. For Prime95 I only want to use 2 of the 4 cores available on my laptop, but I need someone to support me in getting worktodo.txt or prime.ini set up (not sure about the file names). For ecm, what command lines do I need to run some curves on those two Fermat numbers? 
[QUOTE=pinhodecarlos;441054]Maybe contact yoyo. What would be the ecm standalone client flags?[/QUOTE]
I already contacted them. Unfortunately, the parameters for the ECM search on Fermat numbers are out of their reach. 
[QUOTE=ET_;441055]I already contacted them. Unfortunately, the parameters for the ECM search on Fermat numbers are out of their reach.[/QUOTE]
Share with me those parameters and I will try to bring some support from SETI.USA team and also from myself. 
Here's my doubt.
[B]worktodo.txt content[/B]
[CODE]
[Worker #1]
ECM2=1,2,4096,1,800000000,80000000000,1000,"114689,26017793,63766529,190274191361,1256132134125569,568630647535356955169033410940867804839360742060818433"
[Worker #2]
ECM2=1,2,4096,1,800000000,80000000000,1000,"114689,26017793,63766529,190274191361,1256132134125569,568630647535356955169033410940867804839360742060818433"
[/CODE]
[B]local.txt content[/B]
[CODE]
OldCpuSpeed=2864
NewCpuSpeedCount=0
NewCpuSpeed=0
RollingAverage=1000
RollingAverageIsFromV27=1
ComputerGUID=ec875508223b69f7b24450ba96978009
[B]WorkerThreads=2[/B]
Affinity=100
[B]ThreadsPerTest=2[/B]
[/CODE]
On the local.txt flags in bold: I want to run one worker per core. How do I set it up? I'd appreciate some support. Carlos 
[QUOTE=pinhodecarlos;441057]Here's my doubt.
On the local.txt content flag in bold, I want to run one worker per core. How do I set it up? Appreciated some support.[/QUOTE]
[CODE]
WorkerThreads=[B]<<number of cores you have>>[/B]
ThreadsPerTest=[B]1[/B]
[/CODE]
I think it might be ever so slightly more efficient to use a B2/B1 multiplier of 200, since that gives roughly equal times for stage 1 & 2 (which should be optimal according to RDS).
Anyone care to compute the expected curves for both configurations and see whether it makes any difference? 
[QUOTE=axn;441061]I think it might be ever so slightly more efficient to use a B2/B1 multiplier of 200, since that gives roughly equal times for stage 1 & 2 (which should be optimal according to RDS).
Anyone care to compute the expected curves for both configuration and see whether it makes any difference?[/QUOTE] I suppose the GIMPS manual submission page should do the math for you, crediting the appropriate number of curves for a definite B1... Anyway, I will gladly wait for some more tests. 
[QUOTE=pinhodecarlos;441056]Share with me those parameters and I will try to bring some support from SETI.USA team and also from myself.[/QUOTE]
When I set up the estimates for F12-F29, Yoyo answered that such workunits were much too heavy for them. But if we limit our search to F12 and F13 (no more than half a GB of RAM and 6 hours per WU), I think we may receive some help from other teams. 
[QUOTE=ET_;441071]I suppose the GIMPS manual submission page should do the math for you, crediting the appropriate number of curves for a definite B1... Anyway, I will gladly wait for some more tests.[/QUOTE]
There is no need to use the manual submission page, Fermat results are reported to PrimeNet in the usual way. 
[QUOTE=GP2;441104]There is no need to use the manual submission page, Fermat results are reported to PrimeNet in the usual way.[/QUOTE]
Correct. I used manual submission to pinpoint the differences when I worked on different B values :redface: 
So we currently have
F12: 57,363 of 360,000 for 65 digits
F13: 82,193 of 112,000 for 60 digits
Now we probably can take F13 to the next level, and start on 65 digits. But what happens when we get one or both to the end of the 65-digit level without finding a factor? 
[QUOTE=Joe O;441178]So we currently have
F12: 57,363 of 360,000 for 65 digits F13: 82,193 of 112,000 for 60 digits Now we probably can take F13 to the next level, and start on 65 digits. But what happens when we get one or both to the end of the 65 digits level without finding a factor?[/QUOTE] We start on the 70-digit level. These numbers need the >100-digit level before NFS; we don't stand a chance of progressing beyond ECM any time soon without a new algorithm. 
[QUOTE=henryzz;441187]We start on the 70 digits level. These numbers need the >100 digit level before NFS we don't stand a chance of progressing beyond ECM any time soon without a new algorithm.[/QUOTE]
But what is the "official" B1 value for the 70-digit level? The [URL="http://www.mersenne.org/report_ecm"]ECM report page[/URL] doesn't show it. Also, are we certain that the mprime program is able to go that high? 
[QUOTE=GP2;441189]But what is the "official" B1 value for the 70 digits level? the [URL="http://www.mersenne.org/report_ecm"]ECM report page[/URL] doesn't show it.
Also, are we certain that the mprime program is able to go that high?[/QUOTE] The ECM report page used to say "B1=110,000,000" some years ago. Then it was modified to "B1 = 260,000,000". Now it is at "B1 = 800,000,000". (BTW, I never understood why it's B1 = 800M instead of 850M as proposed by the GMP-ECM gurus.) So I expect George (or someone else for him) will add the new range as soon as it becomes available :smile: 
[QUOTE=ET_;441191]The ECM report page used to say "B1=110,000,000" some years ago.
Then it was modified to "B1 = 260,000,000". Now it is at "B1 = 800,000,000" (BTW, I never understood why it's B1 = 800M instead of 850M as proposed by the GMP-ECM gurus). So I expect George (or someone else for him) will add the new range as soon as it becomes available :smile:[/QUOTE] The bounds are potentially different due to B2=100*B1, which is much smaller than GMP-ECM would use. 
[QUOTE=henryzz;441193]The bounds are potentially different due to B2=100*B1, which is much smaller than GMP-ECM would use.[/QUOTE]
That's why I expected a bigger B1, not a smaller one... 
[QUOTE=ET_;441208]That's why I expected a bigger B1, not a smaller one...[/QUOTE]The work curve has a very flat minimum. It doesn't matter very much whether you run more curves at a lower B1 or fewer at a higher one, as long as you don't push it too far. To an excellent approximation, the product B1*#curves is the figure of merit.

[QUOTE=xilman;441212]The work curve has a very flat minimum. It doesn't matter very much whether you run more curves at a lower B1 or fewer at a higher one, as long as you don't push it too far. To an excellent approximation, the product B1*#curves is the figure of merit.[/QUOTE]
Does the above apply only to stage one? What about the B2 bound? For P-1, I think it normally doesn't find factors beyond the bound B2, except occasionally via the Brent-Suyama extension. Does ECM routinely find factors beyond the B2 bound? What is the effect of increasing the B2 bound? Does it merely increase the probability of finding a factor within the B2 bound? 
I don't think that it will be possible to factor completely any more Fermat numbers with the current technology.
Either computational power needs to be increased (by using multiple computers, or by faster microprocessors), or improvements within factoring algorithms need to be made, or quantum computers running Shor's algorithm need to be built out effectively and efficiently (improvements on the Integer Factorization Problem go hand in hand with improvements on the Discrete Logarithm Problem). If 2,1024+ had not dropped off a small 40-digit factor, it would only be getting factored around now: 2,1039-, the first kilobit SNFS factorization, was done as recently as Monday, 21 May 2007. It was also very lucky that the 564-digit cofactor of 2,2048+ turned out to be prime, or it too would have been infeasible, computationally out of reach, even now; fortunately the penultimate prime factor of 2,2048+ was a relatively small number rather than a large one! 
[QUOTE=Raman;442973]I don't think that it will be possible to factor completely any more Fermat numbers with the current technology.
[...][/QUOTE] In reality it is just a matter of luck. We find a few Fermat factors each year. Any one of these could leave a PRP cofactor. This happens for Mersenne numbers. The issue is that the Fermat numbers get bigger much more quickly and fewer factors are found. I don't know whether the cofactor has to be proven prime here. That would limit us to rather a small number of candidates. 
I ran two curves on F25 and Prime95 reported on 19 September, but the counter on the ECM Report page has not incremented from 450.
[CODE]
[Mon Sep 19 18:41:52 2016]
UID: richs/Rich__Laptop, F25 completed 2 ECM curves, B1=1000000, B2=1000000, We4: 64119540, AID: B36AA7689965395BB40858081FC4****
[/CODE]
What did I do wrong? 
Looks like there was no stage 2 run. Two curves at B1=1M, B2=1M are not enough to count as one curve at B1=1M, B2=100M.

I'm still pursuing a particular approach to a (new) completed Fermat number factorization but the solution timeline is now tenuous. Is there any optimism, probabilistically speaking, of obtaining a factorization via conventional approaches?

Ok. Asked and answered.

[QUOTE=jwaltos;443701]I'm still pursuing a particular approach to a (new) completed Fermat number factorization but the solution timeline is now tenuous. Is there any optimism, probabilistically speaking, of obtaining a factorization via conventional approaches?[/QUOTE]
Monitor the [URL="http://www.mersenne.org/report_ecm/?txt=0&ecm_lo=2&ecm_hi=1000&ecmnof_lo=2&ecmnof_hi=1000"]ECM progress page[/URL] and you'll see the number of curves tested increasing by a few (fewer than ten) on most days. However, it will probably need tens of thousands of curves, or even hundreds of thousands... so it's liable to take a while, and that's a necessary but not necessarily sufficient condition. On a 2.4 GHz Haswell machine, each curve for F12 takes maybe 3 hours and each curve for F13 takes maybe an hour (using mprime). 
In a few hours I will finish a run of 10 curves for F12. Will Prime95 automatically submit the results to the server even though I didn't set up my client ID on it, or should I manually submit them via the ''MANUAL TESTING/RESULTS'' submission form on the Prime95 webpage?
Also, how can I set stage 2 to use 4GB in the configuration files? I'm running two cores on a 16GB machine. Thank you in advance. 
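For the memory question, a sketch rather than an authoritative answer: mprime/Prime95 caps stage 2 memory via the Memory setting (a value in megabytes) in local.txt, which can also be reached through the program's options menu, where it asks for daytime/nighttime available memory. For a 4GB cap:

```
Memory=4096
```

I believe the cap applies to the program as a whole, so two workers running stage 2 at the same time would share it.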