
petrw1 2017-07-26 04:45

Thinking out loud about getting under 20M unfactored exponents

Breaking it down: if each 100M range has fewer than 2M unfactored exponents, we have the desired end result.
Similarly if each 10M range has fewer than 200K unfactored...
or each 1M range has fewer than 20K unfactored...
or each 100K range has fewer than 2,000 unfactored.
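The per-range budgets above are just the 20M target split evenly across sub-ranges; a quick check of that arithmetic (assuming exponents run up to roughly 1000M):

```python
# Budget check: to get under 20M unfactored exponents overall,
# split the target evenly across sub-ranges of each width.
TARGET = 20_000_000          # desired total unfactored count
EXPONENT_SPAN_M = 1000       # exponents run up to ~1000M (assumption)

for width_m, label in [(100, "100M"), (10, "10M"), (1, "1M"), (0.1, "100K")]:
    n_ranges = EXPONENT_SPAN_M / width_m     # sub-ranges of this width
    budget = TARGET / n_ranges               # allowed unfactored per sub-range
    print(f"each {label} range: fewer than {budget:,.0f} unfactored")
```

This reproduces the 2M / 200K / 20K / 2,000 thresholds quoted above.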

So I did some Excel ciphering, looking at:
- how many more factors are required in each range
- how many exponents need to be TF'd at the current bit level to get there (it could take several bit levels to complete)
- how many GhzDays each assignment would take
- I stopped at the 59M range, figuring that current GPU TF bit levels will (most of the time) factor deeply enough to get below the limits of interest here.

I did this for the 10M, 1M and 100K ranges.
Then I added it all up and came up with very roughly 250M GhzDays of TF with some ranges requiring up to 10 more bit levels of TF. WOW.

For perspective: my GPUs, producing 1,000 GhzDays per day, would need 250K days, or about 685 years.

Oh dear; that's way more than I had expected.
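A quick sanity check on that arithmetic, plus the reason "up to 10 more bit levels" hurts so much (each TF bit level roughly doubles the cost of the previous one):

```python
total_ghz_days = 250_000_000   # the post's rough total TF estimate
daily_rate = 1_000             # GhzDays per day from the author's GPUs

days = total_ghz_days / daily_rate
years = days / 365
print(f"{days:,.0f} days, about {years:,.0f} years")  # 250,000 days, about 685 years

# TF cost roughly doubles per bit level, so on a range needing 10 more
# levels, the final level alone is ~2^10 = 1024x the cost of the next one.
print(2 ** 10)  # 1024
```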

Note: I only considered TF.
I understand that in some (many?) cases ECM (on lower exponents) and P-1 could find factors much quicker.

In either case it looks like this will be a very far off milestone.
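Since P-1 comes up here: a toy stage-1 P-1 on a tiny Mersenne number shows why it can find factors far more cheaply than TF. This is a minimal sketch (base 3, naive smooth exponent, illustrative bounds), not what Prime95 or GMP-ECM actually do:

```python
from math import gcd

def primes_upto(limit):
    """Sieve of Eratosthenes."""
    sieve = bytearray([1]) * (limit + 1)
    sieve[0:2] = b"\x00\x00"
    for i in range(2, int(limit ** 0.5) + 1):
        if sieve[i]:
            sieve[i * i :: i] = bytearray(len(range(i * i, limit + 1, i)))
    return [i for i in range(2, limit + 1) if sieve[i]]

def pminus1_stage1(p, b1):
    """Toy P-1 stage 1 on M_p = 2^p - 1.

    Finds a prime factor q when q - 1 is b1-smooth.  Any factor of M_p
    has the form q = 2*k*p + 1, so the exponent gets 2*p for free.
    """
    n = (1 << p) - 1
    x = pow(3, 2 * p, n)           # fold in the guaranteed 2*p | q - 1
    for q in primes_upto(b1):
        qe = q
        while qe * q <= b1:        # raise each prime to its max power <= b1
            qe *= q
        x = pow(x, qe, n)
    g = gcd(x - 1, n)
    return g if 1 < g < n else None

# M_11 = 2047 = 23 * 89.  Since 23 - 1 = 2 * 11, even B1 = 2 finds it:
print(pminus1_stage1(11, 2))       # 23
```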

=== Process exponents whose current B1=B2 first; then those with the lowest current B1 & B2.
=== Even where B2>B1, the current bounds are mostly quite low and factors are plentiful.
*** I've updated the suggested B1 based on version 30.8. B2 will be about 200x B1.

Range ToGo B1=B2 TFBits Owner
[B]petrw1 is block reserving these ranges for P-1 over the next 4 months as I will be away.
=== end of block[/B]

6.0 25 0 72 takahashi
7.1 65 0 72 LaurV
7.3 65 0 72 Kruoli
7.7 60 0 72 Kruoli
7.8 52 0 72 Kruoli
8.1 55 0 72 Flaukrotist
8.5 10 0 72 Firejuggler
8.7 73 0 72 linament
9.5 39 0 72 Tha
9.9 96 0 72 Tha
10.4 111 0 72 Luminescence
10.5 26 0 72 ZhangRc
11.7 59 0 72 Chris
12.1 24 0 72 Lycorn
13.2 16 0 72 ZhangRc
14.5 20 0 72 Chris
15.5 140 0 72 Denial40 P-1 ... would like some TF help
17.0 122 0 73 Chris
17.7 53 0 72 Masser
17.9 61 0 73.8 Chris
19.5 73 0 72 Kruoli TF & axn P-1

20.2 4 14 74 petrw1
20.6 13 484 74 petrw1
20.8 34 504 74.2 petrw1 TF75 (needs help) and petrw1 P-1
21.7 25 819 74 Chris
21.8 28 679 74.7 Yves TF75 and petrw1 P-1
22.3 55 0 75 petrw1
25.3 48 766 75 Anton Repko
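For anyone picking bounds for these reservations, a trivial helper mirroring the note above that version 30.8's chosen B2 comes out around 200x B1. The real ratio varies with available stage-2 memory, and the sample B1 value is hypothetical:

```python
def rough_b2(b1: int, ratio: int = 200) -> int:
    """Rule-of-thumb B2 for version 30.8 bounds: roughly 200x B1.
    The actual ratio depends on stage-2 memory; this only mirrors the post."""
    return ratio * b1

print(rough_b2(500_000))   # 100,000,000
```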

0PolarBearsHere 2017-07-26 07:48

It just means we need more GPUs.
For instance if we can get 1000 high end GPUs on it, we could get it done in under a year based on your maths. We just need to find an organisation with a spare 800K USD who had a sudden urge to generously donate GPUs to anyone that requests one.

VictordeHolland 2017-07-26 10:43

And what would this accomplish?

petrw1 2017-07-26 16:03

[QUOTE=VictordeHolland;464195]And what would this accomplish?[/QUOTE]

Absolutely nothing of consequence.
Nothing more than another milestone of interest to some.

S485122 2017-07-26 19:48

If your best tool is a factoring machine, you view everything as an entity to be factored. :-)


chalsall 2017-07-26 20:03

[QUOTE=S485122;464237]If your best tool is a factoring machine, you view everything as an entity to be factored. :-)[/QUOTE]

Just to reflect, Jacob... Sometimes it is worth the effort to think about what other people are thinking about...

In addition to the Phillips, are you familiar with the Robertson? The hex?

I have actually watched people slam screws into wood with a hammer, because the Phillips screws' heads had been stripped by a screwdriver that was too small.

I actually learned some new words (containing many symbols, including (!*%$@***!!!)) from men who should have understood the simplicity of the situation.

For what that is worth....

CRGreathouse 2017-07-26 20:35

[QUOTE=VictordeHolland;464195]And what would this accomplish?[/QUOTE]

I'm not sure what the OP has in mind, but I know that full factorizations of small Mersenne numbers are very useful. For example, they greatly speed up the [url=]non-sqrt-smooth part[/url] (which dominates computationally) of [url=]Feitsma's algorithm[/url] for listing 2-pseudoprimes. I've heard interest in extending his work beyond 2^64 so this isn't just academic.

As for finding individual factors, I don't know... I guess it just gives simpler/shorter certificates of compositeness.

Gordon 2017-07-26 23:01

[QUOTE=VictordeHolland;464195]And what would this accomplish?[/QUOTE]

because they are there and because we can :-)

science_man_88 2017-07-26 23:25

[QUOTE=VictordeHolland;464195]And what would this accomplish?[/QUOTE]

if done high enough, in theory, it could deplete the candidate factors for larger Mersenne numbers a bit.

chalsall 2017-07-26 23:52

[QUOTE=science_man_88;464251]if done high enough, in theory, it could deplete the candidate factors for larger Mersenne numbers a bit.[/QUOTE]

Yeah... In theory....
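There is a concrete reason for the skepticism: a prime q dividing 2^p - 1 (p prime) has multiplicative order exactly p modulo q, so q divides M_p for exactly one prime exponent. Finding it removes essentially nothing from the candidate pool for other Mersenne numbers. A small check:

```python
def order_of_2(q):
    """Multiplicative order of 2 modulo the odd prime q (naive loop)."""
    k, x = 1, 2 % q
    while x != 1:
        x = (x * 2) % q
        k += 1
    return k

# 23 divides M_11 = 2^11 - 1; the order of 2 mod 23 is 11, so 23 divides
# no other 2^p - 1 with p prime (the order must equal the prime exponent).
assert (2 ** 11 - 1) % 23 == 0
print(order_of_2(23))   # 11
```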

storm5510 2017-07-27 04:40

I believe just about everyone here recognizes the image I have attached. This ends at 2[SUP]80[/SUP]. I suppose some here could comfortably TF to this level in a reasonable period of time. Of course, I do not know what most would consider "reasonable."

The last I heard, a computer "generation" was in the area of 18 months. It is probably less now. It would take many generations of tech growth to get to the level the OP was writing about.

[U]Point[/U]: Let us do now what needs to be done now, and not think about the future.

