Factoring Question
Hi
If I'm trial factoring a particular number (about M400000000) with Prime95, it takes my computer approximately the following time to complete a range:

2^60 to 2^61: 30 secs
2^61 to 2^62: 1 min (as expected, time is doubled)
2^62 to 2^63: 3 mins 30 secs! (much more than doubled time...)

Is this normal? Does anyone know why this happens? I thought the time would just double with every bit...

Thanks for your answers
Rde
The number of factor candidates to check is approximately doubled with each bit, but the time your computer takes isn't quite so simple. You can run a benchmark (Options > Benchmark) to see how long your computer takes at the various factor sizes, which should help you make sense of why your computer takes more than twice as long for that transition.
I assume from those numbers that you're using a 32-bit OS.
[QUOTE=Rde;176968]Is this normal? Does anyone know why this happens?[/QUOTE] 2^63 to 2^64 fits in two words, while 2^64 to 2^65 takes three. From that effect alone I'd expect the time to be 3 minutes rather than 2. Further, operations on doublewords are usually built in while larger ones must be programmed, making them yet slower.
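To make "must be programmed" concrete, here's a minimal sketch in C (the type and function names are mine, purely illustrative, not Prime95's actual code): adding two 64-bit values held as pairs of 32-bit words takes two adds plus an explicit carry check, where a native 64-bit add would be a single register operation.

```c
#include <stdint.h>

/* Illustrative only: a 64-bit value held as two 32-bit words. */
typedef struct { uint32_t lo, hi; } u64_words;

u64_words add64_as_words(u64_words a, u64_words b) {
    u64_words r;
    r.lo = a.lo + b.lo;
    /* carry out of the low word if the sum wrapped around */
    uint32_t carry = (r.lo < a.lo) ? 1u : 0u;
    r.hi = a.hi + b.hi + carry;
    return r;
}
```

A hardware doubleword add does the same thing in one instruction (or an add/adc pair), which is why the multi-word version costs noticeably more per iteration.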
Did a benchmark:
factors < 62 bits: 8ms
62 & 63 bit factors: 14ms
factors > 63 bits: 35ms

This corresponds very well with the actual times my computer needs. Still, I don't fully understand these numbers, although Greathouse's explanation makes sense to me. I thought about something in this direction too. But is this enough to explain the numbers, or are there other possible causes?

And yes, I'm using a 32-bit OS.

Thanks anyway for your quick answers
Rde
[quote=Rde;176981]Did a benchmark:
factors < 62 bits: 8ms
62 & 63 bit factors: 14ms
factors > 63 bits: 35ms

This corresponds very well with the actual times my computer needs. Still I don't fully understand these numbers although Greathouse's explanation makes sense to me.[/quote]When the factor size gets within 2 bits of register size, it becomes necessary to use more instructions in each iteration to be sure that overflows, carries, sign propagation, and other phenomena related to register size are handled properly. That's why the 62- and 63-bit factor iterations take longer than those for smaller factors.

When the factors become larger than that, it becomes necessary to use multiple register operations to perform steps that could be performed with only one register operation when the factors were smaller. There are also other considerations that make iterations take longer than merely twice the time for iterations using smaller factors.

(Here, I'm assuming that the arithmetic is being done in 64-bit registers, but basically the same logic applies if the registers are 32-bit and there's a change from 2 registers per factor to 3 registers per factor. Longer factors require a greater number of instructions, including extra carry/overflow handling, per iteration, at certain factor sizes which are close to integral multiples of register size.)

I'm guessing that you might not be familiar with programming machine-level/assembly-level instructions, where such considerations have to be taken into account because each register operation has to be explicitly specified, rather than being "hidden" as it is when programming in a higher-level language such as C.
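Here's a hedged sketch (again illustrative, not Prime95's code) of the change from 2 registers per factor to 3: a 96-bit add across three 32-bit words. Each extra word adds another add plus another round of carry propagation, so the instruction count per iteration grows each time the factor crosses a multiple of the register size.

```c
#include <stdint.h>

/* Illustrative only: a 96-bit value as three 32-bit words,
 * least significant word first (w[0] is the low word). */
typedef struct { uint32_t w[3]; } u96_words;

u96_words add96(u96_words a, u96_words b) {
    u96_words r;
    uint32_t carry = 0;
    for (int i = 0; i < 3; i++) {
        /* each word costs an add, a carry-in add, and two carry checks */
        uint32_t s = a.w[i] + b.w[i];
        uint32_t c1 = (s < a.w[i]);     /* carry out of a+b */
        r.w[i] = s + carry;
        uint32_t c2 = (r.w[i] < s);     /* carry out of adding carry-in */
        carry = c1 | c2;
    }
    return r;
}
```

Real trial-factoring code does multiplication and modular reduction, which scale worse than addition as the word count grows -- but the carry-propagation pattern above is the basic reason an extra word costs more than a proportional amount of time.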
Wow, that's an explanation :) Thanks a lot.
And you're right that I'm not familiar with any assembly languages
Just an extra note:
"32-bit OS" probably denotes the size of memory addresses used, not necessarily the size of registers (though the general-purpose registers may indeed be 32 bits long, and there might be double-register operations that treat an adjacent pair of 32-bit registers as though they constituted a single 64-bit register). Many (most? all?) current CPUs have 64-bit general-purpose registers, and have at least two different modes of operation: 32-bit and 64-bit -- each referring to the size of memory addresses used by memory-referencing instructions. A "32-bit" OS would be one that uses memory addresses of no more than 32-bits (maximum of 4GB memory), so a 64-bit CPU running a 32-bit OS would use only its 32-bit mode of operation for the OS's instructions. (History note: The original IBM 360 series had 32-bit general purpose registers, but used only 24-bit memory addresses -- for a maximum of 16MB addressable memory.) |
[QUOTE=cheesehead;177232](History note: The original IBM 360 series had 32-bit general purpose registers, but used only 24-bit memory addresses -- for a maximum of 16MB addressable memory.)[/QUOTE]
Yeah, but who's ever going to need more than 16 MB of RAM? I mean clearly power users will want more than 64 kB, but it's a pity to waste 32 bits just for them. :smile: |
[QUOTE=cheesehead;177232]
Many (most? all?) current CPUs have 64-bit general-purpose registers, and have at least two different modes of operation: 32-bit and 64-bit -- each referring to the size of memory addresses used by memory-referencing instructions. [/QUOTE] Definitely not all. One example leaps to mind: the Intel Atom CPU used in so-called netbooks does not currently support 64-bit operations, just 32-bit.
[QUOTE=cheesehead;177232]
(History note: The original IBM 360 series had 32-bit general purpose registers, but used only 24-bit memory addresses -- for a maximum of 16MB addressable memory.)[/QUOTE] [QUOTE=CRGreathouse;177256]Yeah, but who's ever going to need more than 16 MB of RAM? I mean clearly power users will want more than 64 kB, but it's a pity to waste 32 bits just for them. :smile:[/QUOTE] Well, when the 360 was in production (in the 1960s), 16 megabytes of core would have cost many millions of dollars, so 24-bit addresses were plenty. Later models of the successor 370 line eventually did have to support XA (extended address) modes for full 32-bit addresses.

In fact the IBM 360 was a lot better than many of its contemporaries. For example, the CDC 6600 (the original supercomputer) had only 18-bit addresses. Admittedly it was addressing 60-bit words, not bytes, so it's not quite as bad as it sounds, but it was still a big, expensive computer.
Honestly, I don't understand why we're still addressing bytes rather than words. A modern processor can't fetch anything smaller than 64 bits from memory at a time, so why are we still using an addressing scheme that gives a number to each byte?

With a word-addressing scheme, you wouldn't need 64-bit addresses until you exceeded 32 GB.
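The arithmetic behind that 32 GB figure, as a sketch (function names are mine): with 8-byte words, a word address is just the byte address shifted right by 3, so 2^32 word addresses cover 2^32 words = 2^35 bytes = 32 GB.

```c
#include <stdint.h>

/* With 8-byte (64-bit) words, converting a byte address to a word
 * address is a right shift by 3. */
uint64_t word_addr(uint64_t byte_addr) {
    return byte_addr >> 3;
}

/* 2^32 word addresses * 8 bytes/word = 2^35 bytes = 32 GB. */
uint64_t max_bytes_with_32bit_word_addrs(void) {
    return (uint64_t)1 << 35;
}
```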