mersenneforum.org > Great Internet Mersenne Prime Search > Software
Old 2011-10-16, 02:02   #1
ixfd64
Bemusing Prompter
 
 
"Danny"
Dec 2002
California

11×211 Posts
128-bit OSes and GIMPS?

Some sources claim that Windows 8 or Windows 9 will have 128-bit editions. I do know that 64-bit versions of Windows are twice as fast as their 32-bit counterparts when it comes to TF (though not in other areas). Can the same be said for 128-bit versions compared to 64-bit ones?

Last fiddled with by ixfd64 on 2011-10-16 at 02:02
Old 2011-10-16, 02:47   #2
kladner
 
 
"Kieren"
Jul 2011
In My Own Galaxy!

67×151 Posts

"Can the same be said for 128-bit versions when compared to 64-bit versions? "

I am a dilettante on the level of very simple batch files and shortcut start lines. But I thought the big advantage of 64-bit over 32-bit is the vastly expanded memory address space. Are there significant commercial software interests (besides Adobe) that would benefit from the next jump of that kind, which might drive the OS development? And would a Prime128 gain an immediate advantage over Prime64? What would the programming/processing advantages be, 128 vs. 64?

Last fiddled with by kladner on 2011-10-16 at 02:48
Old 2011-10-16, 03:50   #3
Christenson
 
 
Dec 2010
Monticello

5×359 Posts

The big one would be the wider datapath and the more compact instruction stream...all these programs depend on accessing a lot of memory quickly...
Old 2011-10-17, 04:03   #4
LaurV
Romulan Interpreter
 
 
Jun 2011
Thailand

2²×2,239 Posts

A 128-bit OS can't do much without a suitable 128-bit processor. You can "emulate" 128-bit operations even on 8-bit microcontrollers like Atmel parts (programming such controllers is what I do every day for my job), but that is not a real 128-bit "OS". First we need suitable 128-bit hardware, and I don't see much improvement for things like LL tests, where double-precision floats are more than enough (see the discussion in a parallel thread about using specialized hardware to find primes or factors; there is an argument there about single-precision versus double-precision FFTs which is quite instructive). Where we would get some speed is integer arithmetic, i.e. trial factoring: if the new hardware (OS?) provided 128-bit hardware multipliers, it could speed up trial factoring by a factor of about 3 in the range above 64 bits (I assume squaring is currently done with something like a Karatsuba-style scheme on two 64-bit registers, with the result in four 64-bit registers).

Last fiddled with by LaurV on 2011-10-17 at 04:05
Old 2011-10-17, 05:10   #5
axn
 
 
Jun 2003

2·5·479 Posts

Quote:
Originally Posted by ixfd64 View Post
Some sources claim that Windows 8 or Windows 9 will have 128-bit editions.
The bit-ness of an OS/CPU is based on _memory_ addressability, not on computation word size. So, no, there won't be any 128-bit OSes for the foreseeable future. Not even the largest supercomputer requires the full capability of 64-bit addressing, let alone 128-bit.

Now, onto the computation word size. Native 128-bit integer and floating-point data types could, in theory, speed up TF and LL (respectively). However, AFAIK, neither is planned for the x86 line.
Old 2011-10-17, 05:22   #6
Jwb52z
 
 
Sep 2002

5²×31 Posts

If 128-bit processing won't help (which is part of my meager and possibly wrong understanding of this thread), is there any other way to speed up the computation besides "throwing" more cores and RAM at an exponent? I keep wondering if there's a way to make the calculations themselves run faster on current hardware, but I'm thinking there probably isn't, other than the one I mentioned.
Old 2011-10-17, 06:28   #7
Dubslow
Basketry That Evening!
 
 
"Bunslow the Bold"
Jun 2011
40<A<43 -89<O<-88

3·29·83 Posts

Errr... axn says
Quote:
Now, onto the computation word size. Native 128-bit integer and floating point data types can, in theory, speed up TF and LL (resp).
I believe that's what you mean by "128-bit processing". So 128-bit processing would help, but axn says it also isn't being implemented.

This is different from what the OP was asking about, which is 128-bit memory address sizes. That is not going to happen for a long time (though knowing computers, that might be only ~50 years). 64-bit addressing at its max (which we're nowhere near) allows 16 EB = 16,000,000 TB of memory to be addressed; my computer has more memory than average, at 12 GB = 0.012 TB. Therefore, 128-bit OSes and processor architectures will not be around anytime in the next half-century (at least).

Last fiddled with by Dubslow on 2011-10-17 at 06:29
Old 2011-10-17, 06:57   #8
LaurV
Romulan Interpreter
 
 
Jun 2011
Thailand

21374₈ Posts

I don't believe the OP was asking about memory width. I took it to mean CPU register size, hence my previous post. It is not uncommon to have an X-bit external bus and n·X bits of internal processing width; the best example is the Intel 8088. When Intel made the 8086 (a full 16-bit part), it found that the new CPU was not bus-compatible with existing systems, and people wanted compatibility first of all: I would buy a better CPU if I could swap it in without throwing away half of the things in the case. So they downgraded the 8086 into the 8088, which has an 8-bit external bus.

Maybe we will never go beyond 64 bits of addressing space; as said before, that is more than enough for how much memory one can have in a box. But "probably" we will go to higher bit counts to increase speed (see RAID arrays, VGA cards with 256-bit memory buses already in production, etc.), and "for sure" we will see 128-bit (and wider!) internal registers in the future. 128-bit integer ALUs are already implemented in some industrial systems (see VIPA and Siemens processors; they use Virtex FPGAs, and we had the opportunity to manufacture some cards for them).

Again, it is not uncommon to have a narrower external bus and a wider internal ALU; the 8088 is the best example.

The question is whether this would speed up the search for huge primes. My answer stands: with the current algorithms, most probably not, or not by much. A 3x or 4x speedup in TF means little; you could do one or two bits more. The real gain would come from discovering new algorithms, as somebody said in a parallel thread.

Last fiddled with by LaurV on 2011-10-17 at 06:59
Old 2011-10-17, 07:17   #9
Dubslow
Basketry That Evening!
 
 
"Bunslow the Bold"
Jun 2011
40<A<43 -89<O<-88

16065₈ Posts

Well, even from a memory standpoint, I wouldn't be surprised if it became necessary sometime way off in the future. I'm sure someone in the late '40s/early '50s said "we'll never get anywhere close to 4 GB of memory; 32 bits is more than enough", and look at where we are now.
Old 2011-10-17, 07:35   #10
axn
 
 
Jun 2003

4790₁₀ Posts

Quote:
Originally Posted by LaurV View Post
The real gain would come from discovering new algorithms, as somebody said on a parallel thread.
That one was for factoring. Primality proving is a different kettle of fish. There are good reasons to suspect that LL test will be _the_ fastest test possible for a Mersenne prime.

Quote:
Originally Posted by LaurV View Post
The question is if this would speed up searching for huge primes. My answer stands: with the current algorithms, most probably not, or not so much. Having a 3x or 4x speed for TF means nothing, you could do one or two bits more.
128-bit floating point implemented in hardware could speed up LL testing by 2x. Even implemented in microcode, it could potentially speed it up by 25%.
Old 2011-10-17, 09:13   #11
davieddy
 
 
"Lucan"
Dec 2006
England

1100101001010₂ Posts
PC History

Quote:
Originally Posted by Dubslow View Post
Well, even from a memory standpoint, I wouldn't be surprised if it became necessary sometime way off in the future. I'm sure someone in the late 40's/early 50's said "we'll never ever get anywhere close to 4 GB of memory, 32 bits is more than enough" and look at where we are now.
In 1980, 4 MHz and 64 KB were typical. Do you know about "segments" on Intel machines, used to address 1 MB by augmenting 16 bits to 20?

As for 1950 and 32 bits, Turing missed finding M521 because he only
had 1024 bits of memory!


David