2021-05-21, 20:03   #1
drkirkby ("David Kirkby", Althorne, Essex, UK)

Optimal bounds on B1/B2 factoring and max memory utilization

Looking at the active assignments on GIMPS, it's clear that different people have different interests, so they do different worktypes: some do mainly PRP tests, others mainly factorisation, and so on. There are many worktypes to suit different people's interests.

My own personal interest is in trying to find the next Mersenne prime. I don't care how I do this, whether it's P-1 factoring followed by a PRP test or just a PRP test.

If one's interest is only in trying to find the next Mersenne prime, and not in any factor that may be found by P-1 factorisation, is it counter-productive to give a lot of RAM to mprime, so that it uses different bounds for B1 and B2 during P-1 factoring? I believe a larger B2 gives a better chance of finding a factor, and so of avoiding a long PRP test, but I suspect it also uses more CPU time.

The page describing the mathematics of GIMPS, and in particular P-1 factoring,
https://www.mersenne.org/various/math.php#p-1_factoring
says:

So how does GIMPS intelligently choose B1 and B2? We use a variation of the formula used in trial factoring. We must maximize:
chance_of_finding_factor * primality_test_cost - factoring_cost
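(To make that concrete, a toy example with made-up numbers, taking the cost of one PRP test as the unit: if a P-1 run costs the equivalent of 2.5% of a PRP test and has a 4% chance of finding a factor, the expression evaluates to 0.04 × 1.0 − 0.025 = 0.015, a net saving of 1.5% of a PRP test per exponent. Larger bounds raise both the chance and the cost, and the optimal bounds are wherever the expression peaks.)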

Is that exactly what is done in the very latest code by default - particularly if you give mprime hundreds of GB of RAM?

There's a calculator at

https://www.mersenne.ca/prob.php
with a link right at the bottom of the page for more information, which appears after you click "Calculate" once. Reading that is interesting, as it says the minimum bounds on B1 and B2 assume no residual value in any factor found:
  • min: bounds such that the amount of work saved minus the amount of work invested is maximized (delta-benefit == delta-cost)
  • max: bounds at which running the (P-1, PRP) combo costs just as much as running the PRP alone (benefit==cost)
  • mid: optimal midpoint between "min" and "max"
But I know the P-1 factoring code has changed significantly recently, so I'm not sure how accurate some of this older information might be.

I am wondering if I am shooting myself in the foot, as we say in England, by letting mprime use several hundred GB of RAM during P-1 factoring.

Can I use the calculator at

https://www.mersenne.ca/prob.php
to choose optimal bounds on B1 and B2 for finding a prime, ignoring any value in knowing a factor? If so, can I tweak the line in worktodo.txt to use those optimal settings?
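For reference, manually chosen bounds would go in the P-1 entry itself. A hypothetical example follows; the exponent and bounds are placeholders, and the exact field layout for your version should be checked against whatsnew.txt/undoc.txt. The general form is Pminus1=k,b,n,c,B1,B2, with an optional assignment ID (or N/A) in front:
Code:
Pminus1=N/A,1,2,<exponent>,-1,800000,30000000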

Dave

2021-05-22, 02:58   #2
axn

The calculator on mersenne.ca is not quite up-to-date.

P95 (all versions) calculates optimal B1/B2 so as to increase overall GIMPS throughput, as explained on the math page.

If a larger memory allocation leads to larger B1/B2, rest assured the program has calculated that, in the long run, this will minimize the overall time spent on P-1 + PRP.

I'll reiterate my advice from the previous thread: give the program the maximum amount of RAM you can spare and let it do its thing.
2021-05-22, 10:35   #3
drkirkby

Quote (originally posted by axn):
The calculator on mersenne.ca is not quite up-to-date.

P95 (all versions) calculates optimal B1/B2 so as to increase overall GIMPS throughput, as explained on the math page.

If a larger memory allocation leads to larger B1/B2, rest assured the program has calculated that, in the long run, this will minimize the overall time spent on P-1 + PRP.

I'll reiterate my advice from the previous thread: give the program the maximum amount of RAM you can spare and let it do its thing.
Thank you. I'll just give it 340 GB or so. I could run into problems if two P-1 tests ran at the same time, as that would exhaust my memory. However, each exponent takes about 2 days to complete, and with two workers I schedule them so one is always around 50% complete when the other starts. So the chance of the two workers both wanting to do P-1 tests at the same time is very remote.

I believe there's an option to set the maximum memory mprime will use. I guess I should set that too, but I doubt it will ever be an issue.

Dave
2021-05-22, 11:20   #4
firejuggler ("Vincent")

If I remember right, there is an option to pause stage 2 when you lack memory.
2021-05-22, 15:11   #5
petrw1 ("Wayne")

From undoc.txt ... there may be other parameters of interest

Some options in prime.txt can be configured to have different values
at different times of the day using this syntax:
Option=setting
where setting is defined as
value OR value during list-of-times else setting
At present, only Memory, MaxHighMemWorkers, PauseWhileRunning, LowMemWhileRunning,
and PauseCheckInterval support this during/else syntax. Also note you can
no longer edit these options from the user interface. To use this feature,
you must manually edit the prime.txt/local.txt file. An example in local.txt:
Memory=500 during 1-5/17:30-24:00,1-5/0:00-8:30,6-7/0:00-24:00 else 200
The 1-5 and 6-7 refer to days of the week, with Monday=1 and Sunday=7. The
time portion refers to the hours of the day based on a 24-hour clock.
You do not need to specify days of the week (e.g. 1-7/0:00-8:00
is the same as 0:00-8:00). The above example lets the program use 500MB
during the week from 5:30PM to 8:30AM and all day on weekends. Otherwise
(weekdays from 8:30AM to 5:30PM), the program can use only 200MB.

In rare cases, users have reported the program can interfere with the
performance of some programs such as disk defragmenters and some games.
You can pause the program automatically when these programs are running by
adding this line to prime.txt:
PauseWhileRunning=prog1[n1],prog2[n2],etc
The [n1], [n2] values are optional and indicate the number of worker threads
to pause when prog1 and prog2 are running. The default value for n1 and n2
is to pause all worker threads. Note that the program will pause if the program
name matches any part of the running program's file name. That is "foobar"
will match "c:\foobar.exe", "C:\FOOBAR\name.exe", and even "C:\myfoobarprog.exe".
Also, if prog1 is "*" the program will pause no matter what. Examples:
PauseWhileRunning=*[1] during 6-7/2:00-3:00
PauseWhileRunning=* during 23:00-24:00 else decomp[1],mygame[2]
The first example pauses one worker thread on Saturday and Sunday between
2AM and 3AM. The second example pauses all workers between 11PM and 12AM and
pauses 1 worker if decomp is running and 2 if mygame is running.

LowMemWhileRunning is similar to PauseWhileRunning. This option does not
allow workers to use a lot of memory. This example in prime.txt will make
sure the program is using the minimum amount of memory possible while
photoshop is running:
LowMemWhileRunning=Photoshop

Since P-1 stage 2 runs faster with more memory available you can have the
program only run stage 2 at night when more memory is available. In
prime.txt set:
OnlyRunStage2WithMaxMemory=1

The Memory=n setting in local.txt refers to the total amount of memory the
program can use. You can also put this in the [Worker #n] section to place
a maximum amount of memory that one particular worker can use.

You can set MaxHighMemWorkers=n in local.txt. This tells the program how
many workers are allowed to use lots of memory. This occurs during stage 2
of P-1 or ECM on medium-to-large numbers. Default is available memory / 1GB.

You can set a threshold for what is considered lots of memory in MaxHighMemWorkers
calculations. In local.txt, set:
HighMemThreshold=n (default is 50)
The value n is in MB.
2021-05-22, 17:16   #6
drkirkby

Quote (originally posted by firejuggler):
If I remember right, there is an option to pause stage 2 when you lack memory.
That could potentially be a useful option, although rather than the worker completely stopping, I would rather it progressed with 10 GB or so of RAM.

I think there's an option to limit the amount of RAM a worker can use. If I could limit each worker to 340 GB, and the total for P-1 factoring to 350 GB, then if both workers tried to do stage 2 at the same time, the second one would only have 10 GB.

With only one workstation having much memory and lots of cores, and each exponent taking around 2 days to complete a task, I can manually keep an eye on what is happening. If I had lots of computers that would not be possible, but none of the other computers I have access to have enough cores to make it worthwhile running more than one worker.
2021-05-22, 17:27   #7
drkirkby

Quote (originally posted by petrw1):
The Memory=n setting in local.txt refers to the total amount of memory the
program can use. You can also put this in the [Worker #n] section to place
a maximum amount of memory that one particular worker can use.
That is probably the solution to my problem. If I set the maximum RAM the program can use to 350 GB, and specify that each worker can use up to 340 GB, then any one worker would usually have 340 GB available, but might be limited to 10 GB if the other worker is using 340 GB for P-1 factoring. Something like the sketch below, I assume.
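A minimal sketch of how that might look in local.txt, based on the undoc.txt text quoted above (values are in MB, so 350 GB is roughly 350000; the two [Worker #n] section headings are assumptions for a two-worker setup):
Code:
Memory=350000
[Worker #1]
Memory=340000
[Worker #2]
Memory=340000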

Since each exponent takes about 48 hours to test, as long as one exponent is about 50% complete whilst the other is doing its P-1 factoring, both workers should not want to do P-1 factoring at the same time. That does require me to keep an eye on the progress, but with only one workstation that is not too difficult.

Dave
2021-05-22, 17:41   #8
firejuggler

Citing undoc.txt:
Code:
You can set MaxHighMemWorkers=n in local.txt.  This tells the program how
many workers are allowed to use lots of memory.  This occurs during stage 2
of P-1, P+1, or ECM on medium-to-large numbers.  Default is available memory / 1GB.

You can set a threshold for what is considered lots of memory in MaxHighMemWorkers
calculations.  In local.txt, set:
    HighMemThreshold=n        (default is 50)
The value n is in MB.
2021-05-25, 16:22   #9
drkirkby

Why can't I set the RAM limit for P-1 factoring higher?

I have a workstation with 384 GB RAM. The maximum memory I can set in mprime for P-1 factoring is 339.799988 GB, yet I could give mprime 370 GB, as 14 GB is more than enough for the rest of the system most of the time. It's only when running electromagnetic simulation software that I use a lot of RAM. Is 339.799988 GB the maximum mprime could use, or is the program making a decision based on what it thinks is reasonable for the computer and operating system?
Dave

2021-05-25, 18:12   #10
S485122 ("Jacob")

I suppose the program uses a percentage to compute the minimum quantity of memory to keep free. On your system that percentage reserves too much, but yours is not an ordinary system memory-wise. On a 16 GiB system the same percentage would reserve only about 2 GiB.
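(Rough arithmetic supporting that guess: 339.8 GB usable out of 384 GB installed means about 11.5% is held back, and 11.5% of 16 GiB is a little under 2 GiB.)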

Jacob
2021-05-25, 19:12   #11
drkirkby

Quote (originally posted by S485122):
I suppose the program uses a percentage to compute the minimum quantity of memory to keep free. On your system that percentage reserves too much, but yours is not an ordinary system memory-wise. On a 16 GiB system the same percentage would reserve only about 2 GiB.
That could be the case, but reserving a fixed percentage is probably not optimal for machines with large amounts of RAM. Also, on many cloud computing services, such as Amazon AWS, where one has no graphical user interface or other system overheads, one could probably get away with using almost all the RAM, perhaps leaving 50 MB free, or maybe even less. I don't know what one could get away with.

Dave

