 mersenneforum.org Mersenne "like" processing
2014-05-29, 11:36 #1
ricardofdg

May 2014

Mersenne "like" processing

Sorry for the lack of background knowledge... I have a few questions that I would really appreciate some help with.

A - Is a prime of the form 2^n - c, with c > 1 and c ≡ 1 (mod 2), verifiable through the same methods that Mersenne numbers are?

B - Can I speed up PRP verification of the formula above with programs like CUDALucas for Lucas-Lehmer tests or equivalent? (Maybe not CUDALucas, since I can't define constants c != 1.)

C - Is PRP test processing time in general linear in bit length? I got average times oscillating between 300 ms and 500 ms per bit (LLR), but if this really is linear, I won't be able to finish the job for several months.

2014-05-29, 14:00 #2
ricardofdg

May 2014

By "PRP" in "Can I speed up PRP verification" and "Is PRP test processing time linear" I meant to say Probable Prime.

2014-05-29, 14:48 #3
Mini-Geek
Account Deleted

"Tim Sorbera"
Aug 2006
San Antonio, TX USA

Quote:
 Originally Posted by ricardofdg A - Is a prime of the form 2^n - c, with c > 1 and c ≡ 1 (mod 2), verifiable through the same methods that Mersenne numbers are?
Only Mersenne numbers can be tested by the LL test. However, a PRP test that's pretty similar on the surface (i.e. run some number of iterations, each taking about as long as the others, with the number of iterations roughly the bit length of the number you're testing; I think the algorithm in question is Miller-Rabin) can be run on numbers like this by various programs (LLR, PFGW, and Prime95 could all do it, IIRC). Note that the result of this (for a prime number) will only be that it's probably prime (or, for a composite, that it's certainly composite). A definitive verification would usually take something like ECPP. Based on your estimate of months for a PRP test (and assuming no special form, like a known factorization of N-1 or N+1, where N is the number you're testing), there's no way to prove such a number prime right now.
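[Editor's sketch] The kind of PRP test described above can be illustrated with a single Fermat test to base 3. This is only a minimal sketch: the function name is mine, and real packages (LLR, PFGW, Prime95) do the same modular exponentiation with FFT/DWT-based arithmetic rather than Python's built-in bignums.

```python
def is_fermat_prp(n, c, base=3):
    # A single Fermat PRP test for N = 2^n - c:
    # N is declared a base-b probable prime if b^(N-1) ≡ 1 (mod N).
    # A "composite" verdict is certain; a "probably prime" verdict is not.
    N = (1 << n) - c
    return pow(base, N - 1, N) == 1

print(is_fermat_prp(13, 1))   # True:  2^13 - 1 = 8191 is prime
print(is_fermat_prp(11, 1))   # False: 2^11 - 1 = 2047 = 23 * 89
```

The modular exponentiation is where essentially all of the time goes, which is why the choice of multiplication algorithm (discussed later in this thread) dominates the cost.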

Quote:
 Originally Posted by ricardofdg B - Can I speed up PRP verification of the formula above with programs like CUDALucas for Lucas-Lehmer tests or equivalent? (Maybe not CUDALucas, since I can't define constants c != 1.)
There is a version of LLR for CUDA. You can get it in the packages at http://www.primegrid.com/forum_thread.php?id=1215 If you have a fast CPU, however, it'd probably take even longer to do it on your GPU. (GPUs excel at TFing; not so much at LL/LLR)

Quote:
 Originally Posted by ricardofdg C - Is PRP test processing time in general linear in bit length? I got average times oscillating between 300 ms and 500 ms per bit (LLR), but if this really is linear, I won't be able to finish the job for several months.
It is similar to Mersenne numbers: if you double n, you roughly double both the time per iteration and the number of iterations, so the total is proportional to bitlength^2 (quadratic), not bitlength^1 (linear). The time per iteration is roughly constant for the duration of your test. E.g. if you need to do 30 million iterations at 300 ms each, it will take around (30 million * 300 ms =) 104 days, about 15 weeks.
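[Editor's sketch] The quadratic scaling above makes a handy back-of-the-envelope estimator. This is illustrative only: the function name is mine, the 300 ms figure is the one quoted in the thread, and real timings jump at FFT-size breakpoints rather than scaling smoothly.

```python
def estimate_days(bits, ref_bits, ref_ms_per_iter):
    # Iteration count ~ bit length, and (for special forms with fast
    # reduction) per-iteration cost also grows ~linearly with bit length,
    # so total time scales with bits^2, not bits.
    ms_per_iter = ref_ms_per_iter * (bits / ref_bits)
    total_ms = ms_per_iter * bits
    return total_ms / 1000 / 86400   # milliseconds -> days

# The worked example from the post: 30 million iterations at 300 ms each.
print(round(30_000_000 * 300 / 1000 / 86400))  # 104 (days), about 15 weeks
```

Note the consequence: doubling the exponent quadruples the total runtime, which is why "months" at the current size can quietly become "years" at the next.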

Last fiddled with by Mini-Geek on 2014-05-29 at 14:51

2014-05-30, 11:48 #4
R.D. Silverman

"Bob Silverman"
Nov 2003
North of Boston

Quote:
 Originally Posted by Mini-Geek It is similar to Mersenne numbers: if you double the n, you roughly double the time per iteration
This is false in general, although it may be true for some specific packages
and some specific numbers. To achieve roughly linear time per iteration,
the modular multiplications must be done by convolution methods that
automatically perform the modular reduction (such as the DWT), or which
allow the reduction to be done in linear time (which is true for 2^n +/- c
for small c).
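[Editor's sketch] The special-form linear-time reduction referred to above can be shown directly: for N = 2^n - c one has 2^n ≡ c (mod N), so the high part of a product folds back in with a shift, a small multiply, and an add. This is a minimal sketch assuming small positive c; real code folds the reduction into the DWT rather than running it as a separate pass, and the function name is mine.

```python
def reduce_special(x, n, c):
    # Reduce x modulo N = 2^n - c using the identity 2^n ≡ c (mod N):
    # split x = hi * 2^n + lo, then replace x with hi*c + lo.
    # Each pass strictly shrinks x (for 1 <= c < 2^n), so only a few
    # passes are needed -- linear-time work, no trial division.
    N = (1 << n) - c
    mask = (1 << n) - 1
    while x >= N:
        hi = x >> n
        if hi == 0:
            x -= N              # x already fits in n bits; one subtraction
        else:
            x = hi * c + (x & mask)
    return x
```

For c = 1 this is exactly the classic Mersenne "fold the high bits back in" trick used by Lucas-Lehmer implementations.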

Generic PRP tests run in time O(log^3 N), where N is the number being tested.

2014-05-30, 21:52 #5
ewmayer
2ω=0

Sep 2002
República de California

Quote:
 Originally Posted by R.D. Silverman This is false in general, although it may be true for some specific packages and some specific numbers. To achieve roughly linear time per iteration, the modular multiplications must be done by convolution methods that automatically perform the modular reduction (such as the DWT), or which allow the reduction to be done in linear time (which is true for 2^n +/- c for small c). Generic PRP tests run in time O(log^3 N), where N is the number being tested.
One can use the same convolution-based modmul approach for generic moduli; the difference lies in whether one must zero-pad the inputs (==> 2x the transform length) and whether the modular reduction can be done in some convenient fashion, or is "hard". Even in the latter worst case, one precomputes the inverse of the (fixed) modulus and thereby turns divide into another convolution-based multiply. Thus the asymptotics are the same, but the constant of proportionality can vary by roughly a factor of 10 across the hardness-of-modmul spectrum (DWT-based implicit mod at the low end, divide-via-inverse-mul-based mod at the high).

2014-06-02, 11:19 #6
R.D. Silverman

"Bob Silverman"
Nov 2003
North of Boston

Quote:
 Originally Posted by ewmayer One can use the same convolution-based modmul approach for generic moduli; the difference lies in whether one must zero-pad the inputs (==> 2x the transform length) and whether the modular reduction can be done in some convenient fashion, or is "hard". Even in the latter worst case, one precomputes the inverse of the (fixed) modulus and thereby turns divide into another convolution-based multiply. Thus the asymptotics are the same, but the constant of proportionality can vary by roughly a factor of 10 across the hardness-of-modmul spectrum (DWT-based implicit mod at the low end, divide-via-inverse-mul-based mod at the high).
Yes. In fact, I implemented a 'division by convolution' method when I
wrote "Parallel Polynomial Arithmetic over Finite Rings" back in the 90's.

Aho, Hopcroft & Ullman's book on Algorithms gives a terrific exposition.
[A terrific book in general; I learned a lot from it]

However, it is horribly slow when compared to multiplication. At least
my implementation was. [Perhaps a lousy implementation??]

2014-06-07, 14:29 #7
jasonp
Tribal Bullet

Oct 2004

For N-bit operands, N-bit general modular multiplication needs about three N*N->2N-bit multiplies if a generalized reciprocal is already computed; see the functions ap_recip and ap_mul at the bottom of here. This was used with FHT multiplies in Msieve's NFS square root until I switched to using GMP.

2014-06-07, 23:18 #8
R.D. Silverman

"Bob Silverman"
Nov 2003
North of Boston

Quote:
 Originally Posted by jasonp For N-bit operands, N-bit general modular multiplication needs about three N*N->2N-bit multiplies if a generalized reciprocal is already computed; see the functions ap_recip and ap_mul at the bottom of here. This was used with FHT multiplies in Msieve's NFS square root until I switched to using GMP.
This is fine in applications where the modulus is fixed.

Computing N mod p_i for i = 0, 1, 2, ... requires computing a new inverse each and every time.
