mersenneforum.org > Extra Stuff > Miscellaneous Math
2012-08-21, 14:53   #12
Dubslow

Well, since no one has found anything wrong (so far), I went ahead and wrote an implementation. It's written to read Msieve matrix files and write Msieve dependency files.

As such, I haven't tested it yet; once I get an Msieve compiled that stops QS before the Lanczos step, I'll try it and see what happens. It does at least compile, with the following warnings:
Code:
~/bin/c $ gcc -o linsolv -O3 randlinsolv.c -Wall
In file included from randlinsolv.c:4:0:
util.h: In function ‘read_clock’:
util.h:230:2: warning: implicit declaration of function ‘gettimeofday’
randlinsolv.c: In function ‘read_matrix’:
randlinsolv.c:146:2: warning: function returns address of local variable
randlinsolv.c: In function ‘vector_init’:
randlinsolv.c:167:2: warning: function returns address of local variable
randlinsolv.c: In function ‘set_init’:
randlinsolv.c:179:2: warning: function returns address of local variable
randlinsolv.c: In function ‘read_matrix’:
randlinsolv.c:81:7: warning: ignoring return value of ‘fread’, declared with attribute warn_unused_result
randlinsolv.c:82:7: warning: ignoring return value of ‘fread’, declared with attribute warn_unused_result
randlinsolv.c:83:7: warning: ignoring return value of ‘fread’, declared with attribute warn_unused_result
randlinsolv.c:94:8: warning: ignoring return value of ‘fread’, declared with attribute warn_unused_result
randlinsolv.c:101:9: warning: ignoring return value of ‘fread’, declared with attribute warn_unused_result
randlinsolv.c:114:9: warning: ignoring return value of ‘fread’, declared with attribute warn_unused_result
randlinsolv.c:128:8: warning: ignoring return value of ‘fread’, declared with attribute warn_unused_result
randlinsolv.c: In function ‘get_random_seeds’:
util.h:245:16: warning: ignoring return value of ‘fread’, declared with attribute warn_unused_result
util.h:246:23: warning: ignoring return value of ‘fread’, declared with attribute warn_unused_result
I'm slightly worried about the local-address warnings, but I don't think that's a problem?
The util.h file is almost the same as the one in the Msieve repository, except that I included read_clock() from util.c and get_random_seeds() from demo.c.
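For anyone hitting the same thing: "function returns address of local variable" flags a real lifetime problem rather than noise. A minimal sketch of the pattern and the usual heap-allocation fix (the names here are made up for illustration, not taken from randlinsolv.c):

```c
#include <stdlib.h>

/* BROKEN: buf lives in the function's stack frame, which is gone once
   the function returns, so the caller receives a dangling pointer.
   This is exactly what the warning means:

       static int *make_vector(size_t n) {
           int buf[64];
           return buf;        // address of a dead stack frame
       }
*/

/* FIX: allocate on the heap so the storage outlives the call.
   The caller is responsible for free()ing the result. */
static int *make_vector(size_t n) {
    int *buf = calloc(n, sizeof(int));  /* zero-initialized */
    return buf;                          /* still valid after return */
}
```

Alternatively, if the caller already owns the storage, passing the buffer in as a parameter avoids the allocation entirely.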


It could really use some looking-at by experienced programmers, both to verify the math and to catch silly mistakes. (I tried to include <stdio.h> and open files in 'rb' mode.) In particular, jasonp, could you look at the I/O code and see if it jibes with the Msieve file formats? I'm pretty sure it's right, but then...
2012-08-21, 16:35   #13
jasonp

Quote:
Originally Posted by Dubslow View Post
It could really use some looking-at by experienced programmers, both to verify the math and to catch silly mistakes. (I tried to include <stdio.h> and open files in 'rb' mode.) In particular, jasonp, could you look at the I/O code and see if it jibes with the Msieve file formats? I'm pretty sure it's right, but then...
It looks okay on a quick first pass, but might I suggest you not worry about packing everything into minimum memory for your first implementation? If you want to get a feel for how a new algorithm behaves, you should not throw a large initial problem at it. (Even a small NFS problem will generate a matrix that would take forever to solve via an algorithm whose complexity is bad enough).

You can make up a random problem of size 1000, where each column contains nonzeros drawn from a distribution mimicking the distribution of primes, or you can modify functions near the bottom of common/lanczos/lanczos_io.c to read and write matrices. An O(n^3) algorithm will have difficulty even with sparse matrices of that size.
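If it helps anyone experiment, one way to make up the kind of random test problem described above is to hit row r of each column with probability roughly 1/r, mimicking how often the r-th smallest prime divides a sieve value. A hedged sketch (all names are mine, not Msieve's):

```c
#include <stdlib.h>

#define MAX_WEIGHT 64

/* Fill one random column of an nrows-row matrix over GF(2), setting
   row r with probability about 1/(r+2) -- small rows (small primes)
   are hit often, large ones rarely, as in a real sieve matrix.
   Writes the row indices of the nonzeros into rows[] in ascending
   order and returns how many there are (at most MAX_WEIGHT). */
int random_sieve_column(int nrows, int *rows) {
    int k = 0;
    for (int r = 0; r < nrows && k < MAX_WEIGHT; r++)
        if (rand() % (r + 2) == 0)   /* hit row r w.p. ~1/(r+2) */
            rows[k++] = r;
    return k;
}
```

With n = 1000 rows this gives an average column weight near the harmonic sum ln(n), i.e. a handful of nonzeros per column, which is the regime where a sparse solver should beat Gaussian elimination.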

Last fiddled with by jasonp on 2012-08-21 at 16:54 Reason: remove irrelevant comment
2012-08-21, 16:42   #14
Dubslow

Quote:
Originally Posted by jasonp View Post
It looks okay on a quick first pass, but might I suggest you not worry about packing everything into minimum memory for your first implementation? If you want to get a feel for how a new algorithm behaves, you should not throw a large initial problem at it. (Msieve will only write out matrix checkpoint files for linear systems over size 1M, but even a small NFS problem will generate a matrix that would take forever to solve via an algorithm whose complexity is bad enough).
It's still pretty bad. I figured out that QS doesn't write a matrix anywhere, but luckily I caught a C99 entering post-processing a few minutes ago. Unfortunately, the initial allocation for S0 requires ~830 GB of memory.
Code:
~/yafu $ linsolv nfs.dat.mat nfs.dep
Welcome to derptown!
Attempting to read matrix from nfs.dat.mat
Matrix is 256941 x 257124 with 83 dense rows
Calloc 1
Calloc 3
Matrix is 256941 x 257124 with 83 dense rows
Estimated memory for the initial set is 832707 MB (N is 103620972, words per vector is 8036)
^C
Quote:
Originally Posted by jasonp View Post
You can make up a random problem of size 1000, where each column contains nonzeros drawn from a distribution mimicking the distribution of primes, or you can modify functions near the bottom of common/lanczos/lanczos_io.c to read and write matrices. An O(n^3) algorithm will have difficulty even with sparse matrices of that size.
So BL and BW are < O(n^3)? And it sounds like a lot of trouble to go to just to make a problem... Edit: Of course, only now do I see Batalov's edits saying that BL really is O(n^2)...

Last fiddled with by Dubslow on 2012-08-21 at 16:50
2012-08-21, 16:50   #15
jasonp

Gaussian elimination is O(n^3) for a dense matrix. For sparse matrices, BL and BW perform O(n) matrix-vector products, each of which has complexity O(kn) for k nonzeros per matrix column. If the matrix is dense the runtime is the same as GE, but for sparse matrices this is a real savings, and the block versions of the algorithms reduce the number of matrix multiplies to O(n/B) for block size B.
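For reference, the matrix-vector kernel described above is typically blocked over GF(2) by packing one bit from each of B = 64 vectors into a machine word, so a single sweep over the nonzeros multiplies the matrix by 64 vectors at once. A rough sketch (illustrative names, not Msieve's actual code):

```c
#include <stdint.h>
#include <stddef.h>

/* The kernel that BL/BW invoke O(n) times: multiply a sparse GF(2)
   matrix by a block of B = 64 vectors at once.  Bit b of word v[j]
   holds entry j of vector b, so out[r] ^= v[j] applies column j's
   nonzero at row r to all 64 vectors simultaneously -- one sweep
   over the O(kn) nonzeros per product. */
void spmv_gf2_block(int nrows, int ncols, int weight,
                    const int *col_rows,      /* ncols*weight row idxs */
                    const uint64_t *v,        /* ncols input words */
                    uint64_t *out)            /* nrows output words */
{
    for (int r = 0; r < nrows; r++)
        out[r] = 0;
    for (int j = 0; j < ncols; j++)           /* one pass, O(kn) XORs */
        for (int k = 0; k < weight; k++)
            out[col_rows[(size_t)j * weight + k]] ^= v[j];
}
```

This is where the O(n/B) multiply count comes from: each call advances all 64 vectors of the block in the same pass over the matrix.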

Regarding making up your own problems, is it less work to hack something simple together of your own, or to take a huge collection of somebody else's stuff and make it do something it wasn't designed to do? I have to answer that question almost every day, so it's good practice for you to grapple with it.

Last fiddled with by jasonp on 2012-08-21 at 16:54
2012-08-21, 17:48   #16
Dubslow

I am so talented. I managed to get an all-integer program to throw a floating point exception. (And this after reducing N to n/2 to keep the memory under control.)
Code:
 linsolv nfs.dat.mat nfs.dep
Welcome to derptown!
Attempting to read matrix from nfs.dat.mat
Matrix is 256941 x 257124 with 83 dense rows
Calloc 1
Calloc 3
Matrix is 256941 x 257124 with 83 dense rows
Estimated memory for the initial set is 4132 MB (N is 128562, words per vector is 8036)
Initial set constructed
Doing the 0th dense row...
Doing the 1th dense row...
Floating point exception
2012-08-21, 17:55   #17
xilman

Quote:
Originally Posted by Dubslow View Post
I am so talented. I managed to get an all-integer program to throw a floating point exception. (And this after reducing N to n/2 to keep the memory under control.)
Code:
 linsolv nfs.dat.mat nfs.dep
Welcome to derptown!
Attempting to read matrix from nfs.dat.mat
Matrix is 256941 x 257124 with 83 dense rows
Calloc 1
Calloc 3
Matrix is 256941 x 257124 with 83 dense rows
Estimated memory for the initial set is 4132 MB (N is 128562, words per vector is 8036)
Initial set constructed
Doing the 0th dense row...
Doing the 1th dense row...
Floating point exception
It's generally quite easy to get an FPE. There are relatively few signals available, and most of them (if not all) serve multiple purposes. In particular, SIGFPE is often used to signal division by zero, whether or not floating-point arithmetic is involved.

Paul
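Concretely: integer division by zero is undefined behavior in C, and on x86/Linux the CPU's divide-error trap is delivered to the process as SIGFPE (FPE_INTDIV in siginfo terms), which is exactly how an all-integer program dies with "Floating point exception". The usual defensive fix is to guard the divisor, as in this hypothetical helper (the name and its role are my own illustration, not taken from the program above):

```c
/* Any code path like "total / count" dies with "Floating point
   exception" on x86 the first time count is zero, even though no
   floating point is involved.  Guarding the divisor avoids the trap. */
long avg_nonzeros(long total_nonzeros, long nrows) {
    if (nrows == 0)         /* would otherwise trap with SIGFPE on x86 */
        return 0;
    return total_nonzeros / nrows;
}
```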
2012-08-23, 15:41   #18
only_human

Here is an implementation: A Randomized Algorithm for Linear Equations over Prime Fields in Clojure

Wikipedia had a page up but:
Quote:
This page has been deleted. The deletion and move log for the page are provided below for reference.
00:23, 22 August 2012 Explicit (talk | contribs) deleted page Raghavendra's algorithm for solving linear equations (Expired PROD, concern was: Violation of WP:V the only sources are an unpublished paper and a blog entry. While this might be a great result and I respect Tao's comments in the blog, it has not yet been vetted through peer review and so ...)
This is a copy from Google's cache:
Quote:
Raghavendra's algorithm for solving linear equations

Raghavendra's algorithm[1] is a randomized algorithm for solving a system of linear equations, discovered in 2012 by Prasad Raghavendra while studying constraint satisfaction problems. For sparse systems it has a similar running time to earlier methods.[2] Randomization is used so as to use as little algebraic structure as possible, apart from the fact that there is an operation that preserves the solution space. Terence Tao observed that a non-random version of the algorithm which does use the algebraic structure has comparable resource requirements. The algorithm was initially presented over finite fields of prime order but a characteristic zero version should be possible.

See also
  • Lattice problem
  • Epsilon-biased sample space
  • Genetic algorithm
  • Differential evolution
  • Krylov subspace

References

1. A Randomized Algorithm for Linear Equations over Prime Fields, Prasad Raghavendra, University of California, Berkeley, CA, August 7, 2012.
2. Solving Sparse Linear Equations Over Finite Fields, Douglas H. Wiedemann, IEEE Transactions on Information Theory, Vol. IT-32, No. 1, January 1986.

External links

"A New Way To Solve Linear Equations", Gödel’s Lost Letter and P=NP, August 9, 2012
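The quoted article only outlines the method, but the recombination idea can be sketched concretely: keep a pool of random vectors and, for each equation a·x = c, rebuild the pool from affine combinations w = t·u + (1−t)·v with t chosen so that a·w = c; because the coefficients sum to 1, w also satisfies every earlier equation that u and v both satisfied. Below is a toy version over a small prime field; all names, pool sizes, and retry limits are my own choices for illustration, not the paper's:

```c
#include <stdlib.h>

#define P 5      /* field size (prime) */
#define N 3      /* unknowns */
#define M 3      /* equations */
#define POOL 200 /* vectors kept per round (arbitrary toy choice) */

static int modp(long x) { return (int)(((x % P) + P) % P); }

/* Fermat inverse: x^(P-2) mod P, for x != 0. */
static int inv_modp(int x) {
    int r = 1;
    for (int e = 0; e < P - 2; e++) r = modp((long)r * x);
    return r;
}

static int dot(const int *a, const int *x) {
    long s = 0;
    for (int i = 0; i < N; i++) s += (long)a[i] * x[i];
    return modp(s);
}

/* Try to find x with A x = c (mod P); returns 1 and fills x on success. */
static int rand_solve(int A[M][N], const int *c, int *x) {
    static int S[POOL][N], T[POOL][N];
    for (int i = 0; i < POOL; i++)           /* random starting pool */
        for (int j = 0; j < N; j++) S[i][j] = rand() % P;
    for (int e = 0; e < M; e++) {            /* fold in one equation */
        for (int i = 0; i < POOL; i++) {
            int tries = 0;
            for (;;) {                       /* sample a usable pair */
                if (++tries > 100000) return 0;
                const int *u = S[rand() % POOL], *v = S[rand() % POOL];
                int b1 = dot(A[e], u), b2 = dot(A[e], v);
                if (b1 == b2) {
                    if (b1 != c[e]) continue;    /* pair unusable */
                    for (int j = 0; j < N; j++) T[i][j] = u[j];
                } else {                         /* t = (c-b2)/(b1-b2) */
                    int t = modp((long)modp(c[e] - b2)
                                 * inv_modp(modp(b1 - b2)));
                    for (int j = 0; j < N; j++)  /* w = t u + (1-t) v */
                        T[i][j] = modp((long)t * u[j]
                                       + (long)modp(1 - t) * v[j]);
                }
                break;
            }
        }
        for (int i = 0; i < POOL; i++)
            for (int j = 0; j < N; j++) S[i][j] = T[i][j];
    }
    for (int j = 0; j < N; j++) x[j] = S[0][j];
    return 1;
}
```

This only shows the mechanics; the paper's analysis concerns how large the pool must be for the recombination to succeed with high probability.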
2012-08-23, 17:40   #19
Kathegetes

Quote:
Originally Posted by flashjh View Post
Nice...
I saw this abacus adventure in a movie. Feynman is one of my heroes. He can play bongos for me in the future and I can do his math while moving my top and bottom, strange beauty and charm up and down.
Math is funny stuff. I am called a polymath. I wonder if I even like math. The shapes and numbers do like me.
Here is what I think and want to prove: hens seem to be able to count. Does anyone know how or where to apply for some grant money? It shouldn't be expensive; I think all the nano-tech needed to settle it is available today. Ducks suck at linear equations.
2012-08-23, 18:31   #20
Batalov

The difference between a math and a polymath could be as large as the difference between a mouse and a titmouse.
2012-08-23, 23:18   #21
ewmayer

Quote:
Originally Posted by Kathegetes View Post
Ducks suck at linear equations.
But it is said that they are "quack aerodynamicists".

Quote:
Originally Posted by Batalov View Post
The difference between a math and a polymath could be as large as the difference between a mouse and a titmouse.
...Or that betwixt a nuthatch and a booby hatch.
2012-08-24, 00:35   #22
Batalov

Quote:
Originally Posted by ewmayer View Post
...ducks...
...titmouse...
...nuthatch and a blue-footed booby hatch.
The closing credits of The Big Year were very helpful for quickly learning 755 different birds... Not!
-- each bird's image was flashed at 5 frames per second. Surely not for people with neurological afflictions!