[QUOTE=richs;494557]I ran remdups (there were about 20% duplicates and over 5k bad relations) and was able to get to full merge with TD=70, where my Windows 10 laptop with 8 GB of memory locked up again due to memory limitations. I then tried TD=60 with Windows booted in safe mode and made it to LA with an ETA of 550 hours (ugh!), which is now underway (and which I will complete).[/QUOTE]
Actually, safe mode and TD=100 (or 90) might be a better choice, if you want to spend the time to redo the filtering.
I looked around the 14e queue for another job, but:
1) several jobs are already spoken for;
2) available jobs need more relations;
3) the queue needs more jobs added.
[QUOTE=swellman;494496][url]http://www.mersenneforum.org/showpost.php?p=357756&postcount=972[/url]
It’s the 32-bit version but it’s stable, and running it only takes a few minutes.[/QUOTE] I should be able to compile a 64-bit version using MSYS2/MinGW-w64. I had no idea people were still using that old compile of mine (which I built back then with no idea what I was doing).
[QUOTE=richs;494557]I took this number thinking it was a 30 bit job based on the listing in NFS@home, but I certainly will check the poly in the future.
I ran remdups (there were about 20% duplicates and over 5k bad relations) and was able to get to full merge with TD=70, where my Windows 10 laptop with 8 GB of memory locked up again due to memory limitations. I then tried TD=60 with Windows booted in safe mode and made it to LA with an ETA of 550 hours (ugh!), which is now underway (and which I will complete). Bottom line is that I have to stick to real 30-bit jobs in the future.[/QUOTE] Glad you got it to work. Unfortunately, 30-bit jobs on NFS@Home are rare these days, but not completely unknown. I may be posting a couple more soon. If your machine (and wallet) will support it, adding memory to reach say 16 GB is very freeing. But then it's a slippery slope - just think what could be done with 64 GB...
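For readers who don't have the remdups tool handy, the duplicate-removal pass it performs can be sketched in a few lines of Python. This is a simplified illustration only, not the real remdups: the one-line "a,b:...:..." relation format check and the in-memory set are assumptions, and the actual tool is considerably more careful about memory and malformed input.

```python
def dedup_relations(lines):
    """Yield unique, well-formed relations from an iterable of lines.

    Assumption: each relation is a line of the form "a,b:primes:primes",
    and the (a,b) head before the first ':' identifies it uniquely.
    Malformed lines (the "bad relations" remdups reports) are dropped.
    """
    seen = set()
    for line in lines:
        line = line.strip()
        head, sep, _ = line.partition(":")
        if not sep or "," not in head:
            continue          # bad relation: no ':' or no a,b pair
        if head in seen:
            continue          # duplicate (a,b): already kept one copy
        seen.add(head)
        yield line

if __name__ == "__main__":
    sample = [
        "1,2:abc:def",
        "1,2:abc:def",   # duplicate of the first relation
        "3,4:aa:bb",
        "garbage line",  # bad relation, silently dropped
    ]
    print(len(list(dedup_relations(sample))))  # 2 relations survive
```

On a real dataset like the one above (~20% duplicates), a pass of this shape is what shrinks the relation set before filtering ever sees it.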
[QUOTE=swellman;494500]Yes, the poly is actually a 31-bit job. It is mislabeled [url=https://escatter11.fullerton.edu/nfs/crunching.php]here[/url].
This job may lock your machine due to memory limitations, but using remdups may allow you to sneak it through, as might lowering target_density. Sounds like a fun experiment.[/QUOTE] [QUOTE=RichD;494558]Actually, safe mode and TD=100 (or 90) might be a better choice, if you want to spend the time to redo the filtering.[/QUOTE] I did the filtering again with TD=90 and the LA time estimate improved from 550 to 473 hours. I'm going to let it continue now.
[QUOTE=richs;494577]I did the filtering again with TD=90 and the LA time estimate improved from 550 to 473 hours. I'm going to let it continue now.[/QUOTE]
The LA locked up at 1.7% complete with 100% of memory in use, so I'm throwing in the towel. 8 GB of memory isn't enough. Therefore, unreserving C159_263_107.
[QUOTE=richs;494632]The LA locked up at 1.7% complete with 100% of memory in use, so I'm throwing in the towel. 8 GB of memory isn't enough. Therefore,
Unreserving C159_263_107[/QUOTE] What size (dimension) was the matrix?
[QUOTE=VBCurtis;494634]What size (dimension) was the matrix?[/QUOTE]
16.26M
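A back-of-envelope calculation shows why a 16.26M-dimension matrix strains 8 GB. The constants below (average row weight of ~70 nonzeros, ~4 bytes per stored nonzero, roughly ten dense 64-bit auxiliary vectors) are rough assumptions for illustration, not msieve's actual block Lanczos internals:

```python
def la_memory_estimate_bytes(dimension, avg_row_weight=70,
                             bytes_per_nonzero=4, aux_vectors=10):
    """Rough RAM estimate for block Lanczos on an n x n sparse matrix.

    All constants are back-of-envelope assumptions: the sparse matrix
    stored as packed column indices, plus a handful of dense vectors
    of 64-bit words, one word per matrix row.
    """
    matrix = dimension * avg_row_weight * bytes_per_nonzero
    vectors = dimension * 8 * aux_vectors
    return matrix + vectors

if __name__ == "__main__":
    n = 16_260_000  # the 16.26M matrix from this thread
    gib = la_memory_estimate_bytes(n) / 2**30
    print(f"~{gib:.1f} GiB")  # ~5.5 GiB under these assumptions
```

Around 5-6 GiB for the solver alone under these assumptions; once Windows, the siever outputs still on disk cache, and msieve's own overhead are added, an 8 GB machine has very little headroom, which matches the lockup reported above.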
Reserving C210_35111_47 (I checked, it's a 30-bit job).
I've reserved it for you.
Note that the interpolation code suggests an upper bound of 72M, which is consistent with the number of raw relations displayed on the crunching page being quite low. You'd probably want me to queue the 60M-72M range, wouldn't you?
All the unassigned ones need a little touch-up: anywhere from just a couple to 10-20M more relations. Though they can all probably build a minimal matrix, I think it would help the post-processors if a higher target density could be used, since there is nothing else in the queue at this time.