mersenneforum.org

mersenneforum.org (https://www.mersenneforum.org/index.php)
-   GPU to 72 (https://www.mersenneforum.org/forumdisplay.php?f=95)
-   -   GPU to 72 status... (https://www.mersenneforum.org/showthread.php?t=16263)

chalsall 2020-04-10 00:49

[QUOTE=Chuck;542256]On the "Current Trial Factoring Depth for all Candidates" report page, does factoring beyond 77 bits get added in to the 77 column?[/QUOTE]

No. That's one of the reports I need to update.

Edit: Updated. Included the links to the new charts.

bayanne 2020-04-12 08:15

A little bemused that srbase do not seem to be reporting the level of factors that they had once been doing, or have I got it wrong, in some way ...

petrw1 2020-04-12 14:31

[QUOTE=bayanne;542409]A little bemused that srbase do not seem to be reporting the level of factors that they had once been doing, or have I got it wrong, in some way ...[/QUOTE]

He's now factoring to 78 bits. At first it was only to 74.

Here is his daily chart.

[url]https://www.gpu72.com/reports/worker/1f51ffc3b3beb59724a343cbfa4f0cd1/[/url]

chalsall 2020-04-12 16:38

[QUOTE=petrw1;542427]He's now factoring to 78 bits. At first it was only to 74.[/QUOTE]

Yes. He's /not/ cheating -- this is simply a function of his swarm doing the deeper work, and so fewer factors are found per wall-clock time.

When he starts doing 73 to 74 again (which he's free to continue getting from GPU72) his factors found will increase again.

chalsall 2020-04-12 20:11

Quick update...
 
So, a quick update...

1. Yesterday I moved into production version 0.422 of the GPU72_TF payload. This fixes the bug which caused the cron sub-system to sometimes not be installed, which in turn meant the check-point files for the P-1'ing CPU runs were not being thrown back to the server. Thanks to PhilF for pointing out my SPE.

2. The auto submission of results to Primenet from Colab TF'ing runs now correctly populates the Facts table when a factor is found.

2.1. I've written and run a script that back-filled all 454 cases where the Factor was not correctly recorded.

2.2. This means the reports and charts should once again be sane.

Uncwilly 2020-04-12 20:14

[QUOTE=chalsall;542461]2.2. This means the reports and charts should once again be sane.[/QUOTE]
:tu:

Uncwilly 2020-04-12 20:56

Does this look ok? I have seen something like this a few times recently.
[CODE]Exiting...
20200412_205357 INFO: Comms spider starting...
20200412_205357 INFO: Gracefull shutdown...
cat: worktodo.txt: No such file or directory
Use of uninitialized value $AID in concatenation (.) or string at ./comms.pl line 309.
Done.[/CODE]

chalsall 2020-04-12 21:06

[QUOTE=Uncwilly;542468]Does this look ok? I have seen something like this a few times recently.[/QUOTE]

OK, thanks for the report. It's harmless, although I'll add a conditional to handle it more gracefully.
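For what it's worth, the warning comes from concatenating a value that was never set because worktodo.txt is missing. The real fix belongs in comms.pl, but the shape of the guard being described might look like this (a Python sketch purely for illustration; the "Factor=" entry format follows mfaktc's worktodo convention, and the function name is hypothetical):

```python
import os

def read_aid(path="worktodo.txt"):
    """Return the assignment ID (AID) from the worktodo file, or an
    empty string when the file is absent, so that later string
    building never sees an undefined value."""
    if not os.path.exists(path):
        return ""  # no work queued: harmless, skip quietly
    with open(path) as f:
        for line in f:
            # mfaktc-style entry: "Factor=AID,exponent,bit_lo,bit_hi"
            if line.startswith("Factor="):
                return line.split("=", 1)[1].split(",")[0].strip()
    return ""
```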

LaurV 2020-04-13 03:47

Chris, any big trouble to add one or two more columns to [URL="http://www.gpu72.com/reports/workers/"]this table[/URL] ("77" and "78") and rename the last to ">77" or ">78"? If not big trouble, then please? :geek: (but if trouble, then forget it, there are not so many people crunching there anyhow, however, adding some column may "stimulate" them? :big grin:)

chalsall 2020-04-13 17:39

[QUOTE=LaurV;542491]Chris, any big trouble to add one or two more columns to [URL="http://www.gpu72.com/reports/workers/"]this table[/URL] ("77" and "78") and rename the last to ">77" or ">78"?[/QUOTE]

Yeah. That's on my todo list. There's actually quite a bit of back-end work involved with that report, but it needs to be done.

There's also the issue of the web page's width. I run 1920 by 1080 (times three), but not everyone does. I may drop the DC 68 and 69 columns, since that work is from years ago, and there are only ~400K such runs. I'll have a "< 71" column instead, which will be an aggregate.

[QUOTE=LaurV;542491]If not big trouble, then please? :geek: (but if trouble, then forget it, there are not so many people crunching there anyhow, however, adding some column may "stimulate" them? :big grin:)[/QUOTE]

Yeah... The "real" GPU72 BOINC system will regularly go up to 78, so it will be worthwhile having that data exposed. :smile:

James Heinrich 2020-04-13 19:10

[QUOTE=chalsall;542543]I run 1920 by 1080 (times three), but not everyone does.[/QUOTE]I run 1280x1080 (x2) (two windows on 2560x1080) and the screen already cuts off halfway through the yellow "Saved" column.

Uncwilly 2020-04-13 19:17

On one machine I am running a 1080 x 1920 on the left and a 1920 x 1080 on the right. Before my current dual screen I was using just the 1920 x 1080 (I do most of my browsing on it still.):razz::cmd:

chalsall 2020-04-19 19:25

Just saw the funniest thing in one of my log files...

[CODE][19/Apr/2020:19:22:34 +0000] gpu72.com 192.71.42.108 - - "GET /robots.txt HTTP/1.1" 200 176 "-" "Go-http-client/1.1"
[19/Apr/2020:19:22:34 +0000] gpu72.com 192.36.53.165 - - "GET /humans.txt HTTP/1.1" 301 240 "-" "Go-http-client/1.1"[/CODE]

Great sense of humor. I wonder if there's an RFC for this. The fetcher has quite a bit of IP space...

James Heinrich 2020-04-19 19:46

[url]https://www.robotstxt.org/[/url]
[url]http://humanstxt.org/[/url]

chalsall 2020-04-19 20:02

[QUOTE=James Heinrich;543193][url]http://humanstxt.org/[/url][/QUOTE]

Cool. Thanks for the knowledge. I'd never heard about this initiative, but I like it. Logical, and potentially useful meta-data. Good place for a copyright et al notice... :smile:

Chuck 2020-04-24 03:48

Is SRBase and BOINC now out of the picture?

Uncwilly 2020-04-24 04:02

[QUOTE=Chuck;543613]Is SRBase and BOINC now out of the picture?[/QUOTE]
SRBase and their BOINC folks are now getting exponents directly from PrimeNet. I recently had a PM exchange with KEP. They are still turning in exponents (you can see this in the recently cleared list). There are other developments that are in the works. I don't want to speak out of school at the moment.

ixfd64 2020-04-24 23:17

I'm disappointed to learn that GPU to 72 no longer supports BOINC due to a service mark dispute. Hopefully this won't mean the end of BOINC integration for GIMPS as a whole.

chalsall 2020-04-24 23:39

[QUOTE=ixfd64;543673]I'm disappointed to learn that GPU to 72 no longer supports BOINC due to a service mark dispute.[/QUOTE]

To be clear, I'm more than happy to continue to support Reb's BOINC efforts. However, GPU72 doesn't have the type of work available which they want (72 to 73 work in the higher ranges, which won't be useful for years...). The specialized API I built for them still exists, if they ever decide they'd like to do deeper work again.

[QUOTE=ixfd64;543673]Hopefully this won't mean the end of BOINC integration for GIMPS as a whole.[/QUOTE]

They're now getting work directly from Primenet. And, the REAL GPU72 BOINC system is in early alpha...

Mark Rose 2020-04-25 00:45

Exciting times in the TF world!

(I've read the locked thread)

chalsall 2020-05-09 01:01

LMH TF -- Back by popular demand!
 
So, it looks like we're going to be OK with regards to TF'ing firepower. So...

At the request of some (but particularly Uncwilly), I have imported some candidates in the 332M range (AKA the 100M digit candidates) for some quality TF'ing time. Thanks to George for expiring a bunch of /really/ old assignments.

For those who want to work there, the [URL="https://www.gpu72.com/account/getassignments/lltf/"]LLTF Assignment Form[/URL] again has the LMH Bit-first and Depth-first options. The former goes to 76 bits, while the latter goes to 78.

I'm currently testing to ensure that the Colab code-paths will handle this OK. So, currently, a couple of my instances, and all of Uncwilly's, are doing 332M work. Tomorrow I'll update the Instance Assignment form to let people opt in to do this work type.

Which brings up a question: just how deep do people want to take these? 78 isn't "optimal", but a 77 to 78 run will take about two hours (on a P100). Should I have Breadth first (75), Nominal (78), and Deep (81)?

Feedback appreciated.
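On the depth question, a standard rule of thumb may help frame the trade-off: the chance of a Mersenne number having a factor between 2^(b-1) and 2^b is roughly 1/b, while the work to search each successive bit level roughly doubles. A quick sketch, using the two-hour 77-to-78 P100 figure above as the cost unit (both the 1/b heuristic and the clean doubling are approximations):

```python
def level_cost_hours(bits, base_bits=78, base_hours=2.0):
    """Rough wall-clock cost of the single bit level ending at `bits`,
    assuming each level costs ~2x the one below it (2 h for 77->78
    on a P100, per the post above)."""
    return base_hours * 2 ** (bits - base_bits)

def factor_chance(bits):
    """Heuristic: P(factor between 2^(bits-1) and 2^bits) ~= 1/bits."""
    return 1.0 / bits

for b in (75, 78, 81):
    print(f"level {b}: ~{level_cost_hours(b):g} h, "
          f"~{factor_chance(b):.1%} chance of a factor")
```

The factor chance per level stays nearly flat (~1.2-1.3%) while the cost doubles each level, which is why "optimal" depth depends on what a factor saves downstream rather than on the TF run itself.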

kladner 2020-05-09 01:47

The idea of ripping through TF is attractive, but I'm not really drawn to the upper reaches. On the other hand, I am mostly into diddling Colab for all the GHz-d I can get. :smile: I don't care what GPU72 deems necessary as long as it advances the main project: whatever makes sense, or just let GPU72 decide. Just say.
I will note the saying, "If it's free it's not the product. You are."
But, I just signed off my paid account, with 4 P100s, called up a free account and got 2 T4s. This is approximately equal to 3 P100s. I usually stop this kind of run after 5-6 hours, and line up the paid account to run overnight.
3 free accounts and one paid lets me keep as much running as I have time to set up. I can always find at least one that will run with GPUs.

Uncwilly 2020-05-09 04:55

[QUOTE=chalsall;544925]I'm currently testing to ensure that the Colab code-paths will handle this OK. So, currently, a couple of my instances, and all of Uncwilly's, are doing 332M work.[/QUOTE]I requested a manual one for my integrated GPU. And I have set MISFIT to target that range (I hope I got it right.) With the T4's I have at the moment, 75->76 is taking ~21 minutes per. Hoping to find a factor or 2 (still running ~9% behind expected).

:chalsall:
:clap:
:ttu:

Chuck 2020-05-09 12:30

[QUOTE=kladner;544928]But, I just signed off my paid account, with 4 P100s, called up a free account and got 2 T4s. This is approximately equal to 3 P100s. I usually stop this kind of run after 5-6 hours, and line up the paid account to run overnight.
3 free accounts and one paid let's me keep as much running as I have time to set up. I can always find at least one that will run with GPUs.[/QUOTE]

I also started using my free account with two notebooks after my daily 18 hour limit is reached on the paid account. However, I never get T4s and I don't want to bother with disconnecting and reconnecting in order to get them. Sometimes I even get a slow K80 but it's free...

kladner 2020-05-09 15:23

[QUOTE=Chuck;544949]I also started using my free account with two notebooks after my daily 18 hour limit is reached on the paid account. However, I never get T4s and I don't want to bother with disconnecting and reconnecting in order to get them. Sometimes I even get a slow K80 but it's free...[/QUOTE]
I guess I'm pretty snotty about K80s and P4s, and throw them back. I'll settle for P100s, though I can't forget that they are about 2/3 of a T4 for about 5 times the power consumption. This is a concern only with the free accounts. Paid always runs P100s. Getting the higher cards can take a bit of patience.

Runtime Error 2020-05-09 18:41

The most impatient man in the world?
 
Hi,

I signed up for GPUto72 maybe an hour ago, but I haven't received the activation email yet (checked spam folder). I'm fairly certain that I typed my email correctly. I also tried again with a secondary email and different username, but no luck (and I apologize to the admins for spamming it). It took me to the page where it says an email was sent, and if I try again with the same username it says that the username is taken, so it seems the account was created successfully.

Is the activation email script working? Perhaps I am just being impatient, but I am excited to join the Colab effort. (And I have time this weekend to play with it.) Thank you!!!

Uncwilly 2020-05-09 19:13

[QUOTE=Runtime Error;544967]Hi,

I signed up for GPUto72 maybe an hour ago, but I haven't received the activation email yet (checked spam folder). [/QUOTE]
Did you try logging in?

chalsall 2020-05-09 19:26

[QUOTE=Runtime Error;544967]I signed up for GPUto72 maybe an hour ago, but I haven't received the activation email yet (checked spam folder).[/QUOTE]

Hmmm... I didn't receive an email either; perhaps my provider's email subsystem is having issues today. I'll drill-down.

However, I've just activated your (first) account. Username starts with a capital "C". Please PM me if you have any further issues.

Runtime Error 2020-05-09 19:27

[QUOTE=Uncwilly;544971]Did you try logging in?[/QUOTE]

Yes I did, but I just tried again and it works! I suppose I should have been more patient. Thank you!

(Still no email but I guess that doesn't matter now.)

Runtime Error 2020-05-09 19:29

[QUOTE=chalsall;544973]Hmmm... I didn't receive an email either; perhaps my provider's email subsystem is having issues today. I'll drill-down.

However, I've just activated your (first) account. Username starts with a capital "C". Please PM me if you have any further issues.[/QUOTE]

(I must've been typing my reply above as you penned this)

It works! Thank you for activating it!!!

kladner 2020-05-10 13:49

[QUOTE=Runtime Error;544967]Hi,

I signed up for GPUto72 maybe an hour ago, but I haven't received the activation email yet (checked spam folder). .......[/QUOTE]
There may be a human element involved. Chalsall (who created and runs GPU72) may not have enough coffee to focus, yet. :smile:


OOPS! Time lapse. Belated response.

kriesel 2020-05-10 14:33

[QUOTE=chalsall;544925]Feedback appreciated.[/QUOTE]If TF capacity is now ample, consider catching up on TF level to optimal, on already-primality-tested-once exponents. Both in 100Mdigit territory and elsewhere.

S485122 2020-05-10 21:24

[QUOTE=kriesel;545024]If TF capacity is now ample, consider catching up on TF level to optimal, on already-primality-tested-once exponents. Both in 100Mdigit territory and elsewhere.[/QUOTE]Once one test is done the optima change; they are not the same as for an exponent that has had no primality test.

At the moment the different work types are done separately and on different hardware, instead of consecutively as was the case a long time ago: a user used to get an assignment that implied first TF, then P-1 and finally LL; now one gets a specific work type. This means that the optima should be based on the overall throughput of GIMPS across the different work types, and not on the individual machine doing the test. Unless one does all three work types on the same exponent with the same hardware, the optimum for card X or Y is not really relevant.

Jacob

kriesel 2020-05-10 22:41

[QUOTE=S485122;545037]Once one test is done the optima change; they are not the same as for an exponent that has had no primality test.

At the moment the different work types are done separately and on different hardware, instead of consecutively as was the case a long time ago: a user used to get an assignment that implied first TF, then P-1 and finally LL; now one gets a specific work type. This means that the optima should be based on the overall throughput of GIMPS across the different work types, and not on the individual machine doing the test. Unless one does all three work types on the same exponent with the same hardware, the optimum for card X or Y is not really relevant.

Jacob[/QUOTE]Right. One bit less TF is justified, for example, if a primality test has already been done.
The GPU72 target listed at mersenne.ca is 81 bits in the 100Mdigit range for 2 tests saved, but there are many exponents there only up to 77-79 bits. [URL]https://www.mersenne.org/report_factoring_effort/?exp_lo=332192800&exp_hi=334000000&bits_lo=1&bits_hi=80&tfonly=1&tftobits=72[/URL]
Some of these have inadequate or no P-1 done yet; some have a primality test started, or even completed. One example is
[URL]https://www.mersenne.org/report_exponent/?exp_lo=332203309&full=1[/URL]
Runtime Error mentioned this issue to me, and gave an exponent example. [url]https://mersenneforum.org/showpost.php?p=544550&postcount=5[/url]
I easily found a factor with two more bits of TF, so Xebecer's 4944 GHz-days of PRP was wasted. [URL]https://www.mersenne.org/report_exponent/?exp_lo=332356909&full=1[/URL] But another 4944 GHz-days of PRP DC won't be.

(For those curious about the tradeoff calculations, [URL]https://www.mersenne.org/various/math.php[/URL] is a good start. For more, see
P-1 bounds determination [URL]https://www.mersenneforum.org/showpost.php?p=501984&postcount=17[/URL]
TF & P-1 optimization/tradeoff with each other and primality testing [URL]http://www.mersenneforum.org/showpost.php?p=488897&postcount=12[/URL]
What's a good P-1 factoring strategy? Best? [URL]https://www.mersenneforum.org/showpost.php?p=531129&postcount=20[/URL]
Or just go with one less TF bit level, and always use full PrimeNet P-1 bounds.)
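The trade-off in the links above compresses into one inequality: one more bit of TF is worthwhile when its cost is below the factor probability times the testing effort a factor would retire. A minimal sketch, assuming the ~1/b per-level factor heuristic; the GHz-day figures in the example are purely illustrative, and GPU TF GHz-days are not directly comparable to CPU primality-test GHz-days:

```python
def tf_bit_worthwhile(bits, tf_cost, test_cost, tests_saved):
    """One more TF bit level (up to 2^bits) pays off when the expected
    testing effort retired exceeds the TF cost:
        (1/bits) * tests_saved * test_cost > tf_cost
    All costs in the same (GHz-day-like) unit."""
    return (1.0 / bits) * tests_saved * test_cost > tf_cost

# Illustrative only: a cheap bit level vs. two saved ~4900 GHzD tests
print(tf_bit_worthwhile(75, tf_cost=60, test_cost=4900, tests_saved=2))   # True
# After a first primality test only one test is saved, so the bar rises
print(tf_bit_worthwhile(75, tf_cost=120, test_cost=4900, tests_saved=1))  # False
```

The `tests_saved` term is exactly why one bit less TF is justified once a primality test has already been done.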

Runtime Error 2020-05-11 02:22

haha trial factorer go burrrrr
 
[QUOTE=chalsall;544973]I've just activated your account.[/QUOTE]
Again, thanks. I had anticipated potentially spending a couple hours figuring it out, but wow that was super easy. I really appreciate how streamlined it is! [I]haha trial factorer go burrrrr[/I]

A few questions:

1) How can I delete a Notebook Access Key instance on GPU72, along with the associated work? I created too many and I'd like to return the assignments before they become overdue.

2) Is there a way to request to TF exponents at the 332M prize wave front? (see kriesel’s above post; that exchange was the impetus for me to join the TF effort)

3) Is there a way to request consecutive bit levels on the same exponents? [I]e.g.[/I] take something from 73 to 81 instead of incremental bit levels on different exponents? It may sound silly, but it would make me feel more ownership over and attachment to the exponents that I’m factoring, and it would add to the excitement that I feel when finding factors. ([I]omg it spent a day factoring it and just before my session expired… BOOM!!1[/I]) My pitch: You (chalsall) have said a few times that you like to give folks agency over their contribution 😉.

4) A request for anecdotal evidence (the plural of which is [I]data[/I]): How do you get a T4?

5) Any tips on writing a Windows-runable script that launches a Colab instance (and hits "connect" + "run") every 24 hours? I've done similar things w/ Python, but I imagine someone has a more elegant solution.

Uncwilly 2020-05-11 03:29

1 Attachment(s)
[QUOTE=Runtime Error;545054]4) A request for anecdotal evidence (the plural of which is [I]data[/I]): How do you get a T4?[/QUOTE]When you see that you have been assigned a P4 or K80, interrupt the execution. Then "factory reset runtime". Restart the code. If you get a T4, let it run. If you get a P100, you might want to let it run. Else, repeat. Wend.

And once you get one, then do the trick of "Connect to hosted runtime".

Uncwilly 2020-05-11 15:05

It looks like my MISFIT is not able to get assignments in the 332M range. I am not in front of the machine in question, nor did I have time to investigate the issue. It still has days of assignments available, so no rush.

chalsall 2020-05-11 16:29

[QUOTE=Runtime Error;545054]Again, thanks. I had anticipated potentially spending a couple hours figuring it out, but wow that was super easy. I really appreciate how streamlined it is! [I]haha trial factorer go burrrrr[/I][/QUOTE]

Cool. Glad to get the feedback.

[QUOTE=Runtime Error;545054]1) How can I delete a Notebook Access Key instance on GPU72, along with the associated work? I created too many and I'd like to return the assignments before they become overdue.[/QUOTE]

I don't currently have a way to delete Access Keys, but I probably will need to add that. As well as "merging" Access Keys and CPUs, much like Primenet.

However, don't worry about taking too many assignments. After an instance is terminated, any assignments which have not been worked are returned to the pool within half an hour. Those which have had work done on them are held until you spin up another instance.

[QUOTE=Runtime Error;545054]2) Is there a way to request to TF exponents at the 332M prize wave front? (see kriesel’s above post; that exchange was the impetus for me to join the TF effort)[/QUOTE]

There is now. I've just added two new work types to the [URL="https://www.gpu72.com/account/instances/"]Instances Edit Form[/URL] (click on the links under the work-type columns), "LMH Trial Factoring (Breadth First)" (going to 76 bits) and "LMH Trial Factoring (Depth First)" (going to 78 bits).

I would appreciate feedback as to how deep people want to take these. 81 bits would take a while, but that's approaching optimal.

[QUOTE=Runtime Error;545054]3) Is there a way to request consecutive bit levels on the same exponents? [I]e.g.[/I] take something from 73 to 81 instead incremental bit levels on different exponents? It may sound silly, but it would make me feel more ownership over and attachment to the exponents that I’m factoring, and it would add to the excitement that I feel when finding factors. ([I]omg it spent a day factoring it and just before my session expired… BOOM!!1[/I]) My pitch: You (chalsall) have said a few times that you like to give folks agency over their contribution 😉.[/QUOTE]

I hear what you're saying. But that would be a royal pain in the butt for me to have the server manage.

As a compromise, if you choose "LMH Depth First" you will consecutively get the lowest available candidate to take up to the next level. So while you won't "own" a particular candidate, you will be contributing to the first candidates to be released back to Primenet for LL assignment.

[QUOTE=Runtime Error;545054]4) A request for anecdotal evidence (the plural of which is [I]data[/I]): How do you get a T4?[/QUOTE]

1. *Don't* be on the paid tier.

2. Patiently do the Factory Reset cycling trick until you get lucky.

[QUOTE=Runtime Error;545054]5) Any tips on writing a Windows-runable script that launches a Colab instance (and hits "connect" + "run") every 24 hours? I've done similar things w/ Python, but I imagine someone has a more elegant solution.[/QUOTE]

I would advise against this. For some reason, one of my accounts gets hit with the Recaptcha challenge every time I interact with it, so Google is aware some are going to be trying to automate this.

chalsall 2020-05-11 16:31

[QUOTE=Uncwilly;545093]It looks like my MISFIT is not able to get assignments in the 332M range. I am not in front of the machine in question, nor did I have time to investigate the issue. It still has days of assignments available, so no rush.[/QUOTE]

Your MISFIT was asking for work to take up to 74 bits. I have added a conditional such that it bumps up the pledge level to 76.

chalsall 2020-05-11 19:11

[QUOTE=Uncwilly;545058]When you see that you have been assigned a P4 or K80, interrupt the execution. Then "factory reset runtime". Restart the code. If you get a T4, let it run. If you get a P100, you might want to let it run. Else, repeat. Wend.[/QUOTE]

Personally I won't settle for anything less than a P100. Not worth my (free) time allotment. (I can't believe I just wrote that... :smile:)

To speed things up, you don't even have to interrupt the executing Notebook. Just Factory Reset with your mouse, and then "Ctrl-F9" to run all the Sections (usually, but annoyingly not /always/, in order from the top).

It can take up to five such cycles, but you will /eventually/ get something better than a P4 (~98% of the time).
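Treating each Factory Reset as an independent draw, the "~98% within five cycles" figure implies a per-draw chance of landing something better than a P4 of about 1 - 0.02^(1/5), i.e. roughly 54%. Checking that arithmetic:

```python
def success_within(p, n):
    """P(at least one success in n independent draws, per-draw chance p)."""
    return 1 - (1 - p) ** n

# Per-draw chance implied by "98% within five cycles"
p = 1 - 0.02 ** (1 / 5)
print(f"implied per-draw chance: {p:.0%}")             # ~54%
print(f"within 5 resets: {success_within(p, 5):.0%}")  # 98%
```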

Runtime Error 2020-05-11 21:58

Thank you for the helpful replies!

I've currently got two accounts using two notebooks each, running on the same machine (using different browsers). IP address doesn't seem to matter, nor does phone number on the associated account. It seems easy enough to get P100s, and I'll probably stick with them since T4s seem rare, although I did manage to get one!

[QUOTE=chalsall;545100]1. *Don't* be on the paid tier.

2. Patiently do the Factory Reset cycling trick until you get lucky.

[/QUOTE]

Wait, they give paid users lower quality equipment?

chalsall 2020-05-12 00:27

[QUOTE=Runtime Error;545121]Wait, they give paid users lower quality equipment?[/QUOTE]

Yup. We don't understand it either...

BTW, for those really serious about LMH work... I've added another work-type: Nominal. This used to be called "Depth", and goes to 78 bits. What is now called "Depth" goes to 81 bits, which is where we'll release back to Primenet.

But, be aware this takes a long time! 17.5 hours on a P100:[CODE]

20200512_001400 ( 3:54): Starting trial factoring M332193xxx from 2^80 to 2^81 (737.12 GHz-days)
20200512_001404 ( 3:54): Exponent TF Level % Done ETA GHzD/D Itr Time | Class #, Seq # | #FCs | SieveRate | SieveP
20200512_001408 ( 3:54): 332193xxx 80 to 81 0.1% 17h31m 1008.85 65.759s | 0/4620, 1/960 | 393.85G | 5989.4M/s | 82485
20200512_001611 ( 3:56): 332193xxx 80 to 81 0.3% 17h29m 1007.84 65.825s | 12/4620, 3/960 | 393.85G | 5983.4M/s | 82485
20200512_001718 ( 3:57): 332193xxx 80 to 81 0.4% 17h28m 1007.90 65.821s | 24/4620, 4/960 | 393.85G | 5983.7M/s | 82485
20200512_001823 ( 3:58): 332193xxx 80 to 81 0.5% 17h28m 1007.27 65.862s | 25/4620, 5/960 | 393.85G | 5980.0M/s | 82485
20200512_001930 ( 3:59): 332193xxx 80 to 81 0.6% 17h26m 1007.61 65.840s | 28/4620, 6/960 | 393.85G | 5982.0M/s | 82485
20200512_002035 ( 4:00): 332193xxx 80 to 81 0.7% 17h27m 1005.67 65.967s | 33/4620, 7/960 | 393.85G | 5970.5M/s | 82485
20200512_002142 ( 4:01): 332193xxx 80 to 81 0.8% 17h25m 1006.49 65.913s | 37/4620, 8/960 | 393.85G | 5975.4M/s | 82485
20200512_002247 ( 4:03): 332193xxx 80 to 81 0.9% 17h25m 1005.47 65.980s | 40/4620, 9/960 | 393.85G | 5969.3M/s | 82485
[/CODE]

It will be interesting to see how long a P-1 run is going to take on Colab instances... I'll try that tomorrow.
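As a sanity check, the ETA in that log is just the job size divided by the throughput column: 737.12 GHz-days at ~1009 GHz-days/day works out to about 17.5 hours, matching the 17h31m shown.

```python
job_ghzd = 737.12            # 2^80 to 2^81 on this exponent (from the log)
rate_ghzd_per_day = 1008.85  # the GHzD/D column

hours = job_ghzd / rate_ghzd_per_day * 24
print(f"~{hours:.1f} hours")  # ~17.5
```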

Uncwilly 2020-05-12 02:37

[QUOTE=chalsall;545101]Your MISFIT was asking for work to take up to 74 bits. I have added a conditional such that it bumps up the pledge level to 76.[/QUOTE]

I fixed my MISFIT. I just need to get it to keep less in the queue. And on another machine MISFIT does not seem to fetch. It has let the queue empty.

kladner 2020-05-12 05:57

I recently changed an 8 core i7 machine from running 2 P-1 workers with 4 cores to 4 workers with 2 cores. In my GPU72 assignments and completed pages I only see the machine name and (2), (3), (4). When I noticed, I checked the machine, which is doing nothing else at the moment, and found all 4 workers chugging away. I had it call PrimeNet just to make sure that was working.
I wonder if M_ (1)'s work is being credited. :confused2:

Runtime Error 2020-05-16 03:01

Unfinished 81-bit Colab jobs
 
[QUOTE=chalsall;545131]Yup. We don't understand it either...

BTW, for those really serious about LMH work... I've added another work-type: Nominal. This used to be called "Depth", and goes to 78 bits. What is now called "Depth" goes to 81 bits, which is where we'll release back to Primenet at.

But, be aware this takes a long time! 17.5 hours on a P100[/QUOTE]

Thanks for adding this. I've been trying some of the 81 bit level jobs. It seems that a notebook instance likes to first start a new exponent upon launch, and if it finishes, it will move on to any partially completed jobs. But unless it gets a T4, it will not finish the job within 12 hours. I currently have a handful of partially completed 81-bit jobs, and this evening's notebooks just started fresh exponents. Do you have any advice? Thank you!

chalsall 2020-05-16 15:40

[QUOTE=Runtime Error;545507]It seems that a notebook instance likes to first start a new exponent upon launch, and if it finishes, it will move on to any partially completed jobs. But unless it gets a T4, it will not finish the job within 12 hours. I currently have a handful of partially completed 81-bit jobs, and this evening's notebooks just started fresh exponents. Do you have any advice?[/QUOTE]

A new instance will first be re-issued work partially completed, in descending order. However, an assignment is only re-issued after no updates for 30 minutes.

This can lead to a bit of a "queue" if someone does the "Factory Reset" trick a few times -- anything which hasn't had work done on it is thrown back into the pool, but if the instance isn't reset within two minutes it will report back some progress, and then the candidate is held until completion.

My advice is to just stick with it -- all work will (eventually) be completed.
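Mechanically, the re-issue rule described above is just a timeout check on the last checkpoint. A minimal sketch of that logic (the dict keys are assumptions for illustration, not the actual GPU72 schema):

```python
import time

REISSUE_AFTER = 30 * 60  # seconds with no progress update

def eligible_for_reissue(assignment, now=None):
    """True when an incomplete assignment has gone 30+ minutes without
    a checkpoint, so it may be handed to a new instance."""
    now = time.time() if now is None else now
    return (not assignment["complete"]
            and now - assignment["last_update"] > REISSUE_AFTER)

# A job that last checkpointed 40 minutes ago goes back into the pool;
# one that reported 5 minutes ago stays held for its original instance.
```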

P.S. Oh, also... I set up a P-1 assignment for myself in 332M as a test. 17 days on the lone CPU core... I don't think it will make sense to make this worktype available to the Colab instances.

Runtime Error 2020-05-16 16:37

[QUOTE=chalsall;545531]A new instance will first be re-issued work partially completed, in descending order. However, an assignment is only re-issued after no updates for 30 minutes.

This can lead to a bit of a "queue" if someone does the "Factory Reset" trick a few times -- anything which hasn't had work done on it is thrown back into the pool, but if the instance isn't reset within two minutes it will report back some progress, and then the candidate is held until completion.

My advice is to just stick with it -- all work will (eventually) be completed.

P.S. Oh, also... I set up a P-1 assignment for myself in 332M as a test. 17 days on the lone CPU core... I don't think it will make sense to make this worktype available to the Colab instances.[/QUOTE]

Got it. I've been cycling until I get P100s, that makes sense. Thanks!

And wow 17 days = 34+ days with the time limitations. Ouch!

chalsall 2020-05-17 17:19

[QUOTE=Runtime Error;545534]Got it. I've been cycling until I get P100s, that makes sense. Thanks![/QUOTE]

So, I've been thinking about how to handle these long-running tasks (and the restarting of same) in a better way.

One thing which could be done immediately would be to drop the number of assigned tasks per instance down to two, instead of three. The chances of a factor being found immediately after the start of the next job are very, very small, so mfaktc would (almost) never run out of work.

However, something to put out there... It would also be possible to only assign a single job at a time. The downside to this is there would be about 30 seconds of wasted compute between a job being finished, the next job being fetched, and mfaktc starting up again (along with the short self-test).

Thoughts? Perhaps make this optional, on a per-instance basis?
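For scale, a 30-second gap per job is a vanishing fraction of these run times:

```python
def overhead_fraction(job_hours, gap_seconds=30):
    """Fraction of wall-clock lost to the fetch + self-test gap when
    assignments are issued one at a time."""
    return gap_seconds / (job_hours * 3600 + gap_seconds)

# ~0.05% of a 17.5 h 80->81 job, ~0.4% of even a 2 h 77->78 job
for h in (17.5, 2.0):
    print(f"{h} h job: {overhead_fraction(h):.3%}")
```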

axn 2020-05-17 17:45

[QUOTE=chalsall;545633]The downside to this is there would be about 30 seconds of wasted compute between a job being finished, the next job being fetched, and mfaktc starting up again (along with the short self-test).[/QUOTE]
The horror, the horror!

James Heinrich 2020-05-17 17:58

I think the benefit of having fewer half-done assignments hanging around is well worth losing a minute a day or thereabouts.

Runtime Error 2020-05-17 18:51

[QUOTE=chalsall;545633]So, I've been thinking about how to handle these long-running tasks (and the restarting of same) in a better way.

One thing which could be done immediately would be to drop the number of assigned tasks per instance down to two, instead of three. The chances of a factor being found immediately after the start of the next job are very, very small, so mfaktc would (almost) never run out of work.

However, something to put out there... It would also be possible to only assign a single job at a time. The downside to this is there would be about 30 seconds of wasted compute between a job being finished, the next job being fetched, and mfaktc starting up again (along with the short self-test).

Thoughts? Perhaps make this optional, on a per-instance basis?[/QUOTE]

Sweet! I currently have 9 in-progress assignments at the 81-bit level. One of them has been stuck at 95% for a few days, presumably due to me cycling. The one-assignment per notebook rule for 81-bits would be welcomed!

Also, I just got kicked out of Colab after 1hr30min :sad:

Uncwilly 2020-05-18 03:47

GPUto72 is not seeing (in the stats) the factor found up in the 332M range on one of my colab sessions.

kladner 2020-05-18 04:10

[QUOTE=Runtime Error;545646]Sweet! I currently have 9 in-progress assignments at the 81-bit level. One of them has been stuck at 95% for a few days, presumably due to me cycling. The one-assignment per notebook rule for 81-bits would be welcomed!

[U]Also, I just got kicked out of Colab after 1hr30min [/U]:sad:[/QUOTE]
That hasn't happened to me, yet. However, I am feeling that "THEY" are onto my dodges around usage limits. The GPU Nazi is giving me a lot of "No GPU for You!" I only seem to be able to run GPUs on my paid account, and that has shot its wad, at the moment.

EDIT: But when I turned off GPU in the settings of all the notebooks, and then tried running, I got 4 CPU/P-1 instances running with RAM showing 'BUSY' on all of them. I think I was getting more push-back when I left GPUs enabled, when I knew it wasn't going to let me have them. There were more 'excess sessions' warnings and it seemed the system would only allow one. This preemptive disabling of GPUs might help with free accounts, too.

I can usually run two notebooks on free accounts. Trying to run multiple accounts simultaneously has not worked out for me, at least, so 2 (free) or 4 (paid) notebooks are the limits in my experience. I would be happy to hear if others have gotten more going. (I know that chalsall and others are running multiple instances through VPNs and other more abstruse means, but I am not running in that class so far. Ignorance and laziness stand in the way.)

chalsall 2020-05-18 20:37

[QUOTE=Runtime Error;545646]The one-assignment per notebook rule for 81-bits would be welcomed![/QUOTE]

OK, version 0.423 is now in production. Does the one assignment at a time thing.

A bit of compute is wasted (as a %) on the shorter-running jobs, but I'm too busy at the moment with other things to make this smarter. I have mapped out a solution (handing out the next assignment a few minutes before the first expires), but it will take a little while to implement.

chalsall 2020-05-18 20:50

[QUOTE=Uncwilly;545684]GPUto72 is not seeing (in the stats) the factor found up in the 332M range on one of my colab sessions.[/QUOTE]

Copy. Will drill down in the next 48 hours.

Uncwilly 2020-05-18 23:19

[QUOTE=chalsall;545767]Copy. Will drill down in the next 48 hours.[/QUOTE]
Specifically I should have said the graph.

Dylan14 2020-05-19 16:46

One thing I noticed on the View Assignments page is that the checkbox that is used to extend/release assignments only appears on non-Colab assignments. Is this an intentional thing, or is that a bug?

chalsall 2020-05-20 14:34

[QUOTE=Dylan14;545861]Is this an intentional thing, or is that a bug?[/QUOTE]

Intentional.

There's a bit of additional back-end stuff associated with a Colab assignment, so rather than writing the code which deals with that in the case of deletions I just don't let people delete.

This assumes that people will regularly connect with Colab to complete assignments. Those which have been abandoned (read: more than a week or so old) I just move over to my own account to finish off.

LaurV 2020-05-21 15:52

[QUOTE=chalsall;545965]I just move over to my own account to finish off.[/QUOTE]
Do you also give them the credit? If they did 57 classes from 960, and you do the rest, you should give them 5.93% of the credit... :razz:
(see how you explain that to google, and to primenet, hihi)

Chuck 2020-05-31 11:59

Notebook results not being auto-submitted
 
For the last day my notebook results have not been auto-submitted. I've done two manual submissions so far.

chalsall 2020-05-31 18:25

[QUOTE=Chuck;546872]For the last day my notebook results have not been auto-submitted. I've done two manual submissions so far.[/QUOTE]

There was a bit of a weird cert problem on Mersenne.org yesterday, so I turned off a bunch of the spiders while Aaron worked the issue. Forgot to turn the submission spider back on.

Chuck 2020-05-31 20:29

[QUOTE=chalsall;546890]There was a bit of a weird cert problem on Mersenne.org yesterday, so I turned off a bunch of the spiders while Aaron worked the issue. Forgot to turn the submission spider back on.[/QUOTE]

Thanks. Working fine again.

storm5510 2020-07-25 15:06

1 Attachment(s)
Just in case anyone might be interested, below is my [I]Misfit[/I] workaround batch file. [I]Misfit[/I] itself gave me some real problems, so I stopped using it. The batch file only reserves; it does not submit results. That would need to be done manually. Even so, it keeps the process running.

[CODE]@echo off

rem Auto fetch from GPU72.
rem Press Ctrl-C to stop batch.
rem Restart mfaktc-2047 manually
rem to complete assigned work, if any.

:top
cls
call mfaktc-2047.exe
timeout 2

if exist c:\mfaktc\GPU72FETCH_logs\*.html (
del c:\mfaktc\GPU72FETCH_logs\*.html
)

call c:\gpu72\gpu72workfetch c:\gpu72\gpu72config.txt
if %errorlevel% gtr 1 goto errHandler
echo.
goto top
:errHandler
timeout 60
goto top[/CODE]Note the use of the "call" statement. This returns control to the batch file once the following process is complete. The "GPU72" folder was a leftover. It contains all the binaries needed to complete the fetch. I gave [I]mfaktc[/I] the custom name to distinguish it from older versions which I cannot run on one of my machines. This batch needs to reside in the same folder as [I]mfaktc[/I].

The contents of the GPU72 folder are in the attached zip archive. The contents of [U]GPU72config.txt[/U] will need to be edited to add credentials and make other changes, if needed.

chalsall 2020-07-26 17:54

GPU72 -- Time to rethink things...
 
Hey all.

OK, because of some [URL="https://mersenneforum.org/showthread.php?t=25638"]very cool work by Mihai, George, et al[/URL] we need to rethink where the "optional" TF'ing level is. Basically, we will soon be at the point where DC'ing will no longer be needed to verify that a Candidate is composite.

This means that instead of a factor-finding TF'ing run saving two LL tests, it instead will only be saving one (plus the much less expensive Cert run).

So, instead of GPU72 targeting 77 bits (we should really have been going to 78), we'll soon start releasing at 76. This should be approximately "optimal" according to James' [URL="https://www.mersenne.ca/cudalucas.php?model=706"]economic cross-over analysis[/URL] charts. (Although some cards should still be going higher, such as [URL="https://www.mersenne.ca/cudalucas.php?model=745"]Compute v7.5 kit[/URL].)

Because the transition to the VDF versions of Prime95 etc will take a while, and because I like things to be neat, I'd like to finish taking 9xM up to 77 bits (only ~5,600 to go). For the 10xM and 11xM ranges, we'll release at 76 when the demand requires it.

To be honest, this comes at a very timely moment. Ben is screaming through the candidates, and it looked like we'd need to start releasing sub-optimally anyway.

Also, tangentially... Although Reb and KEP demonstrated empirically that BOINC could provide a serious amount of GPU resources, I simply haven't had the time to take the GPU72 BOINC system past alpha. I've been dragged into a huge and mission-critical project which will prevent me from doing anything but maintenance on GPU72 until at least November. (Can't talk about it yet; I can't even count on one hand how many NDA's I'm under.)

Lastly... While we're now under less of a crunch, we could still use some more TF'ing firepower to build up a bit more of a buffer to 76 bits. Anyone who has any GPUs they might throw our way, it would be appreciated (even if only going to 74 or 75).

Or, for those doing the GPU72 TF Colab Notebook thing, if you're running fewer than four accounts, perhaps consider creating a few more (and/or, if you're in the USA, consider spending the $10 a month for the Pro access (much cheaper than buying your own kit))... :smile:

Thoughts? Comments? Suggestions?

And, as always, thanks for all the cycles everyone! :tu:

VBCurtis 2020-07-26 19:41

[QUOTE=chalsall;551680]....
So, instead of GPU72 targeting 77 bits (we should really have been going to 78), we'll soon start releasing at 76. .....

Thoughts? Comments? Suggestions?

And, as always, thanks for all the cycles everyone! :tu:[/QUOTE]

If we previously really should have been going to 78, shouldn't we now go to 77? The LL test work is not quite cut in half because of CERT work, so we should move down less than one bit. Combine that with the exponent range increasing to 10xM from 9xM, and it seems like changing nothing might be closest to optimal?

Or is the need to keep the queue fed too great to have the luxury of TF77?

chalsall 2020-07-26 19:54

[QUOTE=VBCurtis;551684]If we previously really should have been going to 78, shouldn't we now go to 77? The LL test work is not quite cut in half because of CERT work, so we should move down less than one bit.[/QUOTE]

Nope. Where James shows we should be going for DC (the yellow/orange line) is now where we should be going for VDF tests. Remember that James' analysis includes the percentage of triple tests needed.

[QUOTE=VBCurtis;551684]Or is the need to keep the queue fed too great to have the luxury of TF77?[/QUOTE]

Basically.

I had modeled that I would have the GPU72 BOINC system online in time, but then this project was dumped on me. (As usual, I was brought in when it was already in crisis; impossible (but politically unmovable) deadline...)

James Heinrich 2020-07-26 20:09

[QUOTE=VBCurtis;551684]The LL test work is not quite cut in half because of CERT work, so we should move down less than one bit.[/QUOTE]Actually I think it might be slightly better than that. I believe the current assumption is that a DC costs 105% of the first test (same effort, but 1:20 will need a triple-check). If I read the numbers right, a first-time PRP needs 100.2% and the check on the cert is 0.2%, perhaps 0.205% if we need a TC. If that holds, then LL = 205% and PRP+cert = 100.4% = 0.4898 LL.
I'm ignoring the possibility that the original PRP might be wrong and need redoing; I'm not sure if that's likely with the current error checking(?). I suppose it might be, but that pushes it back up to ~0.5, at least close enough not to worry about the difference.
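James's arithmetic can be sanity-checked in a few lines (sticking with the thread's favourite language). The percentages are his stated assumptions from the post above, not measurements:

```perl
use strict;
use warnings;

# Costs in units of one first-time test (figures from the post above).
my $ll_first  = 1.00;     # first-time LL test
my $ll_dc     = 1.05;     # double-check, inflated by ~1:20 triple-checks
my $prp       = 1.002;    # first-time PRP with proof-generation overhead
my $cert      = 0.002;    # checking the certificate

my $ll_total  = $ll_first + $ll_dc;    # 2.05
my $prp_total = $prp + $cert;          # 1.004

printf "PRP+cert / LL+DC = %.4f\n", $prp_total / $ll_total;   # 0.4898
```

So a found factor saves roughly half the verification work it used to, which is the basis for dropping the release level by about one bit.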

But also bear in mind that while this is an exciting new idea, it's not quite implemented yet, and even when George/Mihai get out of testing and publish their updated software, a large bulk of the installed base will be running old versions of Prime95/etc; it will take years before this becomes the dominant calculation.

As always, I expect the target bitlevel will be more influenced by available TF firepower than what is mathematically "optimal".

storm5510 2020-07-27 00:28

The Spider
 
[QUOTE]Also, while this was created under Linux, it should be possible to make it work under Windows without too much effort. Feedback on what is needed to do so appreciated. [/QUOTE]I just pulled in the spider and changed what was needed. I am using [I]Strawberry Perl[/I] under [I]Windows 10 Pro[/I] and have written a few short scripts with it. So, I understand parts of the spider. There should not be any language differences which would keep it from running with Strawberry. I will simply have to try it and see what it does. Just to be safe, I plan to make a copy of my results and place it in another folder.

storm5510 2020-07-27 01:39

The Spider
 
I tried it and here is what it did:

[QUOTE]20200727_003523 INFO: Submission spider starting...
20200727_003523 INFO: Attempting to log into Primenet. This can take a little while...
Use of uninitialized value $Line in pattern match (m//) at C:\mfaktc\spider.pl line 147.[/QUOTE] The third line above appeared hundreds of times; I had to Ctrl-C to stop it. This may be because I had to comment out the "exit" command to keep the window open so I could read it. I had the [I]mfaktc[/I] console open and ran the spider from there. It flashed a second console for an instant. I needed a workaround.

[CODE]Line 142: my ($Line) = $_;

Line 147: if ($Line =~ /no factor for (M\d*)/ || $Line =~ /(M\d*) has a factor/
|| $Line =~ /(M\d*) no factor from/) {
[/CODE]Line 142 is the declaration. It is different from what Strawberry would use. If it has no immediate value, then Strawberry might expect "my $Line;"

Line 147 tries to use $Line but cannot. "Uninitialized" it says. This would point back to line 142. Perhaps Strawberry does [U]not[/U] understand the declaration.
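For what it's worth, that warning usually means the value being matched was undef when the regex ran (e.g. reading past end of input), rather than the [I]my[/I] declaration misbehaving. A minimal standalone sketch (not the actual spider code; the function name and sample line are made up) that guards against it:

```perl
use strict;
use warnings;

# Sketch: extract the exponent from a results line, guarding against
# undef input so the regex never triggers "Use of uninitialized value".
sub parse_result {
    my ($Line) = @_;
    return undef unless defined $Line;
    if (   $Line =~ /no factor for (M\d+)/
        || $Line =~ /(M\d+) has a factor/
        || $Line =~ /(M\d+) no factor from/) {
        return $1;
    }
    return undef;
}

print parse_result("M1277 has a factor: 12345"), "\n";   # M1277
```

This behaves the same on Strawberry and Linux Perl; the declaration syntax itself is portable.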

KEP 2020-07-29 15:37

[QUOTE=chalsall;551680]Also, tangentially... Although Reb and KEP demonstrated empirically that BOINC could provide a serious amount of GPU resources, I simply haven't had the time to take the GPU72 BOINC system past alpha.[/QUOTE]

I'm glad to read that you are going BOINC. SRBase is still doing TF, but it is breadth-first and there is no way to change that. Are there none of the excellent guys who have set up projects previously who can help you get BOINC going well before November?

By November SRBase will most likely have cleared a big chunk of the 73-74 bit level, and next year I expect (if the firepower remains) that SRBase will have cleared all n's to at least 76 bits (hopefully more). With your BOINC project there will at least be a breadth-first and a much more imminent wavefront-first effort :smile: Really looking forward to seeing your BOINC project :smile:

Some advice in regards to BOINC: remember badges, as seen at Primegrid, and remember to have enough badges such that you can retain the badge-chasing users :smile:

chalsall 2020-07-29 16:02

[QUOTE=KEP;551883]Are there none of the excellent guys who have set up projects previously who can help you get BOINC going well before November?[/QUOTE]

If there are, please step forward! :smile:

I'd provide an independent virtual machine, and handle the GPU72 side of the IPC. The VM, Software stack, IP, DNS, "trust", etc is already set up. I just need someone who's done BOINC before to set up the work-types and the badges. (And then extensive beta testing, of course.)

[QUOTE=KEP;551883]An advice in regards to BOINC, remember badges as seen at Primegrid and remember to have enough badges, such that you can maintain the badge chasing users :smile:[/QUOTE]

Appreciate the knowledge. And yup, already modeled. Also, the credits need to be modeled to reward wave-front work, while at the same time not being inflationary.

chalsall 2020-07-29 16:22

[QUOTE=storm5510;551711]Line 142 is the declaration. It is different from what Strawberry would use. If it has no immediate value, then Strawberry might expect "my $Line;"

Line 147 tries to use $Line but cannot. "Uninitialized" it says. This would point back to line 142. Perhaps Strawberry does [U]not[/U] understand the declaration.[/QUOTE]

Hmmm... Perhaps... Or, $_ doesn't have a value. Can't determine without looking at the full script, and possibly running with test data.

I don't have any time at the moment to drill down, but I'll plan to look at that over the weekend on my Windows development box.

storm5510 2020-07-29 16:55

[QUOTE=chalsall;551889]Hmmm... Perhaps... Or, $_ doesn't have a value. Can't determine without looking at the full script, and possibly running with test data...[/QUOTE]

I tried changing the declaration to [I]my $Line; [/I]Doing so had no effect. [I]Perl [/I]can be more than a bit ambiguous with its error reporting. Just because it says there is an error on line x, it might not be on that particular line, but somewhere above. I spent an hour looking at that small area of the script. I didn't see anything obvious.

The very top line, appearing as a comment, references Linux folders. I didn't change it. If it is just a comment, then it should have no effect. Another item appearing in (Windows) Strawberry scripts is [I]$|=1; [/I]What this does, I haven't a clue. It appears below the [I]use[/I] statements.

KEP 2020-07-29 17:59

[QUOTE=chalsall;551885]Appreciate the knowledge. And yup, already modeled. Also, the credits need to be modeled to reward wave-front work, while at the same time not being inflationary.[/QUOTE]

Primegrid uses these "bonus" steps to push the interest in the conjectures with very long workunits:

This project has a 10% long job credit bonus and a 10% conjecture credit bonus. (Extended Sierpinski Problem)

This project has a 35% long job credit bonus and a 10% conjecture credit bonus. (Prime Sierpinski Problem)

This project has a 50% long job credit bonus and a 10% conjecture credit bonus. (Seventeen Or Bust)

This project has a 10% conjecture credit bonus. (Sierpinski Riesel base 5)

This project has a 10% long job credit bonus and a 10% conjecture credit bonus. (The Riesel Problem)

Maybe these direct copy-pastes from PG can offer you a hint toward non-inflated wavefront credit. The reason behind the "inflated" credit was to stimulate users to offer resources they might otherwise spend on non-conjecture work or a lot of small-n testing. It sure did seem to help the conjectures with these bonuses, and I sure do expect the much-needed 77-78 bit work would benefit from a 50-60% long-duration bonus :smile: If you have any questions or need any suggestions or experienced advice, feel free to reach out.

chalsall 2020-07-29 18:18

[QUOTE=KEP;551910]If you have any questions or need any suggestions or experienced advice, feel free to reach out.[/QUOTE]

Copy. Thanks. :tu:

bayanne 2020-07-30 04:56

Can't seem to grab any exponents from the 332000000 range and up at present, although there seem to be plenty available.

Can you advise?

bayanne 2020-07-30 09:47

[QUOTE=bayanne;551974]Can't seem to grab any exponents from the 332000000 range and up at present, although there seem to be plenty available.

Can you advise?[/QUOTE]

Now been able to grab 77 bit exponents

chalsall 2020-07-30 14:32

[QUOTE=bayanne;551989]Now been able to grab 77 bit exponents[/QUOTE]

Yup. The work to 76 is exhausted, so 77 is the next "cheapest" available WU.

The web-based UI was updated shortly before the 76 work was fully assigned. The Colab select statement is FactTo ascending, Exponent ascending, so it should have given out the last score of work to 76, and then automatically moved to 77.

BTW... Since very few people were actually going to 81, I've set GPU72 to release 332M candidates at 79 bits. Those who are serious about this kind of work and are using Colab might consider choosing "LMH Depth First", to get them off our books.

chris2be8 2020-07-30 15:43

[QUOTE=storm5510;551895]Another item, appearing in (Windows) Strawberry scripts, is [I]$|=1; [/I]What this does, I haven't a clue. It appears below the [I]use[/I] statements.[/QUOTE]

From man perlvar (on Linux, but should apply to any OS):
[quote]
HANDLE->autoflush( EXPR )
$OUTPUT_AUTOFLUSH
$| If set to nonzero, forces a flush right away and after every write or print on the currently selected output channel. Default is 0 (regardless of whether the channel is really buffered by the system or not; $| tells you only whether you've asked Perl explicitly to flush after each write). STDOUT will typically be line-buffered if output is to the terminal and block-buffered otherwise. Setting this variable is useful primarily when you are outputting to a pipe or socket, such as when you are running a Perl program under rsh and want to see the output as it's happening. This has no effect on input buffering. See "getc" in perlfunc for that. See "select" in perlfunc on how to select the output channel. See also IO::Handle.

Mnemonic: when you want your pipes to be piping hot.
[/quote]

Chris

storm5510 2020-07-30 16:53

[QUOTE=chris2be8;552026]From man perlvar (on Linux, but should apply to any OS):

Chris[/QUOTE]

In my experience with Strawberry (Windows), "$|=1;" did not work this way. It would hold on to the output until I stopped the script; then it would dump to the file. I did it like this:

[QUOTE][COLOR=Gray]open my $pl,'>>','plist.txt';[/COLOR]
$pl->autoflush;[/QUOTE]It would do file writes at what appeared to be regular intervals. There have to be differences between Linux Perl and Windows Perl. This is probably one of many.

chalsall 2020-07-30 17:05

[QUOTE=storm5510;552031]There has to be differences between Linux Perl and Windows Perl. This is probably one of many.[/QUOTE]

Please trust me on this: MANY differences. It's a bit like Java. "Write once, debug everywhere."

Forget about using fork(), unless you're short-running and can simply accept memory leakage. Etc, etc, etc...

storm5510 2020-07-30 17:31

[QUOTE=chalsall;552032]Please trust me on this: MANY differences. It's a bit like Java. "Write once, debug everywhere."

Forget about using fork(), unless you're short-running and can simply accept memory leakage. Etc, etc, etc...[/QUOTE]

As I suspected...

I've looked at the spider quite a bit. Most of the code, I cannot grasp. It goes way beyond my [I]Perl[/I] skill level. On the surface, it seems too complex, as in trying to do many things. I wonder if all of that is really necessary.

chalsall 2020-07-30 17:56

[QUOTE=storm5510;552038]I've looked at the spider quite a bit. Most of the code, I cannot grasp. It goes way beyond my [I]Perl[/I] skill level. On the surface, it seems too complex, as in trying to do many things. I wonder if all of that is really necessary.[/QUOTE]

LOL... If I may please share...

IMO, Perl has a bad rap along the lines of "Write once, read never". Mostly because of the tight tying of [URL="https://en.wikipedia.org/wiki/Regular_expression"]Regular Expressions[/URL].

Perl really is the "Internet's Duct-tape". It does Strings in its sleep, and makes gluing software components together trivial.

The learning-curve is perhaps a bit steep, but it's not vertical. And knowing how to use it is very empowering.

Axiom: Always choose the best tool for the job. The more tools you have, the less work you yourself will do. :smile:

PhilF 2020-07-30 19:35

[QUOTE=chalsall;552040]Perl really is the "Internet's Duct-tape". It does Strings in its sleep, and makes gluing software components together trivial.[/QUOTE]

Sounds like you are describing awk:

[STRIKE]Perl[/STRIKE] Awk really is "[STRIKE]Internet[/STRIKE] Unix's Duct-tape". It does Strings in its sleep, and makes gluing software components together trivial. :smile:

chalsall 2020-07-30 19:52

[QUOTE=PhilF;552047]Sounds like you are describing awk:[/QUOTE]

Hey, James and I argue enough about what the P in LAMP should really mean. (And then along came Python, to crash the party (and enforce indentation)...) :wink:

James Heinrich 2020-07-30 19:55

[QUOTE=chalsall;552049]Hey, James and I argue enough about what the P in LAMP should really mean[/QUOTE]There's no argument. I was just lurking and keeping myself quietly out of trouble... :hello:

PhilF 2020-07-30 20:25

[QUOTE=James Heinrich;552050]There's no argument. I was just lurking and keeping myself quietly out of trouble... :hello:[/QUOTE]

Oh Oh... I'm in trouble. I'm a LEMP guy. :blahblahblah:

chalsall 2020-07-30 20:31

[QUOTE=PhilF;552051]Oh Oh... I'm in trouble. I'm a LEMP guy. :blahblahblah:[/QUOTE]

Socialist!!! :wink:

My rule is, I don't care what you use; just get the job done. :smile:

James Heinrich 2020-07-30 20:40

[QUOTE=PhilF;552051]Oh Oh... I'm in trouble. I'm a LEMP guy. :blahblahblah:[/QUOTE]Don't worry, I won't yell at you. [COLOR="Silver"](my server also runs nginx, but don't tell Chris)[/COLOR]

PhilF 2020-07-30 21:22

[QUOTE=James Heinrich;552053]Don't worry, I won't yell at you. [COLOR="Silver"](my server also runs nginx, but don't tell Chris)[/COLOR][/QUOTE]

Chris who? :confused2: :cool:

chalsall 2020-07-30 21:34

[QUOTE=PhilF;552060]Chris who? :confused2: :cool:[/QUOTE]

Chris Halsall. Sounds like Hassel, but spelled differently.

I hope you'll all forgive me for this, but I enjoy the banter in a less formal space.

Because there are so many Chris' named after a certain important person born around when I was, I find a huge amount of "name-space collision". In one company I was involved with there were six (6) Chris'. We enumerated ourselves; I was Chris.2.

Whenever I enter a Video Conference Bridge I'm Halsall. The other Chris' are other people. :smile:

Prime95 2020-07-30 23:41

[QUOTE=chalsall;552063]
Because there are so many Chris' named after a certain important person born around when I was:[/QUOTE]

Don't take this the wrong way, probably more a reflection on my aging brain, but I can't think of a single important person named Chris.

Uncwilly 2020-07-31 00:25

Christopher Walken
Christopher Columbus Kraft
Christopher Reeve
Chris Christie
Chris Hadfield
Cristobal Colon

storm5510 2020-07-31 01:13

LEMP. I thought this was a new kind of "weed." :missingteeth:

[I]Perl[/I] is like duct tape. It will stick to one specific task. Try to modify a script to do something slightly different and it's often better to start from scratch; otherwise, it can descend into a fur-ball quickly.

Prime95 2020-07-31 02:53

[QUOTE=Uncwilly;552074]Christopher Walken
Christopher Columbus Kraft
Christopher Reeve
Chris Christie
Chris Hadfield
Cristobal Colon[/QUOTE]

Of that list, the only one I came up with was Chris Christie -- and I didn't think many babies were named after him :)

I also thought of Chris Wallace.

So you beat me, 6 to 2. Well done.

Still, who is the mystery man chalsall is named after????

Aramis Wyler 2020-07-31 03:04

I don't think he meant that people were named after a Chris born at the same time as him, but that there were a lot of people born at the same time as him named after an important Chris, who I would guess is Christopher Columbus, Admiral of the Ocean Sea, Viceroy, and Governor of the Indies.

chalsall 2020-07-31 03:20

[QUOTE=Aramis Wyler;552086]I don't think that he meant that people were named after a Chris born at the same time as him, but that there were a lot of people born at the same time as him named after an important Chris, who I would guess is Christophe Columbus, Admiral of the Ocean Sea, Viceroy, and Governor of the Indies.[/QUOTE]

Closer than most guesses...

The joke in the family is I was named Christopher, in the hopes I would be Christ Like. My levitating (mostly by repeatedly blowing myself up experimenting with accelerants) quickly suggested my mother's wishes would not be delivered.

