#188
P90 years forever!
Aug 2002
Yeehaw, FL
7,537 Posts
Aga seems to be worried about bogus results being returned from a modified client to pad one's stats.

I - and I think Scott - am worried about runaway clients eating through all the available exponents in a hurry. Both concerns are valid; we will have to carefully examine each message that can be sent to the server and look for ways to minimize or eliminate possible damage from modified clients. We should be able to identify fake LL results without requiring triple-or-more checking. One new feature I want to add is to make sure the double-check is assigned to a different user ID and a different team. We should also make sure that at least one LL result comes from a trusted prime95 client - this should be easy, as 99% of results come from prime95/mprime executables built by me.
#189
Oct 2002
Lost in the hills of Iowa
26·7 Posts
I don't know if the current server does this or not, but a DC assignment should *never* be given to the same user that did the first-time check - and I think it should never be assigned to a teammate, if teams are allowed to swap exponents around.
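The rule above is simple enough to state in code. Here is a minimal sketch - class and method names (`DoubleCheckFilter`, `mayAssign`) are illustrative, not the actual PrimeNet schema:

```java
// Hypothetical sketch of the double-check assignment rule discussed above:
// never the same user as the first-time test, and never a teammate.
public class DoubleCheckFilter {
    /** Returns true if the candidate may receive this double-check assignment. */
    public static boolean mayAssign(String firstTestUser, String firstTestTeam,
                                    String candidateUser, String candidateTeam) {
        if (candidateUser.equals(firstTestUser))
            return false;                       // same user as the first test
        if (candidateTeam != null && candidateTeam.equals(firstTestTeam))
            return false;                       // teammate of the first tester
        return true;
    }

    public static void main(String[] args) {
        System.out.println(mayAssign("alice", "teamA", "alice", "teamB")); // false
        System.out.println(mayAssign("alice", "teamA", "bob", "teamA"));   // false
        System.out.println(mayAssign("alice", "teamA", "bob", "teamB"));   // true
    }
}
```

The check would run at assignment time on the server, so a modified client cannot route a double-check back to its own first-time result.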
#190
Jan 2003
Altitude>12,500 MSL
101 Posts
Passwords in stats servers: No big deal? To me, the biggest risk of having more servers in the picture is that we might get sloppy with our maintenance procedures or server install/change configurations. If done right, however, I don't see much concern about stats servers having login passwords mirrored and used for detailed account viewing. You are right that they would need to be protected within a trusted server, though. The v2-v4 systems have been like this for 5+ years - maybe not the best precedent, but it has been OK so far.

Security mechanisms, continued: I like the 32-bit key-per-assigned-exponent idea. Maybe we can do that for the clients, too (I think someone else said that already), and revoke the access key if we think they are misbehaving?

Throttling open-source client transaction rates: George is right that I'm mostly concerned with runaway or maliciously coded clients. How do we propose handling someone exceeding a throttle rate limit? If they are getting all the way to the main database before we decide whether someone is OK or not, we are taking the transaction hit already. Here's an idea to whack at: what if throttling were managed by the web layer above the database? (Remember that the web layer can reside on the same server as the database, or on a different one.) The current v4 web layer authenticates the message hash signature of trusted clients. A v5 web layer could maintain a local, self-populated cache/table that tracked hit rates by IP and unique machine ID ("MUID"?), and respond by passing the transaction through normally, ignoring it, revoking its client key (is that evil?), or replying with an error message - all without hitting the main database. The cache would be self-maintaining in the sense that it records and gates access but otherwise needs no loading or purging of data from the main v5 database. Actually, I believe throttling can be configured at the router or firewall, too - any experts?

JSP/Servlets technology: There are no requirements forcing a choice of Java over any other option. I've led a number of Java projects and, believe me, it's no cure-all, whatever the advertising says.

1. The right design will make the choice of stateless HTTP-to-database interface (JSP, ASP, CGI, ISAPI, etc.) irrelevant anyway.
2. Migrating pre-v23 machines: My feeling is we would more likely take the existing PrimeNet code and tweak it to use the new database, rather than rewrite that working legacy code in new JSP/servlets - or anything else for that matter. New efforts should focus on the new protocol and design.
3. Manual web forms pages: Maybe whoever built this sweet forum system should do the forms pages! :) In v4 we have a bunch of CGI Perl scripts invoking a special pnWebForm.exe CGI app to drive this, which then RPCs into the main system. I'd be happy to see someone run this football downfield in any language... (hey, the Super Bowl is in town here this weekend!)

Nominal server load: Here's a scaling factor for our thinking: the current v4 system, using 'slow' CGI, sees between 2 and 3 transactions per second, spiking to around 10 per second; it was stress-tested at nearly 30x that rate continuously for several days.

Usually machines just go MIA past a reasonable waiting period, presently set in v4 to 90 days of inactivity, where activity is defined as any client transaction with the server. Sometimes we assume they are dead (and expire their exponents), and then they come back online, submit a test result, and continue fine - but usually they don't come back. It would be good to know why machines 'die' and fall offline. Many - perhaps most? - are simply decommissioned by upgrades to new PCs. Others have a change in network environment that blocks their access to the server. Some folks just quit.

Check out this recent chart from the v4 server - the data is not completely correct owing to account transfers, etc., but is reasonably representative: http://scottkurowski.com/images/pnMachineAges.jpg

There's a fairly uniform distribution of ages, except for the first 4 months being risky or transfers. Not one machine is over 29 months old, at least as reckoned by machine ID.
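The local cache/table idea above could be sketched roughly as follows. Everything here is an assumption for illustration - the class name, the limits, and the `ip + "/" + muid` key format are mine, not v5 code - but it shows how escalating responses can be chosen with no main-database access:

```java
import java.util.HashMap;
import java.util.Map;

// Illustrative web-layer throttle cache: hit counts are tracked per
// (IP, machine ID) pair inside a time window, and the layer picks one of
// the responses described above (pass, error reply, silent ignore,
// key revocation) from memory alone.
public class WebLayerThrottle {
    public enum Action { PASS, ERROR_REPLY, IGNORE, REVOKE_KEY }

    private final long windowMillis;
    private final int softLimit;   // above this: reply with an error message
    private final int hardLimit;   // above this: silently ignore the transaction
    private final int banLimit;    // above this: revoke the client key
    private final Map<String, long[]> hits = new HashMap<>(); // key -> {windowStart, count}

    public WebLayerThrottle(long windowMillis, int soft, int hard, int ban) {
        this.windowMillis = windowMillis;
        this.softLimit = soft;
        this.hardLimit = hard;
        this.banLimit = ban;
    }

    public synchronized Action check(String ip, String muid, long nowMillis) {
        String key = ip + "/" + muid;
        long[] e = hits.get(key);
        if (e == null || nowMillis - e[0] >= windowMillis) {
            hits.put(key, new long[]{nowMillis, 1});   // fresh window
            return Action.PASS;
        }
        e[1]++;
        if (e[1] > banLimit)  return Action.REVOKE_KEY;
        if (e[1] > hardLimit) return Action.IGNORE;
        if (e[1] > softLimit) return Action.ERROR_REPLY;
        return Action.PASS;
    }
}
```

The table is self-maintaining in exactly the sense described: stale windows are simply overwritten on the next hit, so nothing needs loading or purging from the main v5 database.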
#191
"Richard B. Woods"
Aug 2002
Wisconsin USA
11110000011002 Posts
(I would be quite thankful, sir, if you would kindly move that thing over to the right so it does not crimp my genuine Foamation cheesehat.)
My "slow" system has certainly been reporting results to PrimeNet for longer than 29 months. I've never changed my original user ID or its original machine ID, and it has been steadily submitting accepted results as recently as last week.
#192
Jan 2003
Altitude>12,500 MSL
101 Posts
I updated the image, sorry - the updated chart makes a lot more sense. The oldest still-running node is, not surprisingly, my 'challenge' account's GIMPS_NODE0 at 70 months: the 200MHz PII that was my main PC when I wrote PrimeNet in 1997.
#193
Oct 2002
Lost in the hills of Iowa
26×7 Posts
I wonder what caused the big jump in machines around the 48-months-ago mark?
#194
Oct 2002
25 Posts
This basically raises a more general question: what are the passwords used for? Currently they are for stats viewing only. With v5, they are also going to authenticate team-affiliation changes. I think it's also reasonable to host the 'newsletter subscription' stuff at the server, too - embedding it in the client sounds a bit strange (though I understand well why it happened that way). And probably a few other things?

There are a few actions I can think of that might require keeping extra information at the stats servers (where we ought to keep only information that's really needed, anyway). For example, email addresses. Those are not really needed to show stats, but they will be needed for the usual function of mailing a new random password in place of a forgotten one. Maybe it will be desirable to use two passwords per user: one for browsing stats, and another for tweaking different things.

Let me repeat an idea I expressed a while ago: use three kinds of servers/modules, not two. The additional server (or servers, or web application) would be for tweaking the core database (like password changes), leaving the stats servers with only stats processing and display work and nothing else. I'm still not sure which structure is better for a particular task; we need to describe in detail all operations/actions provided for GIMPS clients prior to making a decision.
Though... if the exponent key allows us to avoid an extra password, it might be a worthy thing to use.
There is another approach possible: each assigned exponent could be provided with a digital signature over the exponent value and the test number (0 for initial LL, 1 for double-check, -56 for factoring up to 56 bits, etc.). The private key is used only when exponents are added to the database or marked for additional testing, so it's possible to avoid keeping the key at all servers. Using the public key, any core server will be able to easily check whether the returned exponent AND the work done match the signature (this sounds very useful). This check is purely CPU-bound and causes no disk I/O, so it is very robust against DoS attacks. (I should also mention that this does not involve using cryptographic modules within the GIMPS clients - the signature is no more harmful than a browser cookie; thus no problems with copyrights and laws in certain countries.)
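A sketch of how such a signed assignment could work, using the standard java.security API. The algorithm choice (RSA with SHA-256), the class name, and the "exponent:testNumber" message format are my assumptions for illustration, not part of the proposal:

```java
import java.nio.charset.StandardCharsets;
import java.security.*;

// The assignment server signs (exponent, test number) with a private key
// kept only on that machine; any core server can verify a returned result
// with the public key alone - a pure CPU check, no disk I/O.
public class AssignmentSigner {
    // Generated once on the machine that hands out assignments;
    // only the public key is distributed.
    public static KeyPair generateKeyPair() {
        try {
            KeyPairGenerator g = KeyPairGenerator.getInstance("RSA");
            g.initialize(2048);
            return g.generateKeyPair();
        } catch (GeneralSecurityException e) {
            throw new IllegalStateException(e);
        }
    }

    // Test number as in the post: 0 = first LL, 1 = double-check,
    // -56 = factoring up to 56 bits, etc.
    private static byte[] message(long exponent, int testNumber) {
        return (exponent + ":" + testNumber).getBytes(StandardCharsets.UTF_8);
    }

    public static byte[] sign(PrivateKey priv, long exponent, int testNumber) {
        try {
            Signature s = Signature.getInstance("SHA256withRSA");
            s.initSign(priv);
            s.update(message(exponent, testNumber));
            return s.sign();
        } catch (GeneralSecurityException e) {
            throw new IllegalStateException(e);
        }
    }

    public static boolean verify(PublicKey pub, long exponent, int testNumber, byte[] sig) {
        try {
            Signature s = Signature.getInstance("SHA256withRSA");
            s.initVerify(pub);
            s.update(message(exponent, testNumber));
            return s.verify(sig);
        } catch (GeneralSecurityException e) {
            return false;
        }
    }
}
```

A forged or replayed result fails verification if either the exponent or the work type is changed, which is exactly the "exponent AND work done match" property described above.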
For example, a class with a static method

    boolean isok(String addr);

that returns true if the request should be served, or false if the requester should be sent to the forest. The class internally uses static object(s) to store state. Now you call the method at the beginning of each JSP or Servlet:

    <% if (!Throttler.isok(request.getRemoteAddr())) {
        out.println("ERROR 12345: you should go to the forest this time");
        return;
    } %>

And the rest of the JSP/Servlet code is not affected by the traffic-throttling feature at all. What else can be used without sacrificing performance and portability?
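A minimal implementation of such a Throttler class might look like this - entirely my sketch, and the one-request-per-second limit is an assumed policy, not anything decided in the thread:

```java
import java.util.HashMap;
import java.util.Map;

// Minimal Throttler sketch: allow at most one request per second per
// address, tracked in a static in-memory table as described in the post.
public class Throttler {
    private static final long MIN_INTERVAL_MS = 1000;  // assumed policy: 1 request/sec
    private static final Map<String, Long> lastSeen = new HashMap<>();

    /** Returns true if the request should be served, false if throttled. */
    public static synchronized boolean isok(String addr) {
        long now = System.currentTimeMillis();
        Long prev = lastSeen.put(addr, now);           // record this hit, fetch previous
        return prev == null || now - prev >= MIN_INTERVAL_MS;
    }
}
```

A real deployment would also need to evict stale entries eventually, but the point stands: the whole check is memory-only and keeps throttling decisions out of the database path.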
BTW, one more CPU-bound addition: the inter-server traffic is going to be highly compressible (a limited set of messages gets repeated again and again with only minor changes); together with the fact that there is no need for instant replication, it would be wrong to miss the opportunity to compress the network traffic. This can be done by tunneling the traffic over SSH, or by piping it through gzip, but I personally prefer using the standard java.util.zip.* package (basically an interface to zlib) - I bet I'd need 30 seconds to implement transparent compression of the replication traffic that way :) and then the code does not incur overhead like extra CPU context switches.
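As a rough illustration of how little code the java.util.zip route takes (the class and method names here are mine; a real replication link would wrap the socket streams rather than byte arrays):

```java
import java.io.ByteArrayInputStream;
import java.io.ByteArrayOutputStream;
import java.io.IOException;
import java.io.UncheckedIOException;
import java.util.zip.DeflaterOutputStream;
import java.util.zip.InflaterInputStream;

// Transparent deflate/inflate round trip using the standard java.util.zip
// streams (a thin interface to zlib). Highly repetitive replication
// messages compress very well.
public class CompressedPipe {
    public static byte[] compress(byte[] data) {
        ByteArrayOutputStream buf = new ByteArrayOutputStream();
        try (DeflaterOutputStream out = new DeflaterOutputStream(buf)) {
            out.write(data);
        } catch (IOException e) {
            throw new UncheckedIOException(e);
        }
        return buf.toByteArray();
    }

    public static byte[] decompress(byte[] data) {
        ByteArrayOutputStream buf = new ByteArrayOutputStream();
        try (InflaterInputStream in =
                 new InflaterInputStream(new ByteArrayInputStream(data))) {
            byte[] chunk = new byte[4096];
            int n;
            while ((n = in.read(chunk)) > 0) buf.write(chunk, 0, n);
        } catch (IOException e) {
            throw new UncheckedIOException(e);
        }
        return buf.toByteArray();
    }
}
```

Wrapping the replication connection's streams in DeflaterOutputStream/InflaterInputStream gives the same effect in-process, with no extra context switches, as the post argues.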
But the graph is interesting, thank you.
#195
Nov 2002
43 Posts
Couldn't you have sub-team queues and have members of the sub-team draw from that pool?

The "leader" of the team could manage the queue and handle moving work around at that level. If anything spurious came through, they could handle it or reassign it. Not every team would like that option, but the serious ones would probably be willing to police themselves a little.
#196
Oct 2002
Lost in the hills of Iowa
26×7 Posts
For a point of reference: I figured out a week or so back that my Tiger dual-Athlon had a 50/50 chance of cracking a DES "56-bit" key in about a year - roughly 2 years to exhaust the ENTIRE keyspace.

32-bit keys are NOT secure by most current measures - though if the server limits the number of connects from a single IP to about one per second, they should be secure enough.
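The arithmetic behind that claim, with assumed (purely illustrative) guess rates:

```java
// Back-of-envelope math: a 32-bit keyspace is tiny against offline attack,
// but a server-side limit of one try per second makes online exhaustion
// take on the order of a century. Both rates are assumptions.
public class KeyspaceMath {
    public static void main(String[] args) {
        double keys = Math.pow(2, 32);        // ~4.29e9 keys
        double offlineRate = 1e8;             // assumed guesses/sec on one fast PC
        double onlineRate = 1.0;              // throttled to 1 server query/sec

        // Offline: seconds to try every key locally (~43 seconds)
        System.out.printf("offline exhaustion: %.1f seconds%n", keys / offlineRate);

        // Online through the throttle: years to try every key (~136 years)
        System.out.printf("online exhaustion: %.0f years%n",
                          keys / onlineRate / (365.25 * 24 * 3600));
    }
}
```

So the security of a 32-bit per-exponent key rests entirely on the server enforcing the rate limit, not on the key length itself.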
#197
Aug 2002
111110012 Posts
Nathan
#198
Jan 2003
2×32 Posts
Is this project dead?