
Feedback Requested by Dr. Pande


ChasR (Senior Member, joined Apr 12, 2004, Atlanta)
Vijay Pande said:
How are things going? Feedback requested.

Postby VijayPande » 01 Nov 2011, 20:40
We've been pounding in several areas to try to greatly improve the donor experience, especially in terms of the client and the server. The client changes are more obvious, but the WS backend is also behaving (in general) a lot better these days I think.

I was curious to hear what DAB and Mods thought. What are the pressing issues from the donor point of view these days? I ask since we can see some light at the end of the tunnel for the v7 client to be released and so it's a good time for me to start planning long range thinking. So, it's useful to get feedback from others to incorporate into that planning.

While I can't promise that we'll do all of your suggestions, it's good for us at least to know what's on donors' minds and what their concerns are, and to try to do the best we can with the resources we have to address them. In general, people on the forum seem pretty happy with FAH, but I'm curious to see how we can improve things further.

Generally, I think things are going fairly well. There have been no controversial issues, or even many issues at all, of late on our forum (the latter may not be a good thing).

My own feedback would, as usual, be a gripe about the huge disparity in the points system. Uniprocessor and PS3 WUs are awarded so few points relative to bigadv, and especially 12-core bigadv, that interest in both has waned. Interest in the GPU client is waning too. Talk on the forum is centered on CPUs folding SMP and -bigadv SMP; not many posts ask "which GPU should I buy?". The QRB has an inordinate number of people delaying purchases, waiting for the next hardware generation, because an evolutionary improvement in hardware gives revolutionary improvements in ppd.

SMP PPD for a given machine tends to creep upward. When the change to the i5 benchmark machine was made, WU values were normalized so that a q6600 @ 3.0 would make about 4800 ppd. Now they are making up to 10,000 ppd. p6900 makes a lot more ppd than A2 bigadv WUs on the same hardware. Continual points inflation devalues all prior work.

Lack of a monitoring application keeps power folders from adopting an otherwise good v7 client.
 
ChasR said:
Lack of a monitoring application keeps power folders from adopting an otherwise good v7 client.

Has anyone come up with anything yet? Let me rephrase that. Has anyone come up with anything yet that is stable and usable for the donors at large?

I'm not aware of anything, save possibly FCI, the system written by smoking2000. It uses a local daemon approach, so each system to be monitored must have yet another program installed on it to provide data. This design allows for more data gathering, but at the cost of more setup work. I've never used it, but I've checked out smoking2000's server where it lives. Pretty cool system, but I don't know whether it has v7 capabilities yet. Smoking2000 has definitely built an interface to v7, as have I, but I'm not sure if it's hooked into FCI at this time. If it's not, he's in the same boat I am.

Others... FahMon, FahSpy, FAH GPU Tracker... none appear to have any progress made towards v7, if that's even on the radar.
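Now that v7 has working remote access, a third-party monitor may not even need an FCI-style per-machine daemon: the v7 client exposes a telnet-style command interface (by default on localhost port 36330) that answers commands like queue-info in PyON, a Python-literal format. A rough sketch of talking to it, with the port, banner handling, and message framing assumed from the v7 remote interface rather than taken from this thread:

```python
import ast
import socket

def parse_pyon(message: str):
    """Extract the Python-literal payload from a PyON message.
    Assumed framing: 'PyON 1 <name>\\n<python literal>\\n---'."""
    lines = message.strip().splitlines()
    if not lines or not lines[0].startswith("PyON"):
        raise ValueError("not a PyON message")
    body = "\n".join(lines[1:])
    body = body.rsplit("---", 1)[0]        # drop the trailing terminator
    return ast.literal_eval(body.strip())

def query_client(command: str, host: str = "127.0.0.1", port: int = 36330) -> str:
    """Send one command to a local v7 client and read the reply.
    Port 36330 is assumed to be the client's default."""
    with socket.create_connection((host, port), timeout=5) as s:
        s.recv(4096)                        # discard the welcome banner
        s.sendall(command.encode() + b"\n")
        chunks = []
        while b"---" not in b"".join(chunks):
            chunks.append(s.recv(4096))
        return b"".join(chunks).decode()

# Example (requires a running v7 client with remote access enabled):
# units = parse_pyon(query_client("queue-info"))
```

FCI's daemon approach would still win if you want host-level data (temps, fan speeds) that the client itself doesn't report.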

I've said it before and I'll say it again... and you can take this to the bank... I'm working on it! :) The 'I was up till midnight last night' kinda working on it. The 'I'll be up late tonight' kinda working on it; and I'm getting close to the point where the salvage ends and the portions that need to be redesigned begin.

Other good things too... did someone say Plugins!?!? Not that anyone is going to take advantage of it... but it's fun for me. This is a complete departure from the FahMon plugin I built to read FahMon client config files. The enhancements will certainly still be able to do this, and much, much more.
 
...and if you're looking for gripes to deliver to Dr. Pande, I have but one at the moment.

P6099 - this thing makes my i7s look like toys in the PPD department. This is from a 2600k @ 4.2GHz.

TPF - 00:06:36 / PPD - 6,062.1 (2.18 WUs)
 
Something's got to be wrong; that's worse than a Q6600 on that WU. I posted a bunch of results from my C2Qs in the "Would an ISP..." thread. Up to 12,700 ppd on a q9450 @ 3.6 running a GTX295 with two folding instances.
 
I'll check things out more closely later today. Sorry, didn't mean to turn this into troubleshooting but it looks like I have some trouble that needs shot.

All my i7s are running Ubuntu 10.10 in a VM with a Win7 x64 host. They seem to run A5 work and other A3 work just fine. I do believe this VM has an ext4 file system. Here's a log excerpt from the same VM clone showing the end of a P6900 WU. I don't see the "pause" everyone was talking about at the end of the WU, so I have just left well enough alone. However, could the ext4 be affecting P6099?

Oh, and that horrible time was from one of my 2600k rigs... here's one of my 920 rigs @ 3.8GHz. See, not just one rig. That makes me suspect the VM itself.

TPF - 00:10:03 / PPD - 3,235.0 (1.43 WUs)

Code:
[17:54:11] Completed 237500 out of 250000 steps  (95%)
[18:27:21] Completed 240000 out of 250000 steps  (96%)
[19:00:20] Completed 242500 out of 250000 steps  (97%)
[19:33:27] Completed 245000 out of 250000 steps  (98%)
[20:06:31] Completed 247500 out of 250000 steps  (99%)
[20:39:02] Completed 250000 out of 250000 steps  (100%)
[20:39:11] DynamicWrapper: Finished Work Unit: sleep=10000
[20:39:21] 
[20:39:21] Finished Work Unit:
[20:39:21] - Reading up to 52713120 from "work/wudata_01.trr": Read 52713120
[20:39:21] trr file hash check passed.
[20:39:21] - Reading up to 47067092 from "work/wudata_01.xtc": Read 47067092
[20:39:22] xtc file hash check passed.
[20:39:22] edr file hash check passed.
[20:39:22] logfile size: 195570
[20:39:22] Leaving Run
[20:39:27] - Writing 100145730 bytes of core data to disk...
[20:39:30]   ... Done.
[20:39:50] - Shutting down core
[20:39:50] 
[20:39:50] Folding@home Core Shutdown: FINISHED_UNIT
[20:39:52] CoreStatus = 64 (100)
[20:39:52] Unit 1 finished with 62 percent of time to deadline remaining.
[20:39:52] Updated performance fraction: 0.843096
[20:39:52] Sending work to server
[20:39:52] Project: 6900 (Run 30, Clone 14, Gen 73)


[20:39:52] + Attempting to send results [November 7 20:39:52 UTC]
[20:39:52] - Reading file work/wuresults_01.dat from core
[20:39:52]   (Read 100145730 bytes from disk)
[20:39:52] Connecting to http://130.237.232.141:8080/
[20:47:06] Posted data.
[20:47:06] Initial: 0000; - Uploaded at ~225 kB/s
[20:47:06] - Averaged speed for that direction ~212 kB/s
[20:47:06] + Results successfully sent
[20:47:06] Thank you for your contribution to Folding@Home.
[20:47:06] + Number of Units Completed: 70

[20:47:12] Trying to send all finished work units
[20:47:12] + No unsent completed units remaining.
[20:47:12] - Preparing to get new work unit...
[20:47:12] Cleaning up work directory
[20:47:13] + Attempting to get work packet
[20:47:13] Passkey found
[20:47:13] - Will indicate memory of 3017 MB
[20:47:13] - Connecting to assignment server
[20:47:13] Connecting to http://assign.stanford.edu:8080/
[20:47:13] Posted data.
[20:47:13] Initial: 8F80; - Successful: assigned to (128.143.199.97).
[20:47:13] + News From Folding@Home: Welcome to Folding@Home
[20:47:13] Loaded queue successfully.
[20:47:13] Sent data
[20:47:13] Connecting to http://128.143.199.97:8080/
[20:47:14] Posted data.
[20:47:14] Initial: 0000; - Receiving payload (expected size: 1765932)
[20:47:16] - Downloaded at ~862 kB/s
[20:47:16] - Averaged speed for that direction ~738 kB/s
[20:47:16] + Received work.
[20:47:16] Trying to send all finished work units
[20:47:16] + No unsent completed units remaining.
[20:47:16] + Closed connections
[20:47:16] 
[20:47:16] + Processing work unit
[20:47:16] Core required: FahCore_a3.exe
[20:47:16] Core found.
[20:47:16] Working on queue slot 02 [November 7 20:47:16 UTC]
[20:47:16] + Working ...
[20:47:16] - Calling './FahCore_a3.exe -dir work/ -nice 19 -suffix 02 -np 8 -checkpoint 30 -verbose -lifeline 2048 -version 634'
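As an aside, TPF for a log like the one above can be computed straight from the frame timestamps. A small sketch (the `[HH:MM:SS] Completed ... steps` format is taken from the excerpt; midnight rollover is handled since the log only records time of day):

```python
import re
from datetime import datetime, timedelta

# Matches frame-completion lines like:
# [17:54:11] Completed 237500 out of 250000 steps  (95%)
FRAME_RE = re.compile(r"\[(\d{2}:\d{2}:\d{2})\] Completed (\d+) out of (\d+) steps")

def frame_times(log_lines):
    """Return the elapsed time between consecutive frame-completion lines."""
    stamps = []
    for line in log_lines:
        m = FRAME_RE.search(line)
        if m:
            stamps.append(datetime.strptime(m.group(1), "%H:%M:%S"))
    deltas = []
    for a, b in zip(stamps, stamps[1:]):
        if b < a:                      # log rolled past midnight
            b += timedelta(days=1)
        deltas.append(b - a)
    return deltas
```

On the excerpt above this gives a TPF of about 33 minutes per frame, which is in line with a P6900 bigadv WU on an i7 in a VM.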
 
ext4 delays don't occur in VMware or on installations running alongside Windows. I can't tell you why, just that it is so.

I must ask you, the creator of HFM, what are you using to calculate that ppd? 10:03/frame on p6099 is 16,877.6 ppd.
 
HFM... and most likely incorrect data on p6099. :D Guess I need to do a manual refresh on the project data. :chair:

See... even the 'creator' can make a rookie mistake with his own system. I'll post back and let you know what I have in the data. One cool thing I've already got worked out for the next version is the ability to detect changes when loading new project data from the psummary. So the user will now see a report about what has and has not changed.
 
Yeah, this will screw up the calculation. So, never mind... back to your regularly scheduled thread. :)

Code:
p6099

Credit - 529
KFactor - 3.19
Preferred Days - 2.6
Maximum Days - 4
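For reference, running those stale numbers through the commonly cited QRB formula reproduces the low figure HFM showed: with base credit 529, k-factor 3.19, and the 4-day final deadline, a 10:03 TPF works out to roughly 3,240 PPD, right where the 3,235 came from. A sketch, assuming the published bonus formula points = base * max(1, sqrt(k * deadline / WU-time)):

```python
from math import sqrt

def qrb_ppd(base_credit, kfactor, deadline_days, tpf_seconds, frames=100):
    """Estimate PPD under the Quick Return Bonus.
    Assumed formula: points = base * max(1, sqrt(k * deadline / wu_days))."""
    wu_days = tpf_seconds * frames / 86400.0
    multiplier = max(1.0, sqrt(kfactor * deadline_days / wu_days))
    points = base_credit * multiplier
    return points / wu_days            # points earned per day

# Stale p6099 data from above, with the i7-920 rig's 10:03 TPF:
print(round(qrb_ppd(529, 3.19, 4, 10 * 60 + 3)))   # prints 3241, close to HFM's 3,235
```

The same formula puts the 2600k's 6:36 TPF at about 6,090 PPD versus the posted 6,062.1, so the small gaps are just TPF rounding; the real culprit is the understated base credit, since PPD scales linearly with it.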
 
Anything new on v7 & HFM?

Another gripe: EVGA bribing people to join their now-juggernaut team with up to $10/mo in store credits is very unfair to the other, fully volunteer teams.
 
Version 7 is getting close to a major new public release. It includes log filtering and working remote access. Setup is almost automatic, and it's easily reconfigured if you don't like the automatic settings. I now endorse the current public beta version even for noobs.

Harlam will have to fill you in on the v7 interface with HFM. He may be a bit distracted with his 4P build. With his four Magny-Cours Opteron 6174s, I expect him to outproduce all the current hardware in my sig with that one 4P machine. Somewhere around 500,000 ppd. Talk about unfair. ;)
 
4P or bust. Even an SR-3 can only get up to 12 actual cores :(

'bout the time I finally get a 4P, they'll raise the minimum again LOL
 
Actual cores don't matter; the logical core count is what qualifies. The SR-3 will be a good producer.
 
Anything new on v7 & HFM?

I've had a version running on a secondary box for a little while now. There are still some kinks but it seems stable (i.e. no crashes). ChasR is right... have been distracted with my 4P build but I plan on working on HFM today... maybe have something into testers hands today too. We'll see how it goes.
 