Message boards : Projects : Separated stats for CPUs and GPUs
Joined: 17 Dec 08 | Posts: 2
Hi, I'm not sure this is the right place to ask this, so please redirect me if it isn't. Milkyway@home currently makes BOINC statistics ridiculous. Using optimized clients built by the community, it is possible to achieve 60,000 credits a day with a single ATI Radeon HD 4850; the fastest ATI cards reach 100,000 a day. This makes other projects unappealing (at least for those who pay more attention to the credits than to the science). So please do something about this problem. One idea might be to keep separate statistics for CPUs and GPUs. But even then there would be a problem, because Milkyway grants much more credit than, e.g., GPUGRID.
Joined: 30 Mar 09 | Posts: 6
I'm afraid I'm going to have to disagree with you. Let me present the opposing viewpoint.

Sure, optimized apps or (especially) GPU apps make non-GPU projects less appealing. But, in my opinion, they should. Leaving aside people who see BOINC as some kind of contest, I assume that most of us are here to contribute to some sort of research. Credit is a measure of how much work you're contributing. With optimized or GPU apps, the time/capacity/electricity/money you or I contribute gets more bang for the buck. If I give money to a charitable organization, I'd like to know that my money is being used efficiently and effectively. Same thing here: I'd prefer that the computing hours I donate are being used effectively.

Optimized apps and GPU apps don't just generate more credits; they do a lot more work. In the case of GPU apps, they do many times as much work as an equivalent CPU app. From what I'm seeing on various projects, GPU apps are doing about 10 to 100 times as much processing, depending on the CPU and GPU.

Here's an example. CPDN runs huge CPU workunits that can generate up to 25,000 credits per WU and take up to 100 days of CPU time even on a fast CPU. If another (fictitious) similar climate project appeared that was written to use GPUs instead, and could do the same work in 5 days, which project would be making better use of your equipment, assuming you could run the GPU application? The new GPU project would be doing 20 times the work of CPDN. Why shouldn't it be serving up 20 times the credits? All other things being equal, it's a more worthy project to which to donate my resources.

Now, if you're saying that MW is serving up *excessive* credits for the work being done, that's different, and I would agree with you. But if your point is simply that projects that do more work, whether through smarter coding (optimized apps) or by utilizing supercomputer-like hardware (GPUs), shouldn't earn correspondingly more credit, then I have to disagree. More work is getting done in those circumstances, and credit is (theoretically) allocated based upon work done. If a particular WU generates X credits, it should do so regardless of whether it takes a month to crunch on one machine or a minute on another.

Yes, I know there's a lot of grey area here, but this is my opinion. Which, of course, is worth no more and no less than anyone else's.

Mike
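For concreteness, the arithmetic in the CPDN example above works out as follows (the 25,000-credit and 100-day figures come from the post; the 5-day GPU runtime is the post's own hypothetical):

```latex
% Worked version of the CPDN comparison: credit per WU is held fixed,
% so daily credit scales with throughput. All figures are from the post.
\[
  \text{speedup} = \frac{100\ \text{days}}{5\ \text{days}} = 20
\]
\[
  \text{CPU rate} = \frac{25{,}000\ \text{credits}}{100\ \text{days}} = 250\ \tfrac{\text{credits}}{\text{day}},
  \qquad
  \text{GPU rate} = \frac{25{,}000\ \text{credits}}{5\ \text{days}} = 5{,}000\ \tfrac{\text{credits}}{\text{day}}
\]
```

With credit per WU held constant, the GPU host's 20x throughput translates directly into 20x the daily credit, which is exactly the proportionality being defended here.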
Joined: 19 Jan 07 | Posts: 1179
If the optimized app had been there since the beginning, it would grant credits matching other projects, and the non-optimized app would give less. If instead they start with an app with horribly bad performance, set credits to match other projects, and *then* optimize it, credits go through the roof. It makes no sense at all.
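In other words, the outcome depends entirely on which version of the app the credit rate was calibrated against. A minimal sketch, assuming a hypothetical speedup factor s (the value 20 below is illustrative, not a measured number):

```latex
% Let r_base be the credits/hour of the unoptimized app, calibrated to
% match other projects, and let s be the optimization speedup.
% Credit per WU stays fixed, so the optimized app earns:
\[
  r_{\text{opt}} = s \cdot r_{\text{base}}
  \qquad (\text{e.g. } s = 20 \;\Rightarrow\; r_{\text{opt}} = 20\, r_{\text{base}})
\]
% Calibrating against the optimized app instead would keep r_opt in
% line with other projects, with the slow app earning r_opt / s.
```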
Joined: 16 Nov 08 | Posts: 28
> Now, if you're saying that MW is serving up *excessive* credits for the work being done, that's different, and I would agree with you.

Hi Michael! I wonder what that statement is based on. Flops per credit?

Tomas
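For what it's worth, BOINC's nominal credit unit (the cobblestone) does fix a flops-to-credit baseline: one credit is defined as 1/200 of a day of work on a reference host sustaining 1 GFLOPS on the Whetstone benchmark, so:

```latex
% BOINC's cobblestone definition: 1 credit = 1/200 day of work on a
% reference host doing 10^9 floating-point operations per second.
\[
  1\ \text{credit} = \frac{86{,}400\ \text{s}}{200} \times 10^{9}\ \tfrac{\text{FLOP}}{\text{s}}
  \approx 4.32 \times 10^{11}\ \text{FLOP}
\]
```

Whether MW's awards are excessive would then come down to how far its credit per WU exceeds this baseline for the flops its WUs actually perform.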
Copyright © 2025 University of California.
Permission is granted to copy, distribute and/or modify this document
under the terms of the GNU Free Documentation License,
Version 1.2 or any later version published by the Free Software Foundation.