Message boards : BOINC client : NVIDIA CUDA
Joined: 23 Jan 06 | Posts: 2
http://developer.nvidia.com/object/cuda.html
http://www.nvidia.com/object/IO_37226.html

Nvidia has developed a way to put its graphics processors to better use, as massively parallel processors, or something like that, as far as I can understand. Is this information correct? Did I understand it right?

If I did, will BOINC take advantage of this and release a specific client version that can execute on a graphics card instead of a plain CPU? Will I be able to offload my CPU and put some strain on the graphics processor instead? How fast would it be, compared to recent home-consumer processors? Will I have to buy a specific graphics card, or can someone rewrite this thing so it runs on any reasonably modern graphics processor, not necessarily NVIDIA's?

I hope someone can shed some light on this...
Joined: 19 Jan 07 | Posts: 1179
It would need a rewrite of the science application, not a BOINC modification, and it wouldn't necessarily be faster than a CPU; that depends on the case. Science apps can use the GPU with current BOINC versions with no problem.
Joined: 29 Mar 07 | Posts: 3
Nicolas wrote:
It would need a rewrite of the science application, not a BOINC modification, and it wouldn't necessarily be faster than a CPU; that depends on the case.

This was interesting! As soon as I read about Nvidia CUDA I thought of BOINC! An affordable computer with one or two fast graphics cards and software adapted to CUDA would be very fast in various projects under BOINC... But what you're saying is that BOINC already takes full advantage of today's fast graphics cards? I have quite an old computer that doesn't calculate very fast, but upgrading my graphics card would speed things up? I thought the most important parts of a computer for fast calculation in, for example, BOINC were the CPU and the amount of RAM?
Joined: 29 Aug 05 | Posts: 304
Nicolas wrote:
It would need a rewrite of the science application, not a BOINC modification, and it wouldn't necessarily be faster than a CPU; that depends on the case.

Currently none of the BOINC projects' science applications are set up to use the graphics card. The BOINC client has flags and hooks that make it possible, but none of the projects have taken advantage of them yet.

Upgrading your graphics card is unlikely to help much in an older computer. To really be effective, a science application needs the bi-directional bandwidth provided by the newer PCIe x16 graphics interface.

You are correct: CPU speed, bandwidth to RAM, and the amount of RAM are the most important things for fast calculations.

BOINC WIKI | BOINCing since 2002/12/8
Joined: 9 Sep 05 | Posts: 128
Roland wrote:
But what you're saying is that BOINC already takes full advantage of today's fast graphics cards?

In short: no.

First of all, you need to understand that the whole BOINC thing is really two things.

The first part is the BOINC client itself. Its job is to communicate with project servers (project as in SETI@Home, Einstein@Home, ...) and to run project applications as needed/desired (e.g. if a user participates in more than one project, it alternates between project applications according to the resource split).

The second part is the project application (some call it the science application). Each project produces an application that does scientific work according to its needs. The only requirement for a project application is that it knows how to communicate with the BOINC client. BTW, this communication happens inside the user's computer and is not subject to firewall rules or the like.

The BOINC client doesn't prevent an application from using any part of the available hardware, which includes the graphics card and its GPU. It is entirely up to the project application to use the available hardware as fully as possible. One example is the use of instructions available in more modern processors, such as the various vector instruction sets (MMX, SSE, SSE2 on Intel/AMD; AltiVec on PowerPC). The BOINC client doesn't need or use them, while nearly all project applications make good use of them.

Metod
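P.S. To make the division of labour concrete, here is a minimal sketch of what a science application's main loop looks like against the BOINC API. boinc_init(), boinc_fraction_done() and boinc_finish() are real API calls; do_chunk_of_work() is a made-up placeholder for the project's actual science, which is free to use SSE, the GPU, or anything else.

```cpp
// Minimal sketch of a BOINC science application (illustrative only, not any
// project's real code). The BOINC client starts this program and talks to it
// through the API calls below; what happens inside each work chunk is
// entirely the project's business.
#include "boinc_api.h"

// Hypothetical placeholder for one slice of the scientific computation.
static void do_chunk_of_work(int step) {
    (void)step;  // real code might run SSE loops, launch a CUDA kernel, ...
}

int main(int argc, char** argv) {
    boinc_init();  // register this process with the local BOINC client

    const int total_steps = 1000;
    for (int step = 0; step < total_steps; ++step) {
        do_chunk_of_work(step);
        // tell the client how far along we are (progress display, scheduling)
        boinc_fraction_done((double)(step + 1) / total_steps);
    }

    boinc_finish(0);  // report success to the client; does not return
}
```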
Joined: 19 Jan 07 | Posts: 1179
Not at all. BOINC doesn't do any calculation. All I meant to say is that IF there were a BOINC project that could take advantage of the graphics card, it wouldn't need changes to BOINC. The project still has to do lots of work on the science application to use the card.

In other words, BOINC doesn't take advantage of the graphics card, because BOINC doesn't compute anything. What I meant is that BOINC is already prepared to work with projects that take advantage of it (none does yet, as far as I know). For example, if a project uses only the GPU to compute and uses little CPU, its application can report to the BOINC client how much computing it did (since BOINC can't measure "GPU time" :D), so that credit is granted accordingly.
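A rough sketch of what that self-reporting could look like, assuming boinc_ops_cumulative() as the reporting hook (the batch accounting is invented for illustration):

```cpp
// Sketch: a GPU-based application telling the BOINC client how much work it
// actually did, since the client can only measure CPU time. Assumes
// boinc_ops_cumulative() as the reporting call; the counts are invented.
#include "boinc_api.h"

static double total_flops = 0;  // floating-point operations done so far

void after_each_gpu_batch(double batch_flops) {
    total_flops += batch_flops;
    // first argument: cumulative FP ops; second: cumulative integer ops
    boinc_ops_cumulative(total_flops, 0);
}
```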
Joined: 19 Jan 07 | Posts: 1179
Metod wrote:
The second part is the project application (some call it the science application). Each project produces an application that does scientific work according to its needs. The only requirement for a project application is that it knows how to communicate with the BOINC client. BTW, this communication happens inside the user's computer and is not subject to firewall rules or the like. The BOINC client doesn't prevent an application from using any part of the available hardware, which includes the graphics card and its GPU. It is entirely up to the project application to use the available hardware as fully as possible.

May I add that, apart from the CPU, GPU, and RAM, science applications can also use network resources and disk space. DepSpid is a non-CPU-intensive project that uses your network extensively to download websites, then uses your CPU to analyze the dependencies between them. Downloading input files and uploading finished workunits on any project could also be seen as the project using your network resources.

BOINC also has a mechanism that lets projects store files on your computer (only in their project directory, not anywhere else on your disk), making for a big distributed storage system. No project currently uses this mechanism.
Joined: 29 Mar 07 | Posts: 3
Nicolas wrote:
Not at all. BOINC doesn't do any calculation. All I meant to say is that IF there were a BOINC project that could take advantage of the graphics card, it wouldn't need changes to BOINC. The project still has to do lots of work on the science application to use the card.

Maybe I was unclear or expressed myself in the wrong way... English isn't my native language. What I meant, of course, was that it would be good if calculations could benefit from CUDA or other similar techniques! Hopefully those who program the projects can implement CUDA, if the benefits are as big as one might get the impression they are...

Regards, Roland
Joined: 19 Jan 07 | Posts: 1179
Roland wrote:
Maybe I was unclear or expressed myself in the wrong way... English isn't my native language.

You did express it the right way; all I said is that BOINC isn't responsible for the calculations. It's up to each project to use CUDA or not, if they can use it at all for their kind of calculations.
Joined: 29 Mar 07 | Posts: 3
I read today that CUDA, it seems, will be used in Folding@Home? The Sony PS3 is also being used to do something useful ;) http://folding.stanford.edu/FAQ-PS3.html Read more about it here: http://folding.stanford.edu/FAQ-ATI.html Maybe old news for you all, but very interesting! I never thought I would buy a PS3, but maybe I have to reconsider?

Regards, Roland
Joined: 16 Aug 07 | Posts: 1
Here is a benchmark of CUDA against the latest Intel processors, the QX6850 & E6850: http://www.hardware.fr/articles/678-7/nvidia-cuda-plus-pratique.html (I'm sorry, it's in French.)

VMD, the Visual Molecular Dynamics tool, has been optimised for CUDA, and the results are striking: a single Nvidia 8800 GTX is about 17 times faster than an E6850 Intel processor (32.8 vs. 1.9 billion atom evaluations per second). CPUs are completely owned by the Nvidia cards.

| Hardware | Billion atom evaluations per second |
|---|---|
| QX6850 | 2.6 |
| E6850 | 1.9 |
| 3 x 8800 GTX | 98.2 |
| 2 x 8800 GTX | 65.6 |
| 1 x 8800 GTX | 32.8 |

The author: http://www.ks.uiuc.edu/~johns/ http://courses.ece.uiuc.edu/ece498/al/
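The reason the gap is so large is that each atom evaluation is independent, so the GPU can run thousands of them at once. Here is a toy CUDA kernel in that spirit (not VMD's actual code; the grid layout and the charge-in-w convention are made up for illustration):

```cpp
#include <cuda_runtime.h>
#include <math.h>

// Toy direct-summation kernel (not VMD's real code): one thread per grid
// point, each summing the Coulomb contribution of every atom. This
// one-thread-per-point structure is what lets a GPU reach billions of atom
// evaluations per second.
__global__ void coulomb_grid(const float4* atoms, int n_atoms,
                             float* grid, int n_points, float spacing) {
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i >= n_points) return;

    float x = (i % 256) * spacing;   // toy layout: grid is 256 points wide
    float y = (i / 256) * spacing;
    float v = 0.0f;

    for (int a = 0; a < n_atoms; ++a) {      // every thread visits all atoms
        float dx = x - atoms[a].x;
        float dy = y - atoms[a].y;
        float dz = -atoms[a].z;              // grid plane sits at z = 0
        float r = sqrtf(dx*dx + dy*dy + dz*dz) + 1e-6f;  // epsilon avoids /0
        v += atoms[a].w / r;                 // w component holds the charge
    }
    grid[i] = v;
}

// Launch sketch:
// coulomb_grid<<<(n_points + 255) / 256, 256>>>(d_atoms, n_atoms,
//                                               d_grid, n_points, 0.5f);
```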
Joined: 8 Sep 07 | Posts: 2
CUDA is great, and I would definitely like it if projects updated to use the vast power of GPUs. I have two 8600 GTS cards in SLI and leave my computer on 24x7 with no thermal issues.
Joined: 28 Oct 07 | Posts: 48
I could not get a CUDA version of VMD or NAMD for Windows. Any idea if that is in the works? |
Joined: 10 Aug 08 | Posts: 18
25.10.2008 10:27:27||Starting BOINC client version 6.3.17 for windows_x86_64
...
25.10.2008 10:27:27||CUDA devices found
25.10.2008 10:27:27||Coprocessor: GeForce 9800 GT (2)

But I have: 1st, a 9600GSO; 2nd, a 9800GT!!!

P.S. 6.3.14 does the same.
Joined: 29 Aug 05 | Posts: 15585
For the moment that's by design. As per [trac]changeset:15915[/trac]:

"- client: if the host has two CUDA GPUs, they were being recorded as two COPROC structures of type CUDA. Unfortunately, the logic doesn't handle this correctly; it expects there to be a single structure with count==2. Change things to do this. Unfortunately this means that if the two GPUs are different, that difference will get lost. This is a design flaw, and would take some work to fix."
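A sketch of the structure the changeset is describing (simplified; only "count" is named in the changeset, the other fields are guesses), which shows why two different boards collapse into one entry:

```cpp
// Simplified sketch of the client's coprocessor record as described in the
// changeset above; only "count" appears there, the rest is guessed.
struct COPROC {
    char type[64];   // e.g. "CUDA" -- one record per coprocessor TYPE
    int  count;      // number of devices of this type, e.g. 2
    // ...properties of a single representative device follow, so with a
    // 9600GSO and a 9800GT installed, only one board's name can survive.
};
```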
Copyright © 2025 University of California.
Permission is granted to copy, distribute and/or modify this document
under the terms of the GNU Free Documentation License,
Version 1.2 or any later version published by the Free Software Foundation.