Son Goku
No lover of dogma
- Joined
- 14 Jun 2004
- Messages
- 1,980
OK, a few things:
- The electric bill for running a computer 24/7 (which I've been doing since about 1996) is rather negligible. The real electricity hog is the CRT. Simple solution: leave the computer running, but turn the monitor off when it's not in use. The difference between running my 19" CRT all the time vs. just when I'm using it is about $10-$12 here in NM...
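Just to put rough numbers on that: here's a minimal cost calculator, assuming a 19" CRT draws somewhere around 100 W and a residential rate of around 10 cents/kWh (both figures are assumptions for illustration, not measured values; your monitor and rate will vary).

```python
# Sketch of the "monitor off when not in use" savings math.
# Wattage and rate are assumed example numbers, not measurements.

def monthly_cost(watts, hours_per_day, cents_per_kwh, days=30):
    """Dollar cost of running a device for a month."""
    kwh = watts / 1000 * hours_per_day * days
    return kwh * cents_per_kwh / 100

always_on = monthly_cost(100, 24, 10)  # CRT left on 24/7
in_use = monthly_cost(100, 8, 10)      # CRT on only ~8 h/day
savings = always_on - in_use           # what turning it off buys you
```

Depending on the actual wattage and local rate, the savings land in the same general ballpark as the $10-$12 figure above, while the PC itself (minus monitor) adds comparatively little.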
- It might attract more folders. One of the things facing people today (even people who run distributed computing projects) is that since SETI first introduced the concept many years ago (followed, of course, by Folding and some other well-known ones), new projects have been springing up rather rapidly.
Among the biological projects relevant here, we have:
* Folding@home
* Predictor@home
* Rosetta@home (a new one that came out of beta not that long ago)
Each of the above 3 projects will explain why it feels its work is important, but all 3 acknowledge that they are in fact inter-related: they all concern protein folding, though their approaches, the specific questions they ask, and their methodologies vary slightly.
Then there are a myriad of other projects. One main reason I've run BOINC of late is that it can time-share between multiple projects. That won't matter much once Folding gets their BOINC client out there, but it's still in late alpha or early beta, and those who have mentioned it before have sort of asked people not to make a big deal of it yet. Folding's BOINC client is still a ways off from public release...
That's also a reason to go multi-core: more time to share among projects that seem worthy of some crunch time...
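The time-sharing above works by assigning each attached project a resource share and splitting CPU time proportionally. Here's a minimal sketch of that idea; the project names come from the list above, but the share numbers are made up for illustration.

```python
# Sketch of resource-share time-slicing between attached projects.
# Shares are relative weights; the hypothetical numbers below give
# SETI half the CPU time and the two folding-related projects a
# quarter each.

def split_cpu_time(shares, total_hours):
    """Divide total_hours among projects proportionally to their share."""
    total = sum(shares.values())
    return {name: total_hours * s / total for name, s in shares.items()}

hours = split_cpu_time(
    {"Predictor@home": 100, "Rosetta@home": 100, "SETI@home": 200},
    total_hours=24,
)
# Predictor and Rosetta each get 6 h/day; SETI gets 12 h/day.
```

A second core just doubles `total_hours`, which is exactly why more cores mean more projects can get a meaningful slice.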
- If we really wanted to get down to the nitty-gritty of what would be fair, there would be a lot of variables, and the number of machines wouldn't be the only one. For that matter, do people really have to declare their work machines as existing at work on the project page? Hell, I don't even remember whether the Folding page had different profiles for work and home, but one could declare a PC as being anywhere...
There's also:
* Dual-core procs vs. uni-core procs. People with X2s would obviously have a leg up over people with uni-core procs, since a dual-core proc is essentially two processors in one package.
* SMP systems, aka computers with more than 1 CPU
* clock rate differences (e.g. someone's Pentium II 400 vs. someone's 2.4 GHz Athlon 64)
* but even more to the point, processor generations. Obviously an Athlon 64 is going to fare better than an AXP, clock for clock. Heck, is a clock-per-clock comparison of AMD vs. Intel even fair? It's obvious that with the on-die memory controller and other such enhancements, a 2.2 GHz Athlon 64 is != a 2.2 GHz Pentium 4 Northwood.
* Amount of L2 cache might even make some difference (but I'd have to see the stats on this, to see how much cache memory helps)
* RAM differences. Someone with 256 MB in a Win98 machine isn't going to be able to request one of those extra-large 600+ point WUs...
* Different operating systems might give slight differences
* More CPUs might not mean so much if people are throwing old 486s and Pentiums (or whatever the lowest-level processor the Folding client runs on) into the mix for a little extra crunch time.
At least Folding doesn't add the extra dimension of variation that BOINC does: BOINC isn't even restricted to the PC platform, but (being open source) can be compiled for just about anything, from x86 to PowerPC (with OS X, for instance) to DEC Alpha to Sun UltraSPARC. People have even asked about writing distributed computing projects in shader language, to use their graphics cards for extra crunch time :laugh:
One could get down into the nitty-gritty of every single detail of a person's PC known to man. Does one really want to go that far, and if one does, is it a competition anymore? Heck, even computer usage will affect this, if one wants to get down to it, as time the proc spends running one's game is time it isn't folding.
Or one could ignore all this, in which case the person who can put it on 1,000 high-end office machines would likely win. But however one counts it, there could be room for the more computer-challenged to complain, "but he has an X2 and I only have 1 proc, why can he count that as one box?", etc. And IMO, there could come a point where the hairs are being split mighty thin...
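If one did want a middle ground between splitting every hair and counting raw boxes, one option is to divide each member's points by a per-machine benchmark score. This is just a sketch of that idea; the point totals and benchmark numbers below are entirely made up.

```python
# Sketch of benchmark-normalized scoring: instead of raw points or
# raw box counts, score = points / benchmarked throughput.
# All numbers here are hypothetical examples.

def normalized_score(points, benchmark):
    """Points earned per unit of benchmarked throughput."""
    return points / benchmark

# A uni-core box vs. an X2: the dual core earns twice the points, but
# it also benchmarks twice as fast, so the normalized scores tie.
uni = normalized_score(points=300, benchmark=1.0)
x2 = normalized_score(points=600, benchmark=2.0)
# uni == x2 == 300.0
```

Of course, this just moves the argument to how the benchmark is measured, which is the hair-splitting problem all over again.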