Forums : General Topics : Some useful BOINC things
poppinfresh99 (Joined: 1 Mar 22, Posts: 21, Credit: 1,234,195, RAC: 0)
I just joined this project. I've never done BOINC multithreading (mt) tasks before! Some useful BOINC things regarding this project that I figured out...

(1) The number of cores that a task will use is set at DOWNLOAD time, and is determined by the BOINC "% CPUs used" computing setting (and also by this project's "max # of CPUs" setting). (See the sketch below this post.)

(2) On a hyper-threading CPU, you should use the number of PHYSICAL cores (not logical cores). This is how parallel computing on a CPU works. A test I did confirmed this: all tasks have the same work content, but the one using all my logical cores (4 cores) finished only VERY SLIGHTLY faster than the one using all my physical cores (2 cores). However, if you care about credit more than you should, you can double the credit you get by setting the cores to all your logical cores, because credit is proportional to the number of cores (and runtime). See runtime-based credit here: https://boinc.berkeley.edu/trac/wiki/CreditOptions

(3) On my 64-bit Intel macOS machine, each task uses almost 3 GB of RAM (regardless of cores), and all tasks run inside VirtualBox (vbox) on my macOS!

(4) Tasks don't have checkpoints, which is okay because they are short. Strangely, the elapsed time doesn't reset when a task has to start over (which happens after a suspend if the "leave tasks in memory while suspended" setting is NOT set), and that gets you extra credit because credit is based on runtime!

(5) If you abort a task, the workunit fails (no resends). No pressure!

I only kinda care about the "hyper-threading credit bug" I mentioned, but I am worried about workunits failing if/when I abort tasks!!? Hope this helps. If I got anything wrong, let me know! If you know more about the bugs I found, let me know!
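A minimal sketch of the rule described in point (1), assuming the thread count is simply the "% CPUs used" fraction of the logical CPUs, rounded down and capped by the project's "max # of CPUs" setting. This is a paraphrase of the post for illustration, not actual BOINC client code, and the rounding behavior is an assumption:

```cpp
// Toy paraphrase of point (1) above -- NOT real BOINC client code.
// Assumes: threads = floor(logical_cpus * pct/100), capped by the project
// "max # of CPUs" setting, and fixed at the moment the task is downloaded.
#include <algorithm>
#include <cmath>
#include <cstdio>

int threads_at_download(int logical_cpus, double pct_cpus_used, int project_max_cpus) {
    int from_pref = std::max(1, (int)std::floor(logical_cpus * pct_cpus_used / 100.0));
    return std::min(from_pref, project_max_cpus);
}

int main() {
    // Example: 4 logical CPUs, "% CPUs used" = 50%, project max = 4  ->  2 threads
    std::printf("%d\n", threads_at_download(4, 50.0, 4));
    // Lowering "% CPUs used" later does not change tasks that are already downloaded.
    return 0;
}
```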
poppinfresh99 (Joined: 1 Mar 22, Posts: 21, Credit: 1,234,195, RAC: 0)
When doing parallel computing on a CPU, hyper-threading doesn't really help because all the threads are trying to do the same sort of things at the same time. The idea of hyper-threading is to allow a thread to use DIFFERENT parts of the CPU. So I set this project's "Max # CPUs" setting to 2 (my physical cores). But, as an experiment, I set my BOINC "% of the CPUs" computing setting to 100% (my 4 logical cores) so that it would run two 2-core tasks at a time. This increased the rate of work SLIGHTLY compared to running a single 4-core task (runtimes were a bit less than double the 4-core runtimes). It also increased my BOINC RAM usage to 6 of 8 GB (though no swap was used)... This all makes sense.

However, the weirdest thing is that I was getting about 42% more credit than expected based on just runtime (see the quick check after this post)! See the table below. The times and credit in the "2+2 cores" row are for just 1 of the 2 tasks being run, but the two final columns describe both tasks.

cores | runtime (s) | CPU time (s) | credit | work | (total credit)/runtime
---|---|---|---|---|---
2 | 854.71 | 1,576.41 | 30.67 | 1 | 0.036
4 | 770.4 | 2,420.08 | 53.85 | 1 | 0.070
2+2 | 1,369 | 2,285.77 | 69.93 | 2 | 0.102

Even with 2+2 cores, I conclude that you don't really increase your work rate. I also conclude that, if you want to REALLY cheat to get credits, use a CPU that supports hyper-threading, then have 2 tasks running, each using your number of physical cores.
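A quick arithmetic check of the "~42% more credit than expected" figure, using only the numbers from the table above (no BOINC internals assumed):

```cpp
// Recompute the credit-per-second rates from the table above and compare one of
// the side-by-side 2-core tasks against a 2-core task run alone.
#include <cstdio>

int main() {
    double rate_2core_alone = 30.67 / 854.71;  // credit per second, single 2-core task
    double rate_2plus2_each = 69.93 / 1369.0;  // credit per second, one of the two 2-core tasks
    double ratio = rate_2plus2_each / rate_2core_alone;
    std::printf("2-core alone : %.4f credit/s\n", rate_2core_alone);
    std::printf("2+2 per task : %.4f credit/s\n", rate_2plus2_each);
    std::printf("ratio        : %.2f (about %.0f%% more)\n", ratio, (ratio - 1.0) * 100.0);
    return 0;
}
// Prints a ratio of roughly 1.42, i.e. about 42% more credit per second of runtime.
```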
Jim1348 (Joined: 17 Nov 14, Posts: 136, Credit: 5,413,463, RAC: 0)
> (2) On a hyper-threading CPU, you should use the number of PHYSICAL cores (not logical cores). This is how parallel computing on a CPU works. A test I did confirmed this: all tasks have the same work content, but the one using all my logical cores (4 cores) finished only VERY SLIGHTLY faster than the one using all my physical cores (2 cores).

(1) On all my machines, I see an expected reduction in run time from increasing the number of virtual cores. On this, as on most BOINC projects, the use of two virtual cores increases the output (decreases the run time) by about 30% to 40% compared to a single full core.

(2) BOINC reports virtual cores as "cores", since that is what they look like to the OS. I think the terminology is appropriate.
poppinfresh99 (Joined: 1 Mar 22, Posts: 21, Credit: 1,234,195, RAC: 0)
Interesting. I got a 10% reduction when running all 4 of my logical cores. If running 2+2 cores (2 tasks, each using 2 cores), I get an effective 20% reduction in time. My CPU is an i5-3210M. Perhaps the "M" at the end has something to do with my crappier performance?
Jim1348 (Joined: 17 Nov 14, Posts: 136, Credit: 5,413,463, RAC: 0)
> Interesting. I got a 10% reduction when running all 4 of my logical cores. If running 2+2 cores (2 tasks, each using 2 cores), I get an effective 20% reduction in time.

I ran only Ryzen 3000's (usually 3600) on Cosmology. They have a lot of cache, which helps. I don't know much about the mobile chips, but I expect that they worry a lot more about thermal management. I expect some power control program causes them to downclock when too many cores operate.
poppinfresh99 (Joined: 1 Mar 22, Posts: 21, Credit: 1,234,195, RAC: 0)
Thanks Jim! If you have enough RAM, here is a way for anybody with multiple cores to cheat to get many more credits. I'm posting this at the bottom of the thread as I feel it would be a faux pas to make it its own thread.

We can exploit a bug: credits are based on the number of cores that the task was ORIGINALLY sent with (the "Device peak FLOPS" is only set when the task was downloaded). Though the TRUE bug is that credit per task is not a fixed amount! Combined with the credit bug previously mentioned in this thread, this can do a lot. (There is a toy model of this after this post.) Steps...

- Do not set a max # CPUs in this project's settings. Also, Resource Share cannot be set to 0. You'll keep these settings the whole time.
- Set your computing settings to run 100% of CPUs.
- Download a BUNCH of tasks (many days of work). These are locked in forever to get credit as if running your max cores.
- Change the "100% CPUs" setting to something that gives 1 or 2 cores or, if you don't have enough RAM, some other larger fraction of your total cores. This changes nothing until the next step.
- Download a couple more tasks (you'll have to increase your "days of work" setting a bit). This resets the number of cores that BOINC thinks the tasks from this project are running.
- Set "no new tasks". If you really want, abort the new tasks that were most recently downloaded. Due to this project's settings, aborting a task causes the workunit to fail, but that's not our fault!
- If you want and have the RAM to run more tasks, set it back to "100% CPUs" or whatever! You could also have done something similar by changing the "max # CPUs" project setting the whole time while leaving "100% CPUs" set.
- When you've processed all the tasks, repeat.

When you download the final few tasks, BOINC will update the previously downloaded tasks to use the reduced number of CPU cores. The BOINC Manager GUI may not fully show that they've changed until you fully restart the BOINC client, but they have been reduced for all tasks. The task that was running during the final download will keep trying to use the max cores, but all the other tasks will try to use the reduced number of cores. However, the tasks will get the number-of-cores credit multiplier that they were ORIGINALLY sent with!

There is a related bug: even if you reset the RAM of an already-started task (have it start over, for example), it doesn't always try to use the number of cores that BOINC thinks it is using. Instead, the task always tries to use the number of cores that it was using when it first started running. This can cause 1 task to use all your cores while BOINC runs other tasks that try to use even more cores. This might be related to the bug where elapsed time does not reset when a task restarts?
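To make the logic of the claimed bug explicit, here is a toy model of it. The formula, the function name, and the constant k (fitted roughly from the 2-core row of the earlier table) are all invented for illustration; this is not BOINC's actual credit code:

```cpp
// Hypothetical model of the claimed bug: granted credit scales with the core
// count recorded at DOWNLOAD time (the "Device peak FLOPS" snapshot), not with
// the cores the task actually uses later. Names and the constant k are invented;
// k ~ 30.67 / (2 * 854.71) ~ 0.018, from the 2-core row of the earlier table.
#include <cstdio>

double claimed_credit(double elapsed_seconds, int cores_at_download, double k = 0.018) {
    return k * cores_at_download * elapsed_seconds;
}

int main() {
    // Task downloaded while "100% CPUs" (4 cores) was set, later run on 2 cores:
    // under this model it still gets the 4-core multiplier.
    std::printf("credit ~ %.1f\n", claimed_credit(854.71, 4));
    // Same runtime, but downloaded after dropping the setting to 2 cores:
    std::printf("credit ~ %.1f\n", claimed_credit(854.71, 2));
    return 0;
}
```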
Jim1348 (Joined: 17 Nov 14, Posts: 136, Credit: 5,413,463, RAC: 0)
> Thanks Jim!

I don't even look at credits, and it is unfortunate that you have used anything I said for that.
poppinfresh99 (Joined: 1 Mar 22, Posts: 21, Credit: 1,234,195, RAC: 0)
What made you think I used anything you said? As I explained, I didn't want to start a new thread. I also don't care enough about credits to regularly do the steps I gave. The science behind CMB anisotropy and cosmology in general is more interesting. I also find parallel computing interesting, so I was figuring out how BOINC manages multithreading. Though I feel that this project could easily fix a few bugs if they care to. I'm like a white hat hacker, I guess...
poppinfresh99 (Joined: 1 Mar 22, Posts: 21, Credit: 1,234,195, RAC: 0)
I found the answer to one of my questions: https://www.cosmologyathome.org/faq.php#does-it-hinder-cosmologyhome-if-i-abort-jobs

We can freely abort jobs! Also, for everything in this thread, I've only been running the camb_boinc2docker application.
Nflight (Joined: 4 Aug 07, Posts: 7, Credit: 1,691,791, RAC: 0)
A new problem has arisen; I am seeing this in the 'Event Log'!

3/6/2022 8:17:36 AM | Cosmology@Home | [error] exceeded limit of 800 slot directories

Also in another project on the same computer; anyone else seeing this?

3/6/2022 8:17:36 AM | Moo! Wrapper | [error] exceeded limit of 800 slot directories

Been crunching 21 years and this is new for me!!
poppinfresh99 (Joined: 1 Mar 22, Posts: 21, Credit: 1,234,195, RAC: 0)
In my BOINC data folder (https://boinc.berkeley.edu/wiki/BOINC_Data_directory), I never have more than several folders in the slots folder. The following thread suggests updating (or at least changing) your BOINC version: https://boinc.berkeley.edu/dev/forum_thread.php?id=10501
.clair. (Joined: 4 Nov 07, Posts: 655, Credit: 18,386,122, RAC: 69,917)
> A new problem has arisen; I am seeing this in the 'Event Log'!

One idea I had: you may have a build-up of dead VMs. Have a look in VirtualBox itself for zombie virtual machines that have become unreachable. Click on the desktop icon for VirtualBox; green is OK, red is dead. Delete the dead ones.
poppinfresh99 (Joined: 1 Mar 22, Posts: 21, Credit: 1,234,195, RAC: 0)
*If* this project wants to give a fixed amount of credit, it's very easy. Add a single line

wu.canonical_credit = 25;

to the work generator's CPP file and recompile it (a rough sketch of where this goes is after this post). Then add the --credit_from_wu option to the validator in config.xml. Then run:

bin/stop
bin/start

By the way, setting up a *local* BOINC server on Linux (Ubuntu) is relatively easy and lets me play around with BOINC. Via your router, just give your computer (I used an old laptop) a local IP address such as 192.168.0.201, then follow these instructions: https://boinc.berkeley.edu/trac/wiki/ServerIntro

I ignored the Docker and boinc-server-maker stuff (the Docker thing is written by this project's administrator lol). The trick is, when running make_project, to add the following option:

--url_base http://192.168.0.201/

(or whatever your static local IP is). If you follow the instructions, a test application is installed, which you can modify and/or play around with!
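A rough sketch of where that line would sit in a work generator, modeled on BOINC's stock sample_work_generator.cpp. The template names are placeholders and the create_work() arguments may differ between server versions, so treat this as a sketch rather than a drop-in patch:

```cpp
// Sketch only: fixed credit per workunit in a BOINC work generator.
// Modeled loosely on sample_work_generator.cpp; exact fields/arguments vary by version.
#include "boinc_db.h"
#include "backend_lib.h"
#include "sched_config.h"

extern SCHED_CONFIG config;     // loaded by the daemon at startup
extern char* in_template;       // workunit (input) template read earlier

int make_job() {
    DB_WORKUNIT wu;
    char name[256] = "example_wu_name";   // normally generated per job
    wu.clear();
    // ... fill in wu.appid, wu.name, input files, FLOPs estimates, etc. as usual ...

    wu.canonical_credit = 25;   // fixed credit, honored only if the validator
                                // runs with the --credit_from_wu option

    const char* infiles[] = { name };
    return create_work(
        wu,
        in_template,
        "templates/camb_out",                       // placeholder result template name
        config.project_path("templates/camb_out"),  // path to that template
        infiles, 1,
        config
    );
}
```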
poppinfresh99 (Joined: 1 Mar 22, Posts: 21, Credit: 1,234,195, RAC: 0)
I just came across this: https://boinc.berkeley.edu/trac/wiki/CreditNew

The credit system for BOINC is VERY complicated, but it seems that each computer should be getting the same credit per task on average (I get around 50 credits per task). So, if everyone else is getting around 50 credits per task, maybe this project doesn't need to give fixed credit per task.
poppinfresh99 (Joined: 1 Mar 22, Posts: 21, Credit: 1,234,195, RAC: 0)
Until now, I had only run this project on macOS. Today, I ran camb_boinc2docker on Windows on the same i5-3210M CPU that macOS was running on. Unlike on my macOS, tasks on Windows...

- take about 33% more time
- use very little RAM! No more than 40 MB! This is nothing compared to the 3 GB on macOS
- 4-thread tasks consistently result in "VM job unmanageable" on the 2-physical-core (4-logical-core) CPU. 3-thread tasks sometimes work on Windows, and 2-thread tasks sometimes don't work. But 4-thread tasks always worked on macOS. This might be due to all the bloat in the Windows OS that uses up lots of resources??
- tasks have checkpoints! Oh wait, no they don't. They just look like they have a checkpoint because progress is made for a few seconds after restarting the task, but it then resets back to 0%.

It seems that the initial tasks on a new, fast computer get around 25 credits, then host normalization brings it to around 50 credits over time???
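A deliberately crude illustration of that host-normalization idea, with an invented weighting scheme; the real CreditNew algorithm is considerably more involved, so this is only a sketch of the observed 25-to-50 ramp:

```cpp
// Crude toy of host normalization: a new host's low claims are granted as-is at
// first, then the grant drifts toward the app-wide per-task average as the host
// accumulates validated results. Invented weighting; NOT the real CreditNew code.
#include <algorithm>
#include <cstdio>

int main() {
    const double app_avg_credit = 50.0;  // app-wide average granted credit per task
    const double host_claim     = 25.0;  // what this (new, fast) host claims per task
    int validated = 0;                   // validated results so far for this host

    for (int task = 1; task <= 10; task++) {
        // Toy rule: trust the cross-host average more as history accumulates.
        double w = std::min(1.0, validated / 5.0);
        double granted = (1.0 - w) * host_claim + w * app_avg_credit;
        std::printf("task %2d: granted %.1f credits\n", task, granted);
        validated++;
    }
    return 0;
}
// Prints grants ramping from 25.0 up to 50.0 over the first handful of tasks.
```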