Do workspace CPUs have multiple cores?



I’m using Cloud9 as a testing environment for some scripts. An important aspect of their final performance is that they are multi-threaded and use multiple cores.

Problem solved. Yes, there are eight logical cores.


Hi @cardeme, I’m just curious: how did you figure it out? Thanks!


One way to figure out how many cores a Cloud9 workspace has is by executing this Go code:

package main

import (
	"fmt"
	"runtime"
)

func main() {
	// NumCPU reports the number of logical CPUs usable by the current process.
	fmt.Println(runtime.NumCPU())
}



Another, easier way is to just open up htop. :slight_smile:


Those cores are virtual, aren’t they? Meaning you share a physical core with other instances and you get certain CPU cycles assigned, right?


4 physical cores, with 2 logical cores per physical one, so 8 threads total. The machines run on two dual-core Xeon processors.


@b2m9 is spot on.

On anything except a physical machine, you cannot depend on what the system reports as the available cores.

In our case we’re using the Linux kernel’s CPU cgroups to manage CPU utilization between instances. So htop, /proc/cpuinfo, etc. will show the number of cores of the host machine, but what you’re able to use will depend on many factors: whether you’re a free or premium user, how many CPU shares you’ve used over the past X minutes as a rolling window, and other metrics we check.
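As a rough illustration of how a cgroup CPU limit translates into usable cores (the quota/period model here is the cgroup v1 CFS interface; the exact values and paths on any given instance are not something this sketch can know):

```go
package main

import "fmt"

// effectiveCPUs estimates how many CPUs a cgroup may use from its
// CFS quota and period (both in microseconds). A quota of -1 means
// "unlimited", i.e. all host CPUs are available.
func effectiveCPUs(quotaUs, periodUs int64, hostCPUs float64) float64 {
	if quotaUs < 0 || periodUs <= 0 {
		return hostCPUs
	}
	cpus := float64(quotaUs) / float64(periodUs)
	if cpus > hostCPUs {
		return hostCPUs
	}
	return cpus
}

func main() {
	// A 50ms quota per 100ms period on an 8-core host allows the
	// equivalent of half a CPU, even though 8 cores are visible.
	fmt.Println(effectiveCPUs(50000, 100000, 8))
	// No quota set: the full host is available.
	fmt.Println(effectiveCPUs(-1, 100000, 8))
}
```

This is why tools that read the hardware (htop, /proc/cpuinfo) and the throughput you actually get can disagree.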

So you do get to run multiple processes in parallel, but the speed at which they run will vary.


Thanks @justin8 for your reply.

I have a question regarding virtual CPUs - forgive me if it’s trivial, but I’m not sure how CPU cycles are assigned among running instances.
If I benchmark algorithms (example: Dijkstra), would I get comparable results or would it depend too much on the CPU cycles? Or are those cycles evenly distributed over time, so my results would give a rough but realistic picture?



Well, we allow bursting of CPU usage as well, so that things are more performant for users; I wouldn’t use it for benchmarking, as the results would be too variable to be worthwhile. It would entirely depend on what you’re testing, how much CPU it requires, how much you used in the past X, Y, and Z time frames, etc. Short, bursty tasks should perform pretty much the same all the time, though, so it really depends on your use case.

For development tasks, however, like running up a dev instance of a service, it’s not going to change much. But if you had been running something highly CPU-intensive for the past 20 minutes and then went to run a benchmark, your results would be lower, since you wouldn’t be able to burst CPU again for a short time.
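One way to get a feel for how variable the instance currently is: time the same CPU-bound workload several times and compare the spread. This is just a sketch with an arbitrary busy loop standing in for a real benchmark kernel (a Dijkstra run, say); nothing here is specific to Cloud9.

```go
package main

import (
	"fmt"
	"time"
)

// busyWork is a stand-in CPU-bound task; any real benchmark
// kernel could go here instead.
func busyWork(n int) int {
	sum := 0
	for i := 0; i < n; i++ {
		sum += i % 7
	}
	return sum
}

func main() {
	var min, max time.Duration
	for run := 0; run < 5; run++ {
		start := time.Now()
		busyWork(10_000_000)
		d := time.Since(start)
		if run == 0 || d < min {
			min = d
		}
		if d > max {
			max = d
		}
	}
	// On a throttled or shared instance the slowest/fastest ratio can be
	// large; on dedicated hardware the runs should be close together.
	fmt.Printf("fastest: %v, slowest: %v\n", min, max)
}
```

If the spread is small over repeated runs, a rough relative comparison between algorithms may still be meaningful; if it is large, the instance is being throttled or is bursting, and the numbers say more about scheduling than about your code.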

If you want to know more about how cgroups and CPU shares work, Red Hat has excellent documentation on it:


Thanks! This is very helpful.