<?xml version="1.0"?>
<?xml-stylesheet type="text/css" href="http://linux-vserver.at/skins/common/feed.css?303"?>
<feed xmlns="http://www.w3.org/2005/Atom" xml:lang="en">
		<id>http://linux-vserver.at/api.php?action=feedcontributions&amp;feedformat=atom&amp;user=Petzsch</id>
		<title>Linux-VServer - User contributions [en]</title>
		<link rel="self" type="application/atom+xml" href="http://linux-vserver.at/api.php?action=feedcontributions&amp;feedformat=atom&amp;user=Petzsch"/>
		<link rel="alternate" type="text/html" href="http://linux-vserver.at/Special:Contributions/Petzsch"/>
		<updated>2026-04-09T23:42:49Z</updated>
		<subtitle>User contributions</subtitle>
		<generator>MediaWiki 1.20.2</generator>

	<entry>
		<id>http://linux-vserver.at/Memory_Limits</id>
		<title>Memory Limits</title>
		<link rel="alternate" type="text/html" href="http://linux-vserver.at/Memory_Limits"/>
				<updated>2010-05-27T20:18:14Z</updated>
		
		<summary type="html">&lt;p&gt;Petzsch: added note about cgroups&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;== Overview ==&lt;br /&gt;
A vserver kernel keeps track of many resources used by each guest (context). Some of these relate to the guest's memory usage. You can place limits on these resources to prevent guests from consuming all the host's memory and making the host unusable.&lt;br /&gt;
&lt;br /&gt;
Two resources are particularly important in this regard: &lt;br /&gt;
&lt;br /&gt;
* The '''Resident Set Size''' (&amp;lt;code&amp;gt;rss&amp;lt;/code&amp;gt;) is the number of pages currently present in RAM.&lt;br /&gt;
* The '''Address Space''' (&amp;lt;code&amp;gt;as&amp;lt;/code&amp;gt;) is the total number of pages mapped by all the processes in the context.&lt;br /&gt;
&lt;br /&gt;
Both are measured in '''pages''', which are 4 kB each on Intel machines (i386). So a value of 200000 means a limit of 800,000 kB, a little less than 800 MB.&lt;br /&gt;
&lt;br /&gt;
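As a quick sanity check of that arithmetic, the sketch below (assuming the common 4 kB page size) converts a page count to kB and MiB:&lt;br /&gt;

```shell
# sketch: convert an rss/as limit given in pages to kB and MiB,
# assuming 4 kB pages (verify the page size on your own host)
pages=200000
page_kb=4
limit_kb=$(( pages * page_kb ))
limit_mib=$(( limit_kb / 1024 ))
echo "$pages pages = $limit_kb kB (~$limit_mib MiB)"
```
&lt;br /&gt;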
''To easily find out the page size on your host, try this one-liner:''&lt;br /&gt;
&amp;lt;pre&amp;gt;echo 'int main () { printf (&amp;quot;%dKiB\n&amp;quot;, getpagesize ()/1024); return 0; }' | gcc -include stdio.h -include unistd.h -xc - -o getpagesize &amp;amp;&amp;amp; ./getpagesize&amp;lt;/pre&amp;gt;&lt;br /&gt;
Each resource has a '''soft''' and a '''hard limit'''.&lt;br /&gt;
&lt;br /&gt;
* If a guest exceeds the &amp;lt;code&amp;gt;rss&amp;lt;/code&amp;gt; hard limit, the kernel will invoke the Out-of-Memory (OOM) killer to kill some process in the guest.&lt;br /&gt;
* The &amp;lt;code&amp;gt;rss&amp;lt;/code&amp;gt; soft limit is shown inside the guest as the maximum available memory. If a guest exceeds the &amp;lt;code&amp;gt;rss&amp;lt;/code&amp;gt; soft limit, it will get an extra &amp;quot;bonus&amp;quot; for the OOM killer (proportional to the oversize).&lt;br /&gt;
* If a guest exceeds the &amp;lt;code&amp;gt;as&amp;lt;/code&amp;gt; hard limit, memory allocation attempts will return an error, but no process is killed.&lt;br /&gt;
* The &amp;lt;code&amp;gt;as&amp;lt;/code&amp;gt; soft limit is not utilized at present. In the future it may be used to penalize guests over the limit, for example by forcing them to swap.&lt;br /&gt;
&lt;br /&gt;
Bertl explained the difference between '''rss''' and '''as''' with the following example: if two processes share 100 MB of memory, at most 100 MB of physical pages is occupied, so the guest's RSS use increases by 100 MB. However, both processes map it, so the AS use increases by 200 MB. &lt;br /&gt;
&lt;br /&gt;
This suggests that limiting AS is less useful than limiting RSS, since AS does not directly reflect the real, limited resources (RAM and swap) on the host that guests deprive each other of. Bertl says that AS limits can be used to give guests a &amp;quot;gentle&amp;quot; warning that they are running out of memory, but it is not clear how much gentler it is, or how to set it accurately. &lt;br /&gt;
&lt;br /&gt;
For example, 100 processes each mapping a 100 MB file would consume a total of 10 GB of address space (AS), but no more than 100 MB of resources on the host. But if you set the AS limit to 10 GB, then it will not stop one process from allocating 4 GB of RAM, which could kill the host or result in that process being killed by the OOM killer.&lt;br /&gt;
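The accounting in the two examples above can be sketched as plain arithmetic (the process counts and sizes are the illustrative figures from the text):&lt;br /&gt;

```shell
# shared memory counts once for RSS, but once per process for AS
procs=2; shared_mb=100
echo "rss += $shared_mb MB"                 # the pages are resident once
echo "as  += $(( procs * shared_mb )) MB"   # each mapping is charged

# 100 processes each mapping a 100 MB file
mappers=100; file_mb=100
echo "as charged: $(( mappers * file_mb )) MB, host pages used: $file_mb MB"
```
&lt;br /&gt;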
&lt;br /&gt;
== Setting memory limits ==&lt;br /&gt;
'''Beginning with vs2.3.0.36.29 you should use cgroups to set memory limits. More information: [[util-vserver:Cgroups]]'''&lt;br /&gt;
&lt;br /&gt;
You can set the hard limit on a particular context, effective immediately, with this command:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
/usr/sbin/vlimit -c &amp;lt;xid&amp;gt; --&amp;lt;resource&amp;gt; &amp;lt;value&amp;gt;&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;code&amp;gt;&amp;lt;xid&amp;gt;&amp;lt;/code&amp;gt; is the context ID of the guest, which you can determine with the &amp;lt;code&amp;gt;/usr/sbin/vserver-stat&amp;lt;/code&amp;gt; command.&lt;br /&gt;
&lt;br /&gt;
For example, if you want to change the '''rss''' hard limit for the vserver with &amp;lt;code&amp;gt;&amp;lt;xid&amp;gt;&amp;lt;/code&amp;gt; 49000, and limit it to 10,000 pages (40 MB), you could use this command:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
/usr/sbin/vlimit -c 49000 --rss 10000&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
You can change the soft limit instead by adding the &amp;lt;code&amp;gt;-S&amp;lt;/code&amp;gt; parameter.&lt;br /&gt;
&lt;br /&gt;
Changes made with the vlimit command are effective only until the vserver is stopped. To make permanent changes, write the value to this file:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
/etc/vservers/&amp;lt;name&amp;gt;/rlimits/&amp;lt;resource&amp;gt;.hard&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
To set a soft limit, use the same file name with the &amp;lt;code&amp;gt;.soft&amp;lt;/code&amp;gt; extension. The &amp;lt;code&amp;gt;rlimits&amp;lt;/code&amp;gt; directory is not created by default, so you may need to create it yourself.&lt;br /&gt;
&lt;br /&gt;
If you omit the &amp;lt;code&amp;gt;.hard&amp;lt;/code&amp;gt;/&amp;lt;code&amp;gt;.soft&amp;lt;/code&amp;gt; suffix and write to &amp;lt;code&amp;gt;/etc/vservers/&amp;lt;name&amp;gt;/rlimits/rss&amp;lt;/code&amp;gt; itself, the value is used for both the hard and the soft limit.&lt;br /&gt;
&lt;br /&gt;
Changes to these files take effect only when the vserver is started. To make immediate and permanent changes to a running vserver, you need to run &amp;lt;code&amp;gt;vlimit&amp;lt;/code&amp;gt; '''and''' update the rlimits file.&lt;br /&gt;
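Putting the two steps together: a small helper like the following (the set_rss_hard name and VDIR variable are illustrative, not part of util-vserver) writes the rlimits file and calls vlimit:&lt;br /&gt;

```shell
# sketch: make an rss hard-limit change both immediate and permanent;
# the vlimit call only succeeds on a real vserver host, so errors
# from it are ignored here
VDIR=${VDIR:-/etc/vservers}
set_rss_hard() {                  # usage: set_rss_hard NAME XID PAGES
  local name=$1 xid=$2 pages=$3
  mkdir -p "$VDIR/$name/rlimits"                        # dir may not exist yet
  echo "$pages" &gt; "$VDIR/$name/rlimits/rss.hard"        # applied at next start
  vlimit -c "$xid" --rss "$pages" 2&gt;/dev/null || true   # applied right now
}
```
&lt;br /&gt;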
&lt;br /&gt;
The safest setting, to prevent any guest from interfering with any other, is to set the total of all RSS hard limits (across all running guests) to be less than the total virtual memory (RAM and swap) on the host. It should be sufficiently less to leave room for processes running on the host, and some disk cache, perhaps 100 MB.&lt;br /&gt;
&lt;br /&gt;
However, this is very conservative, since it assumes the worst case where all guests are using the maximum amount of memory at one time. In practice, you can usually get away with contended resources, i.e. allowing guests to use more than this value.&lt;br /&gt;
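The conservative sizing rule above works out as simple arithmetic; the figures below (2 GiB RAM, 1 GiB swap, four guests) are purely illustrative:&lt;br /&gt;

```shell
# split (RAM + swap - host reserve) evenly across guests, in pages
ram_kb=2097152      # 2 GiB RAM
swap_kb=1048576     # 1 GiB swap
reserve_kb=102400   # ~100 MiB kept for host processes and disk cache
guests=4
page_kb=4
budget_kb=$(( ram_kb + swap_kb - reserve_kb ))
echo "rss.hard per guest: $(( budget_kb / guests / page_kb )) pages"
```
&lt;br /&gt;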
&lt;br /&gt;
== Displaying current memory limits ==&lt;br /&gt;
To display the currently active RSS limits for a vserver execute the following command:&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
vlimit -c &amp;lt;xid&amp;gt; -a -d | grep RSS&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
The command displays output similar to the following, where the third value (5000) is the soft limit and the last (10000) is the current hard limit.&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
RSS         N/A                  5000             10000&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
== Display memory limits within a vserver ==&lt;br /&gt;
Normally the &amp;lt;code&amp;gt;top&amp;lt;/code&amp;gt; and &amp;lt;code&amp;gt;free&amp;lt;/code&amp;gt; commands display the RAM and swap usage of the host when invoked within a vserver. To change this behavior, add the &amp;lt;code&amp;gt;VIRT_MEM&amp;lt;/code&amp;gt; [[Capabilities_and_Flags#Context_flags_.28cflags.29|context flag]] to your vserver configuration:&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
echo &amp;quot;VIRT_MEM&amp;quot; &amp;gt;&amp;gt; /etc/vservers/&amp;lt;name&amp;gt;/flags&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
After a successful restart of the vserver, the total available RAM will equal the value of &amp;lt;code&amp;gt;rss.soft&amp;lt;/code&amp;gt;, while the difference &amp;lt;code&amp;gt;rss.hard&amp;lt;/code&amp;gt; - &amp;lt;code&amp;gt;rss.soft&amp;lt;/code&amp;gt; will be displayed as swap space.&lt;br /&gt;
&lt;br /&gt;
As an example, if you set the &amp;lt;code&amp;gt;rss.hard&amp;lt;/code&amp;gt; limit to 10'000 pages and the &amp;lt;code&amp;gt;rss.soft&amp;lt;/code&amp;gt; limit to 7'000 pages:&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
vlimit -c &amp;lt;xid&amp;gt; --rss 10000&lt;br /&gt;
vlimit -c &amp;lt;xid&amp;gt; -S --rss 7000&lt;br /&gt;
vlimit -c &amp;lt;xid&amp;gt; -a -d | grep RSS&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
RSS         N/A                 7000                10000&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;code&amp;gt;free&amp;lt;/code&amp;gt; will report 28'000 KB (7'000 pages) of total memory and 12'000 KB (10'000 - 7'000 pages) of Swap space (assuming that one page is 4 KB):&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
free -k&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
             total       used       free     shared    buffers     cached&lt;br /&gt;
Mem:         28000       4396      23604          0          0          0&lt;br /&gt;
-/+ buffers/cache:       4396      23604&lt;br /&gt;
Swap:        12000          0      12000&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
'''Note:''' According to Herbert, the kernel won't use any ''real'' swap as soon as the &amp;lt;code&amp;gt;rss.soft&amp;lt;/code&amp;gt; limit has been reached. Swapping will be done on the host level, not per vserver (see [http://list.linux-vserver.org/archive?mss:653:200801:iiagcodpghkhehndkgjg &amp;quot;free&amp;quot; command inside vserver]).&lt;/div&gt;</summary>
		<author><name>Petzsch</name></author>	</entry>

	<entry>
		<id>http://linux-vserver.at/util-vserver:Cgroups</id>
		<title>util-vserver:Cgroups</title>
		<link rel="alternate" type="text/html" href="http://linux-vserver.at/util-vserver:Cgroups"/>
				<updated>2010-02-19T12:52:19Z</updated>
		
		<summary type="html">&lt;p&gt;Petzsch: /* using cgroup to enforce memory limits */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;
&lt;br /&gt;
== Kernel configuration ==&lt;br /&gt;
&lt;br /&gt;
When configuring your kernel for cgroups with util-vserver you must make sure &amp;lt;tt&amp;gt;CONFIG_CGROUP_NS&amp;lt;/tt&amp;gt; ('''CGroup Namespaces''') is unset for the time being.&lt;br /&gt;
&lt;br /&gt;
'''CGroup Namespaces''' are a different approach to namespaces than that used by Linux vServer, and are not currently supported.&lt;br /&gt;
&lt;br /&gt;
== Prerequisites ==&lt;br /&gt;
&lt;br /&gt;
To use &amp;lt;tt&amp;gt;util-vserver&amp;lt;/tt&amp;gt;'s Control Groups (&amp;lt;tt&amp;gt;cgroups&amp;lt;/tt&amp;gt;) support, you need to have &amp;lt;tt&amp;gt;/dev/cgroup&amp;lt;/tt&amp;gt; mounted.&lt;br /&gt;
&lt;br /&gt;
Recent versions of &amp;lt;tt&amp;gt;util-vserver&amp;lt;/tt&amp;gt; handle this for you: the appropriate mount command is included in the &amp;lt;tt&amp;gt;util-vserver&amp;lt;/tt&amp;gt; &amp;lt;tt&amp;gt;init&amp;lt;/tt&amp;gt; (i.e. runlevel) script shipped with the distribution. However, this apparently only works for the &amp;lt;tt&amp;gt;sysv&amp;lt;/tt&amp;gt; &amp;lt;tt&amp;gt;init&amp;lt;/tt&amp;gt; script, and not the Debian or Gentoo ones.&lt;br /&gt;
&lt;br /&gt;
If you were to mount the &amp;lt;tt&amp;gt;cgroup&amp;lt;/tt&amp;gt; Control Groups filesystem manually, you would use something like:&lt;br /&gt;
: &amp;lt;tt&amp;gt;# mkdir /dev/cgroup&lt;br /&gt;
: # mount -t cgroup -o ''&amp;lt;subsystems&amp;gt;'' none /dev/cgroup&amp;lt;/tt&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Where &amp;lt;tt&amp;gt;''&amp;lt;subsystems&amp;gt;''&amp;lt;/tt&amp;gt; is something like &amp;lt;tt&amp;gt;cpuset,memory&amp;lt;/tt&amp;gt;.&lt;br /&gt;
&lt;br /&gt;
To avoid the need for manual configuration after reboot, on Gentoo you may wish to add the cgroup mount to &amp;lt;tt&amp;gt;/etc/fstab&amp;lt;/tt&amp;gt;.  For Debian see the live examples section at the bottom of this page.&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
none /dev/cgroup cgroup cpu,cpuset,memory 0 0&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
== Draft - Distributing cpu shares with cgroups ==&lt;br /&gt;
&lt;br /&gt;
From what I gathered in sched-design-CFS.txt [http://people.redhat.com/mingo/cfs-scheduler/sched-design-CFS.txt]:&lt;br /&gt;
&lt;br /&gt;
This is simply done by adjusting the cpu.shares. Just do:&lt;br /&gt;
&lt;br /&gt;
echo '512' &amp;gt; /dev/cgroup/&amp;lt;guest name&amp;gt;/cpu.shares&lt;br /&gt;
&lt;br /&gt;
The share a guest gets is equal to its cpu.shares value divided by the sum of the cpu shares of all guests. For example:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
vserver guest 1 =&amp;gt; 512   &lt;br /&gt;
vserver guest 2 =&amp;gt; 512&lt;br /&gt;
vserver guest 3 =&amp;gt; 2048&lt;br /&gt;
vserver guest 4 =&amp;gt; 512&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
So with a total of 3584 cpu shares (2048 + 512 + 512 + 512), you get:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
vserver guest 1 =&amp;gt; 512 / 3584 = 14%  cpu&lt;br /&gt;
vserver guest 2 =&amp;gt; 512 / 3584 = 14%  cpu&lt;br /&gt;
vserver guest 3 =&amp;gt; 2048 / 3584 = 57% cpu&lt;br /&gt;
vserver guest 4 =&amp;gt; 512 / 3584 = 14%  cpu&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Note that this is fair-share scheduling: it does not enforce a hard limit (as far as I know).&lt;br /&gt;
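The share-to-percentage computation above can be reproduced with shell arithmetic:&lt;br /&gt;

```shell
# cpu.shares values from the example, converted to rough percentages
shares="512 512 2048 512"
total=0
for s in $shares; do total=$(( total + s )); done
for s in $shares; do
  echo "$s / $total = $(( 100 * s / total ))% cpu"
done
```
&lt;br /&gt;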
&lt;br /&gt;
== Making share permanent with util vserver ==&lt;br /&gt;
&lt;br /&gt;
You must use the &amp;quot;cgroup&amp;quot; directory. You can apply defaults to all vservers or choose different settings for each guest:&lt;br /&gt;
&lt;br /&gt;
* /etc/vservers/.defaults/cgroup: settings applied to all guests when they start&lt;br /&gt;
* /etc/vservers/&amp;lt;guestname&amp;gt;/cgroup: settings applied to that guest when it starts.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Example :&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
mkdir /etc/vservers/.defaults/cgroup&lt;br /&gt;
mkdir /etc/vservers/&amp;lt;guestname&amp;gt;/cgroup&lt;br /&gt;
echo '2048' &amp;gt; /etc/vservers/&amp;lt;guestname&amp;gt;/cgroup/cpu.shares&lt;br /&gt;
# List of CPUs&lt;br /&gt;
echo 1 &amp;gt; /etc/vservers/&amp;lt;guestname&amp;gt;/cgroup/cpuset.cpus&lt;br /&gt;
# NUMA nodes&lt;br /&gt;
echo 1 &amp;gt; /etc/vservers/&amp;lt;guestname&amp;gt;/cgroup/cpuset.mems&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Note that /etc/vservers is an example path; in my Aqueos install I use /usr/local/etc/vservers, but /etc/vservers is the default for classic installs.&lt;br /&gt;
&lt;br /&gt;
Regards,&lt;br /&gt;
Ghislain.&lt;br /&gt;
&lt;br /&gt;
== cgroup and CFS based CPU hard limiting that replaces sched_hard ==&lt;br /&gt;
&lt;br /&gt;
===References===&lt;br /&gt;
You can find documentation about the CFS hard limiting in &amp;lt;tt&amp;gt;Documentation/scheduler/sched-cfs-hard-limits.txt&amp;lt;/tt&amp;gt; inside your kernel source dir.&lt;br /&gt;
&lt;br /&gt;
===Requirements===&lt;br /&gt;
This feature is currently available in &amp;lt;tt&amp;gt;patch-2.6.31.2-vs2.3.0.36.15.diff&amp;lt;/tt&amp;gt; and is still in the testing phase as of this patch set, so report any bugs to the mailing list.&lt;br /&gt;
&lt;br /&gt;
To get the hard limit set up on every vServer start you need a recent utils package. It worked for me with &amp;lt;tt&amp;gt;0.30.216-pre2864&amp;lt;/tt&amp;gt;.  (Download from [http://people.linux-vserver.org/~dhozac/t/uv-testing/ util-vserver prereleases])&lt;br /&gt;
&lt;br /&gt;
Before trying to set up limits for a guest, you should mount the cgroup filesystem:&lt;br /&gt;
&lt;br /&gt;
 [ -d /dev/cgroup ] || mkdir /dev/cgroup&lt;br /&gt;
 mount -t cgroup -ocpu none /dev/cgroup&lt;br /&gt;
&lt;br /&gt;
===Configuration===&lt;br /&gt;
Example of an upper bound of 2/5 (40%) of the total CPU power that a guest/cgroup can use:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
# force CFS hard limit (only needed for older kernel versions)&lt;br /&gt;
# echo 1 &amp;gt; /etc/vservers/&amp;lt;guestname&amp;gt;/cgroup/cpu.cfs_hard_limit&lt;br /&gt;
# CPU time assigned to the guest (in microseconds); 200000 = 0.2 s&lt;br /&gt;
echo 200000 &amp;gt; /etc/vservers/&amp;lt;guestname&amp;gt;/cgroup/cpu.cfs_runtime_us&lt;br /&gt;
# in each period of this length (in microseconds); 500000 = 0.5 s&lt;br /&gt;
echo 500000 &amp;gt; /etc/vservers/&amp;lt;guestname&amp;gt;/cgroup/cpu.cfs_period_us&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
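The runtime/period pair in this example implies the following cap:&lt;br /&gt;

```shell
# the fraction of CPU time granted per period by the example values
runtime_us=200000
period_us=500000
echo "cap: $(( 100 * runtime_us / period_us ))% of CPU time per period"
```
&lt;br /&gt;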
&lt;br /&gt;
This is a hard limit: think of it as a ceiling on the CPU time the cgroup can use.&lt;br /&gt;
&lt;br /&gt;
If you set both a CPU share and a hard limit, the system will do fine, but the hard limit takes priority over CPU-share scheduling: shares still distribute CPU time, but each cgroup has an upper bound it cannot cross, even if its share would entitle it to more.&lt;br /&gt;
&lt;br /&gt;
The hard limit feature adds 3 cgroup files for the CFS group scheduler:&lt;br /&gt;
* &amp;lt;tt&amp;gt;cfs_runtime_us&amp;lt;/tt&amp;gt;: Hard limit for the group in microseconds.&lt;br /&gt;
* &amp;lt;tt&amp;gt;cfs_period_us&amp;lt;/tt&amp;gt;: Time period in microseconds within which the hard limit is enforced.&lt;br /&gt;
* &amp;lt;tt&amp;gt;cfs_hard_limit&amp;lt;/tt&amp;gt;: The control file to enable or disable hard limiting for the group.&lt;br /&gt;
&lt;br /&gt;
== using cgroup to enforce memory limits ==&lt;br /&gt;
&lt;br /&gt;
In linux-vserver patch version vs2.3.0.36.29, memory limiting via cgroups was introduced. To use it, you need the following config lines in your kernel build (in addition to the others mentioned for cgroup CPU limits):&lt;br /&gt;
&lt;br /&gt;
* CONFIG_RESOURCE_COUNTERS=y&lt;br /&gt;
* CONFIG_CGROUP_MEM_RES_CTLR=y&lt;br /&gt;
* CONFIG_CGROUP_MEM_RES_CTLR_SWAP=y&lt;br /&gt;
&lt;br /&gt;
Make sure /dev/cgroup is mounted with -o ...,memory to be able to use this feature. The following files let you adjust the memory limits of a running vserver (create them in /etc/vservers/&amp;lt;name&amp;gt;/cgroup/ to make them permanent):&lt;br /&gt;
&lt;br /&gt;
* memory.memsw.limit_in_bytes: the combined memory + swap limit of your cgroup context&lt;br /&gt;
* memory.limit_in_bytes: the memory (RAM) limit&lt;br /&gt;
&lt;br /&gt;
Values are stored in bytes. When writing to these files you can use the suffixes K, M and G.&lt;br /&gt;
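For illustration, this is what those suffixes expand to in bytes (binary units, as used by the kernel's memory controller); the to_bytes helper is just a sketch, not a real tool:&lt;br /&gt;

```shell
# expand a K/M/G-suffixed value by hand, the way the limit files do
to_bytes() {
  case $1 in
    *K) echo $(( ${1%K} * 1024 )) ;;
    *M) echo $(( ${1%M} * 1024 * 1024 )) ;;
    *G) echo $(( ${1%G} * 1024 * 1024 * 1024 )) ;;
    *)  echo "$1" ;;          # no suffix: already bytes
  esac
}
to_bytes 512M   # prints 536870912
```
&lt;br /&gt;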
&lt;br /&gt;
Note: cgroup memory limits are intended to replace rss.soft and rss.hard at some point in the future.&lt;br /&gt;
If you want guests to see only their limited memory pool, be sure to include VIRT_MEM in your cflags config file.&lt;br /&gt;
&lt;br /&gt;
For a deeper understanding, check out Documentation/cgroups/memory.txt in your kernel source tree.&lt;br /&gt;
&lt;br /&gt;
= Real world Examples of Scheduling =&lt;br /&gt;
&lt;br /&gt;
This section is for working and tested examples you have put in place.&lt;br /&gt;
&lt;br /&gt;
Please add the following information for each example you put here (use &amp;lt;tt&amp;gt;vserver-info&amp;lt;/tt&amp;gt;).&lt;br /&gt;
* Base kernel version&lt;br /&gt;
* vServer version&lt;br /&gt;
* Other kernel patches in use (&amp;lt;tt&amp;gt;grsec&amp;lt;/tt&amp;gt;, etc.)&lt;br /&gt;
* &amp;lt;tt&amp;gt;util-vserver&amp;lt;/tt&amp;gt; release&lt;br /&gt;
&lt;br /&gt;
== Ben's install on Debian Lenny ==&lt;br /&gt;
&lt;br /&gt;
I used the kernels from [http://repo.psand.net], described at [http://kernels.bristolwireless.net/]. I've done this on a few versions; it works for 2.6.31.7 with patch vs2.3.0.36.27 on amd64, and also for 2.6.31.11 with patch vs2.3.0.36.28. I used the stock Lenny util-vserver, patched as described below. The kernel config is critically important: specific cgroup options are necessary to get cgroups working in this way. Check the configs of the [http://repo.psand.net] kernels to see which ones I used.&lt;br /&gt;
&lt;br /&gt;
==== Getting Lenny Ready ====&lt;br /&gt;
&lt;br /&gt;
Lenny ships a very old version of util-vserver; it needs the following patch (which essentially adds one line) before it will set the cgroups properly:&lt;br /&gt;
&lt;br /&gt;
 --- /usr/lib/util-vserver/vserver.suexec.orig	2008-12-12 22:56:25.000000000 -0600&lt;br /&gt;
 +++ /usr/lib/util-vserver/vserver.suexec	2009-08-20 02:11:42.000000000 -0500&lt;br /&gt;
 @@ -22,7 +22,8 @@ test -z &amp;quot;$is_stopped&amp;quot; -o &amp;quot;$OPTION_INSECU&lt;br /&gt;
      exit 1&lt;br /&gt;
  }&lt;br /&gt;
  generateOptions  &amp;quot;$VSERVER_DIR&amp;quot;&lt;br /&gt;
 -addtoCPUSET  &amp;quot;$VSERVER_DIR&amp;quot;&lt;br /&gt;
 +addtoCPUSET      &amp;quot;$VSERVER_DIR&amp;quot;&lt;br /&gt;
 +attachToCgroup   &amp;quot;$VSERVER_DIR&amp;quot;&lt;br /&gt;
  &lt;br /&gt;
  user=$1&lt;br /&gt;
  shift&lt;br /&gt;
&lt;br /&gt;
Next I added a correctly mounted cgroup file system on /dev/cgroup/. &lt;br /&gt;
&lt;br /&gt;
 $ mkdir /dev/cgroup&lt;br /&gt;
 $ mount -t cgroup vserver /dev/cgroup&lt;br /&gt;
&lt;br /&gt;
For util-vserver to do the right thing, this directory needed creating too:&lt;br /&gt;
&lt;br /&gt;
 $ mkdir /etc/vservers/.defaults/cgroup&lt;br /&gt;
&lt;br /&gt;
==== Sharing out the CPU between guest servers ====&lt;br /&gt;
&lt;br /&gt;
I have a few test guests hanging around that I play with, called onetime, twotime, threetime, fourtime and fivetime. In order to set the shares for each guest I did this:&lt;br /&gt;
&lt;br /&gt;
 mkdir /etc/vservers/fivetime/cgroup/ /etc/vservers/fourtime/cgroup/ /etc/vservers/threetime/cgroup/ /etc/vservers/twotime/cgroup/ /etc/vservers/onetime/cgroup/&lt;br /&gt;
 echo &amp;quot;512&amp;quot; &amp;gt; /etc/vservers/fivetime/cgroup/cpu.shares&lt;br /&gt;
 echo &amp;quot;1024&amp;quot; &amp;gt; /etc/vservers/fourtime/cgroup/cpu.shares&lt;br /&gt;
 echo &amp;quot;1024&amp;quot; &amp;gt; /etc/vservers/threetime/cgroup/cpu.shares&lt;br /&gt;
 echo &amp;quot;1536&amp;quot; &amp;gt; /etc/vservers/twotime/cgroup/cpu.shares&lt;br /&gt;
 echo &amp;quot;1024&amp;quot; &amp;gt; /etc/vservers/onetime/cgroup/cpu.shares&lt;br /&gt;
&lt;br /&gt;
Then I started the guests. When the system was loaded (I ran one instance of cpuburn on each guest; not advised, but a useful test) each should have got the following percentage of CPU.&lt;br /&gt;
&lt;br /&gt;
{| class=&amp;quot;wikitable&amp;quot;&lt;br /&gt;
! Guest Name !! cpu.share given !! percentage of cpu&lt;br /&gt;
|-&lt;br /&gt;
| fivetime || 512 || 10% &lt;br /&gt;
|-&lt;br /&gt;
| fourtime || 1024 || 20%&lt;br /&gt;
|-&lt;br /&gt;
| threetime || 1024 || 20%&lt;br /&gt;
|-&lt;br /&gt;
| twotime || 1536 || 30%&lt;br /&gt;
|-&lt;br /&gt;
| onetime || 1024 || 20%&lt;br /&gt;
|}&lt;br /&gt;
&lt;br /&gt;
This didn't quite happen, as each process could migrate to other CPUs. When I fixed every guest to use only one of the available CPUs (see below for how I did this), the percentage of processing time allotted to each guest was pretty much exact! Each process was given exactly its designated percentage of time according to vtop.&lt;br /&gt;
&lt;br /&gt;
==== Dishing out different processors sets to different guest servers ====&lt;br /&gt;
&lt;br /&gt;
The &amp;quot;cpuset&amp;quot; for each guest is the subset of CPUs which it is permitted to use. I found out the number of CPUs available on my system by doing this:&lt;br /&gt;
&lt;br /&gt;
 $ cat /dev/cgroup/cpuset.cpus&lt;br /&gt;
&lt;br /&gt;
This gave me the result 0-1, meaning that the overall set for my cgroups consists of CPUs 0 and 1 (for a quad core system one would expect the result 0-3, or for quad core with HT, 0-7). I stopped my guests, then for each guest specified a cpuset containing only CPU 0 for each of them:&lt;br /&gt;
&lt;br /&gt;
 $ echo &amp;quot;0&amp;quot; &amp;gt; /etc/vservers/onetime/cgroup/cpuset.cpus&lt;br /&gt;
 $ echo &amp;quot;0&amp;quot; &amp;gt; /etc/vservers/twotime/cgroup/cpuset.cpus&lt;br /&gt;
 $ echo &amp;quot;0&amp;quot; &amp;gt; /etc/vservers/threetime/cgroup/cpuset.cpus&lt;br /&gt;
 $ echo &amp;quot;0&amp;quot; &amp;gt; /etc/vservers/fourtime/cgroup/cpuset.cpus&lt;br /&gt;
 $ echo &amp;quot;0&amp;quot; &amp;gt; /etc/vservers/fivetime/cgroup/cpuset.cpus&lt;br /&gt;
&lt;br /&gt;
On restarting the guests, I could see (using vtop) that they were only using CPU 0 (the &amp;quot;Last used cpu (SMP)&amp;quot; column needs to be enabled in vtop to see this). This setup isn't particularly useful, but it did allow me to check that the cpu.shares I specified for my guests were working as expected.&lt;br /&gt;
&lt;br /&gt;
==== Doing this to servers live ====&lt;br /&gt;
&lt;br /&gt;
The parameters in the last two sections can be set while the servers are running. For example, to move the guest &amp;quot;threetime&amp;quot; so that it could use both CPUs, I did this:&lt;br /&gt;
&lt;br /&gt;
 $ echo &amp;quot;0-1&amp;quot; &amp;gt; /dev/cgroup/threetime/cpuset.cpus&lt;br /&gt;
&lt;br /&gt;
The processes running on threetime were instantly allocated cycles on both CPUs. Then:&lt;br /&gt;
&lt;br /&gt;
 $ echo &amp;quot;1&amp;quot; &amp;gt; /dev/cgroup/threetime/cpuset.cpus&lt;br /&gt;
&lt;br /&gt;
This shifted them all to CPU 1. One can change where cycles are allocated with impunity. The same goes for CPU shares:&lt;br /&gt;
&lt;br /&gt;
 $ echo &amp;quot;4096&amp;quot; &amp;gt; /dev/cgroup/threetime/cpu.shares&lt;br /&gt;
&lt;br /&gt;
This gave threetime a much bigger slice of the processors when it was under load.&lt;br /&gt;
&lt;br /&gt;
'''NOTE''': The range &amp;quot;0-1&amp;quot; is not the only way of specifying a set of CPUs; I could have used &amp;quot;0,1&amp;quot;. On bigger systems with, say, 8 CPUs, one could use &amp;quot;0-2,4,5&amp;quot;, which is the same as &amp;quot;0,1,2,4,5&amp;quot; or &amp;quot;0-2,4-5&amp;quot;.&lt;br /&gt;
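For illustration, here is a small sketch that expands such a spec into individual CPU numbers (the expand_cpuset helper is hypothetical, not a real tool):&lt;br /&gt;

```shell
# expand a cpuset spec like "0-2,4,5" into individual CPU numbers
expand_cpuset() {
  local part out="" lo hi i
  for part in ${1//,/ }; do        # split on commas
    case $part in
      *-*)                         # a range: emit every CPU in it
        lo=${part%-*}; hi=${part#*-}
        i=$lo
        while [ "$i" -le "$hi" ]; do out="$out $i"; i=$(( i + 1 )); done
        ;;
      *) out="$out $part" ;;       # a single CPU
    esac
  done
  echo $out
}
expand_cpuset "0-2,4,5"   # prints: 0 1 2 4 5
```
&lt;br /&gt;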
&lt;br /&gt;
==== Making sure all of this gets set up after a reboot ====&lt;br /&gt;
&lt;br /&gt;
This process will make sure /dev/cgroup is present at boot and correctly mounted:&lt;br /&gt;
&lt;br /&gt;
* patch util-vserver (see above)&lt;br /&gt;
* mkdir /etc/vservers/.defaults/cgroup&lt;br /&gt;
* mkdir /lib/udev/devices/cgroup (this will mean that the /dev/cgroup is created early in the boot process)&lt;br /&gt;
* add the following line to /etc/fstab&lt;br /&gt;
 vserver    /dev/cgroup    cgroup    defaults    0 0&lt;/div&gt;</summary>
		<author><name>Petzsch</name></author>	</entry>

	<entry>
		<id>http://linux-vserver.at/util-vserver:Cgroups</id>
		<title>util-vserver:Cgroups</title>
		<link rel="alternate" type="text/html" href="http://linux-vserver.at/util-vserver:Cgroups"/>
				<updated>2010-02-15T00:07:15Z</updated>
		
		<summary type="html">&lt;p&gt;Petzsch: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;
&lt;br /&gt;
== Kernel configuration ==&lt;br /&gt;
&lt;br /&gt;
When configuring your kernel for cgroups with util-vserver, you must make sure CONFIG_CGROUP_NS is unset for the time being, so that guests start properly.&lt;br /&gt;
&lt;br /&gt;
== Draft - Distributing cpu shares with cgroups ==&lt;br /&gt;
&lt;br /&gt;
From what I gathered in sched-design-CFS.txt [http://people.redhat.com/mingo/cfs-scheduler/sched-design-CFS.txt]:&lt;br /&gt;
&lt;br /&gt;
This is simply done by adjusting the cpu.shares. Just do:&lt;br /&gt;
&lt;br /&gt;
echo '512' &amp;gt; /dev/cgroup/&amp;lt;guest name&amp;gt;/cpu.shares&lt;br /&gt;
&lt;br /&gt;
The share a guest gets is equal to its cpu.shares value divided by the sum of the cpu shares of all guests. For example:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
vserver guest 1 =&amp;gt; 512   &lt;br /&gt;
vserver guest 2 =&amp;gt; 512&lt;br /&gt;
vserver guest 3 =&amp;gt; 2048&lt;br /&gt;
vserver guest 4 =&amp;gt; 512&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
So with a total of 3584 cpu shares (2048 + 512 + 512 + 512), you get:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
vserver guest 1 =&amp;gt; 512 / 3584 = 14%  cpu&lt;br /&gt;
vserver guest 2 =&amp;gt; 512 / 3584 = 14%  cpu&lt;br /&gt;
vserver guest 3 =&amp;gt; 2048 / 3584 = 57% cpu&lt;br /&gt;
vserver guest 4 =&amp;gt; 512 / 3584 = 14%  cpu&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Note that this is fair-share scheduling: it does not enforce a hard limit (as far as I know).&lt;br /&gt;
&lt;br /&gt;
== Making share permanent with util vserver ==&lt;br /&gt;
&lt;br /&gt;
You must use the &amp;quot;cgroup&amp;quot; directory. You can apply defaults to all vservers or choose different settings for each guest:&lt;br /&gt;
&lt;br /&gt;
* /etc/vservers/.defaults/cgroup: settings applied to all guests when they start&lt;br /&gt;
* /etc/vservers/&amp;lt;guestname&amp;gt;/cgroup: settings applied to that guest when it starts.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Example :&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
mkdir /etc/vservers/.defaults/cgroup&lt;br /&gt;
mkdir /etc/vservers/&amp;lt;guestname&amp;gt;/cgroup&lt;br /&gt;
echo '2048' &amp;gt; /etc/vservers/&amp;lt;guestname&amp;gt;/cgroup/cpu.shares&lt;br /&gt;
# List of CPUs&lt;br /&gt;
echo 1 &amp;gt; /etc/vservers/&amp;lt;guestname&amp;gt;/cgroup/cpuset.cpus&lt;br /&gt;
# NUMA nodes&lt;br /&gt;
echo 1 &amp;gt; /etc/vservers/&amp;lt;guestname&amp;gt;/cgroup/cpuset.mems&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Note that /etc/vservers is an example path; in my Aqueos install I use /usr/local/etc/vservers, but /etc/vservers is the default for classic installs.&lt;br /&gt;
&lt;br /&gt;
Regards,&lt;br /&gt;
Ghislain.&lt;br /&gt;
&lt;br /&gt;
== cgroup and CFS based CPU hard limiting that replaces sched_hard ==&lt;br /&gt;
&lt;br /&gt;
You can find documentation about the CFS hard limiting in Documentation/scheduler/sched-cfs-hard-limits.txt inside your kernel source dir.&lt;br /&gt;
&lt;br /&gt;
This feature is currently available in patch-2.6.31.2-vs2.3.0.36.15.diff and is still in the testing phase as of this patch set, so report any bugs to the mailing list.&lt;br /&gt;
&lt;br /&gt;
To get the hard limit set up on every vserver start you need a recent utils package. It worked for me with 0.30.216-pre2864.&lt;br /&gt;
&lt;br /&gt;
Before trying to set up limits for a guest, you should mount the cgroup filesystem:&lt;br /&gt;
&lt;br /&gt;
 [ -d /dev/cgroup ] || mkdir /dev/cgroup&lt;br /&gt;
 mount -t cgroup -ocpu none /dev/cgroup&lt;br /&gt;
&lt;br /&gt;
Example of an upper bound of 2/5 (40%) of the total CPU power that a guest/cgroup can use:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
# force CFS hard limit (only needed for older kernel versions)&lt;br /&gt;
# echo 1 &amp;gt; /etc/vservers/&amp;lt;guestname&amp;gt;/cgroup/cpu.cfs_hard_limit&lt;br /&gt;
# time assigned to guest (in microseconds) 200000 = 0,2 sec &lt;br /&gt;
echo 200000 &amp;gt; /etc/vservers/&amp;lt;guestname&amp;gt;/cgroup/cpu.cfs_runtime_us&lt;br /&gt;
# in each specified period (in microseconds); 500000 = 0.5 sec &lt;br /&gt;
echo 500000 &amp;gt; /etc/vservers/&amp;lt;guestname&amp;gt;/cgroup/cpu.cfs_period_us&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
This limit is a hard limit: think of it as a ceiling on the resources the cgroup can use. &lt;br /&gt;
If you set both a cpu share AND a hard limit the system will work fine, but the hard limit takes priority over cpu share scheduling: cpu shares still do their job, but each cgroup gets an upper bound that it cannot cross even if the cpu share you gave it is higher.&lt;br /&gt;
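&lt;br /&gt;
The effective CPU cap follows directly from the two values: cfs_runtime_us divided by cfs_period_us. A quick sketch with plain shell arithmetic (no vserver tools needed; values taken from the example above):&lt;br /&gt;

```shell
# Effective hard-limit percentage = cfs_runtime_us / cfs_period_us
runtime_us=200000   # time assigned to the guest per period
period_us=500000    # length of each period
percent=$(( runtime_us * 100 / period_us ))
echo "${percent}%"    # 40%
```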
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
  The hard limit feature adds 3 cgroup files for the CFS group scheduler:&lt;br /&gt;
cfs_runtime_us: Hard limit for the group in microseconds.&lt;br /&gt;
cfs_period_us: Time period in microseconds within which the hard limit is enforced.&lt;br /&gt;
cfs_hard_limit: The control file to enable or disable hard limiting for the group.&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
== using cgroup to enforce memory limits ==&lt;br /&gt;
&lt;br /&gt;
In linux-vserver patch version vs2.3.0.36.29, memory limiting via cgroup was introduced. To use it you need the following config lines in your kernel build (in addition to the ones mentioned for cgroup CPU limits):&lt;br /&gt;
&lt;br /&gt;
* CONFIG_RESOURCE_COUNTERS=y&lt;br /&gt;
* CONFIG_CGROUP_MEM_RES_CTLR=y&lt;br /&gt;
* CONFIG_CGROUP_MEM_RES_CTLR_SWAP=y&lt;br /&gt;
&lt;br /&gt;
Make sure /dev/cgroup is mounted with -o...,memory to be able to use this feature. The following files let you adjust the memory limits of a running vserver (create them in /etc/vservers/&amp;lt;guestname&amp;gt;/cgroup/ to make them permanent):&lt;br /&gt;
&lt;br /&gt;
* memory.memsw.limit_in_bytes: the total memory limit (memory + swap) of your cgroup context&lt;br /&gt;
* memory.limit_in_bytes: the memory (RAM) limit&lt;br /&gt;
&lt;br /&gt;
Values are stored in bytes. When writing to these files you can use the suffixes K, M and G.&lt;br /&gt;
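&lt;br /&gt;
For illustration, the suffixes are binary multiples (K = 1024, M = 1024*1024, G = 1024*1024*1024). A sketch with plain shell arithmetic showing what the kernel stores when you write &amp;quot;512M&amp;quot;:&lt;br /&gt;

```shell
# What the kernel stores when you write "512M" to memory.limit_in_bytes:
# suffixes are binary multiples (K = 1024, M = 1024*1024, G = 1024*1024*1024)
limit_bytes=$(( 512 * 1024 * 1024 ))
echo "$limit_bytes"    # 536870912
```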
&lt;br /&gt;
Note: cgroup memory limits are intended to replace rss.soft and rss.hard some time in the future.&lt;br /&gt;
As of vs2.3.0.36.29 the tools top and free do not show the limited memory amount assigned to the guest.&lt;br /&gt;
&lt;br /&gt;
For a deeper understanding check out Documentation/cgroups/memory.txt of your kernel source tree.&lt;br /&gt;
&lt;br /&gt;
== real-world examples of scheduling ==&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
This part is to be filled with examples you have put in place that are working and have been tested; please add the patch and kernel version for each example you add here.&lt;br /&gt;
&lt;br /&gt;
=== Ben's install on Debian Lenny ===&lt;br /&gt;
&lt;br /&gt;
I used the kernels from [http://repo.psand.net], described at [http://kernels.bristolwireless.net/]. I've done this on a few versions; it works for 2.6.31.7 with patch vs2.3.0.36.27 on amd64, and also 2.6.31.11 with patch vs2.3.0.36.28. I used the stock Lenny util-vserver, patched as described below. The kernel config is critically important, with specific cgroup options necessary to get cgroups working in this way. The options used in the kernels from repo.psand.net were:&lt;br /&gt;
&lt;br /&gt;
 CONFIG_CGROUP_SCHED=y&lt;br /&gt;
 CONFIG_CGROUPS=y&lt;br /&gt;
 # CONFIG_CGROUP_DEBUG is not set&lt;br /&gt;
 # CONFIG_CGROUP_NS is not set&lt;br /&gt;
 CONFIG_CGROUP_FREEZER=y&lt;br /&gt;
 CONFIG_CGROUP_DEVICE=y&lt;br /&gt;
 CONFIG_CGROUP_CPUACCT=y&lt;br /&gt;
 CONFIG_CGROUP_MEM_RES_CTLR=y&lt;br /&gt;
 # CONFIG_CGROUP_MEM_RES_CTLR_SWAP is not set&lt;br /&gt;
 CONFIG_NET_CLS_CGROUP=y&lt;br /&gt;
&lt;br /&gt;
==== Getting Lenny Ready ====&lt;br /&gt;
&lt;br /&gt;
There's a very old version of util-vserver in Lenny; it needs this patch applied before it will set the cgroups properly (it basically only adds one line). I patched mine with this patch found on the Linux-VServer mailing list:&lt;br /&gt;
&lt;br /&gt;
 --- /usr/lib/util-vserver/vserver.suexec.orig	2008-12-12 22:56:25.000000000 -0600&lt;br /&gt;
 +++ /usr/lib/util-vserver/vserver.suexec	2009-08-20 02:11:42.000000000 -0500&lt;br /&gt;
 @@ -22,7 +22,8 @@ test -z &amp;quot;$is_stopped&amp;quot; -o &amp;quot;$OPTION_INSECU&lt;br /&gt;
      exit 1&lt;br /&gt;
  }&lt;br /&gt;
  generateOptions  &amp;quot;$VSERVER_DIR&amp;quot;&lt;br /&gt;
 -addtoCPUSET  &amp;quot;$VSERVER_DIR&amp;quot;&lt;br /&gt;
 +addtoCPUSET      &amp;quot;$VSERVER_DIR&amp;quot;&lt;br /&gt;
 +attachToCgroup   &amp;quot;$VSERVER_DIR&amp;quot;&lt;br /&gt;
  &lt;br /&gt;
  user=$1&lt;br /&gt;
  shift&lt;br /&gt;
&lt;br /&gt;
Next I added a correctly mounted cgroup file system on /dev/cgroup/. &lt;br /&gt;
&lt;br /&gt;
 $ mkdir /dev/cgroup&lt;br /&gt;
 $ mount -t cgroup vserver /dev/cgroup&lt;br /&gt;
&lt;br /&gt;
For util-vserver to do the right thing, this directory needs adding too:&lt;br /&gt;
&lt;br /&gt;
 $ mkdir /etc/vservers/.defaults/cgroup&lt;br /&gt;
&lt;br /&gt;
==== Sharing out the CPU between guest servers ====&lt;br /&gt;
&lt;br /&gt;
I have a few test guests hanging around that I play with, called onetime, twotime, threetime, fourtime and fivetime. In order to set the shares for each guest I did this:&lt;br /&gt;
&lt;br /&gt;
 mkdir /etc/vservers/fivetime/cgroup/ /etc/vservers/fourtime/cgroup/ /etc/vservers/threetime/cgroup/ /etc/vservers/twotime/cgroup/ /etc/vservers/onetime/cgroup/&lt;br /&gt;
 echo &amp;quot;512&amp;quot; &amp;gt; /etc/vservers/fivetime/cgroup/cpu.shares&lt;br /&gt;
 echo &amp;quot;1024&amp;quot; &amp;gt; /etc/vservers/fourtime/cgroup/cpu.shares&lt;br /&gt;
 echo &amp;quot;1024&amp;quot; &amp;gt; /etc/vservers/threetime/cgroup/cpu.shares&lt;br /&gt;
 echo &amp;quot;1536&amp;quot; &amp;gt; /etc/vservers/twotime/cgroup/cpu.shares&lt;br /&gt;
 echo &amp;quot;1024&amp;quot; &amp;gt; /etc/vservers/onetime/cgroup/cpu.shares&lt;br /&gt;
&lt;br /&gt;
Then I started the guests. When the system was loaded (I used one instance of cpuburn on each guest; not advised, but it worked for me) each should have got the following percentage of CPU.&lt;br /&gt;
&lt;br /&gt;
{| class=&amp;quot;wikitable&amp;quot;&lt;br /&gt;
! Guest Name !! cpu.share given !! percentage of cpu&lt;br /&gt;
|-&lt;br /&gt;
| fivetime || 512 || 10% &lt;br /&gt;
|-&lt;br /&gt;
| fourtime || 1024 || 20%&lt;br /&gt;
|-&lt;br /&gt;
| threetime || 1024 || 20%&lt;br /&gt;
|-&lt;br /&gt;
| twotime || 1536 || 30%&lt;br /&gt;
|-&lt;br /&gt;
| onetime || 1024 || 20%&lt;br /&gt;
|}&lt;br /&gt;
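&lt;br /&gt;
The percentages in the table are just each guest's share divided by the sum of all shares (512 + 1024 + 1024 + 1536 + 1024 = 5120). A small awk sketch reproducing the table (guest names and share values as above):&lt;br /&gt;

```shell
# Recompute the expected CPU percentage for each guest from its cpu.shares.
result=$(awk 'BEGIN {
    n = split("fivetime fourtime threetime twotime onetime", name, " ")
    split("512 1024 1024 1536 1024", share, " ")
    total = 0
    for (i = 1; i <= n; i++) total += share[i]
    for (i = 1; i <= n; i++)
        printf "%s %d%%\n", name[i], share[i] * 100 / total
}')
echo "$result"
```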
&lt;br /&gt;
This didn't quite happen, as each process could migrate to other CPUs. I pinned every guest to the same single CPU (see below for how I did this) and the percentages were pretty much exact! Each guest was given exactly its designated percentage of time according to vtop.&lt;br /&gt;
&lt;br /&gt;
==== Dishing out different processors to different guest servers ====&lt;br /&gt;
&lt;br /&gt;
To limit each guest, the cpuset of each cgroup needs to be changed. I found out the number of CPUs available by doing this:&lt;br /&gt;
&lt;br /&gt;
 $ cat /dev/cgroup/cpuset.cpus&lt;br /&gt;
&lt;br /&gt;
This gave me the result 0-1, meaning that the set consists of CPUs 0 and 1 (for a quad-core system one would expect the result 0-3, or for quad-core with HT, 0-7). I stopped my guests, then specified a cpuset limited to CPU 0 for each of them:&lt;br /&gt;
&lt;br /&gt;
 $ echo &amp;quot;0&amp;quot; &amp;gt; /etc/vservers/onetime/cgroup/cpuset.cpus&lt;br /&gt;
 $ echo &amp;quot;0&amp;quot; &amp;gt; /etc/vservers/twotime/cgroup/cpuset.cpus&lt;br /&gt;
 $ echo &amp;quot;0&amp;quot; &amp;gt; /etc/vservers/threetime/cgroup/cpuset.cpus&lt;br /&gt;
 $ echo &amp;quot;0&amp;quot; &amp;gt; /etc/vservers/fourtime/cgroup/cpuset.cpus&lt;br /&gt;
 $ echo &amp;quot;0&amp;quot; &amp;gt; /etc/vservers/fivetime/cgroup/cpuset.cpus&lt;br /&gt;
&lt;br /&gt;
This meant that, on restarting, I could see with vtop that these guests were only using CPU 0 (the column &amp;quot;Last used cpu (SMP)&amp;quot; needs to be enabled in vtop in order to see this). This setup isn't particularly useful, but it did allow me to check that the percentages I had intended for my cpu shares were working.&lt;br /&gt;
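&lt;br /&gt;
The cpuset strings used here follow the kernel's list format: comma-separated entries, each a single CPU or a range. A small sketch of a helper (a hypothetical function, not part of util-vserver) that expands such a spec into individual CPU ids:&lt;br /&gt;

```shell
# Expand a kernel cpuset list spec like "0-2,4,5" into individual CPU ids.
expand_cpuset() {
    for part in $(echo "$1" | tr ',' ' '); do
        case "$part" in
            *-*) seq "${part%-*}" "${part#*-}" ;;   # a range like 0-2
            *)   echo "$part" ;;                    # a single CPU id
        esac
    done
}
expand_cpuset "0-2,4,5" | tr '\n' ' '    # 0 1 2 4 5
```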
&lt;br /&gt;
==== Doing this to servers live ====&lt;br /&gt;
&lt;br /&gt;
The parameters in the last two sections can be set while the guests are running. For example, to move the guest &amp;quot;threetime&amp;quot; so that it could use both CPUs, I did this:&lt;br /&gt;
&lt;br /&gt;
 $ echo &amp;quot;0-1&amp;quot; &amp;gt; /dev/cgroup/threetime/cpuset.cpus&lt;br /&gt;
&lt;br /&gt;
The processes running on threetime were instantly allocated cycles on both CPUs. Then:&lt;br /&gt;
&lt;br /&gt;
 $ echo &amp;quot;1&amp;quot; &amp;gt; /dev/cgroup/threetime/cpuset.cpus&lt;br /&gt;
&lt;br /&gt;
This shifts them all to CPU 1. One can change where cycles are allocated with impunity. The same goes for CPU shares:&lt;br /&gt;
&lt;br /&gt;
 $ echo &amp;quot;4096&amp;quot; &amp;gt; /dev/cgroup/threetime/cpu.shares&lt;br /&gt;
&lt;br /&gt;
This gave threetime a much bigger slice of the processors when it was under load.&lt;br /&gt;
&lt;br /&gt;
'''NOTE''': The range &amp;quot;0-1&amp;quot; is not the only way of specifying a set of CPUs, I could have used &amp;quot;0,1&amp;quot;. On bigger systems, with say 8 CPUs one could use &amp;quot;0-2,4,5&amp;quot;, which would be the same as &amp;quot;0,1,2,4,5&amp;quot; or &amp;quot;0-2,4-5&amp;quot;.&lt;/div&gt;</summary>
		<author><name>Petzsch</name></author>	</entry>

	</feed>