<?xml version="1.0"?>
<?xml-stylesheet type="text/css" href="http://linux-vserver.at/skins/common/feed.css?303"?>
<feed xmlns="http://www.w3.org/2005/Atom" xml:lang="en">
		<id>http://linux-vserver.at/api.php?action=feedcontributions&amp;feedformat=atom&amp;user=Kbad</id>
		<title>Linux-VServer - User contributions [en]</title>
		<link rel="self" type="application/atom+xml" href="http://linux-vserver.at/api.php?action=feedcontributions&amp;feedformat=atom&amp;user=Kbad"/>
		<link rel="alternate" type="text/html" href="http://linux-vserver.at/Special:Contributions/Kbad"/>
		<updated>2026-04-09T22:48:06Z</updated>
		<subtitle>User contributions</subtitle>
		<generator>MediaWiki 1.20.2</generator>

	<entry>
		<id>http://linux-vserver.at/util-vserver:Cgroups</id>
		<title>util-vserver:Cgroups</title>
		<link rel="alternate" type="text/html" href="http://linux-vserver.at/util-vserver:Cgroups"/>
				<updated>2009-12-02T19:46:13Z</updated>
		
		<summary type="html">&lt;p&gt;Kbad: /* Kernel configuration */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;
== Kernel configuration ==&lt;br /&gt;
&lt;br /&gt;
When configuring your kernel for cgroups with util-vserver, make sure CONFIG_CGROUP_NS is unset; for the time being, guests will not start properly with it enabled.&lt;br /&gt;
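&lt;br /&gt;
A minimal sketch of how to check for that option in a kernel config. A real check would point at your build tree's .config (or /boot/config-*); here a throwaway file stands in so the sketch is self-contained:&lt;br /&gt;

```shell
# Check whether CONFIG_CGROUP_NS is disabled in a kernel config file.
# A disabled option appears as a comment line, an enabled one as FOO=y.
cfg=$(mktemp)                                  # stand-in for a real .config
printf '# CONFIG_CGROUP_NS is not set\n' > "$cfg"
if grep -q '^CONFIG_CGROUP_NS=y' "$cfg"; then
    status=set        # guests may fail to start
else
    status=unset      # what util-vserver currently needs
fi
echo "CONFIG_CGROUP_NS is $status"
rm -f "$cfg"
```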
&lt;br /&gt;
== Draft - Distributing cpu shares with cgroups ==&lt;br /&gt;
&lt;br /&gt;
From what I gathered in sched-design-CFS.txt [http://people.redhat.com/mingo/cfs-scheduler/sched-design-CFS.txt]:&lt;br /&gt;
&lt;br /&gt;
Shares are distributed simply by adjusting cpu.shares for each guest:&lt;br /&gt;
&lt;br /&gt;
echo '512' &amp;gt; /dev/cgroup/&amp;lt;guest name&amp;gt;/cpu.shares&lt;br /&gt;
&lt;br /&gt;
The fraction of cpu a guest gets is its share divided by the sum of the cpu shares of all guests. For example:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
vserver guest 1 =&amp;gt; 512   &lt;br /&gt;
vserver guest 2 =&amp;gt; 512&lt;br /&gt;
vserver guest 3 =&amp;gt; 2048&lt;br /&gt;
vserver guest 4 =&amp;gt; 512&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
So you have a total of 3584 cpu shares (2048 + 512 + 512 + 512), and each guest gets:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
vserver guest 1 =&amp;gt; 512 / 3584 = 14%  cpu&lt;br /&gt;
vserver guest 2 =&amp;gt; 512 / 3584 = 14%  cpu&lt;br /&gt;
vserver guest 3 =&amp;gt; 2048 / 3584 = 57% cpu&lt;br /&gt;
vserver guest 4 =&amp;gt; 512 / 3584 = 14%  cpu&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
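&lt;br /&gt;
The arithmetic above can be reproduced with plain shell (integer division, so the percentages are truncated):&lt;br /&gt;

```shell
# Each guest's cpu fraction = its cpu.shares / sum of all guests' shares.
g1=512 g2=512 g3=2048 g4=512
total=$((g1 + g2 + g3 + g4))
echo "total shares: $total"
for s in $g1 $g2 $g3 $g4; do
    echo "$s / $total = $((100 * s / total))% cpu"
done
```

This reproduces the 14% / 14% / 57% / 14% split from the table above.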
&lt;br /&gt;
Note that this is fair scheduling and will not enforce a HARD limit (as far as I know).&lt;br /&gt;
&lt;br /&gt;
== Making shares permanent with util-vserver ==&lt;br /&gt;
&lt;br /&gt;
You must use the &amp;quot;cgroup&amp;quot; directory. You can apply defaults to all vservers or choose different settings for each guest:&lt;br /&gt;
&lt;br /&gt;
* /etc/vservers/.defaults/cgroup , this directory contains settings applied to all guests when they start&lt;br /&gt;
* /etc/vservers/&amp;lt;guestname&amp;gt;/cgroup , this directory contains settings for that specific guest when it starts.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Example:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
mkdir /etc/vservers/.defaults/cgroup&lt;br /&gt;
mkdir /etc/vservers/&amp;lt;guestname&amp;gt;/cgroup&lt;br /&gt;
echo '2048' &amp;gt; /etc/vservers/&amp;lt;guestname&amp;gt;/cgroup/cpu.shares&lt;br /&gt;
# List of CPUs&lt;br /&gt;
echo 1 &amp;gt; /etc/vservers/&amp;lt;guestname&amp;gt;/cgroup/cpuset.cpus&lt;br /&gt;
# NUMA nodes&lt;br /&gt;
echo 1 &amp;gt; /etc/vservers/&amp;lt;guestname&amp;gt;/cgroup/cpuset.mems&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
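&lt;br /&gt;
A self-contained sketch of that layout, using a throwaway directory in place of /etc/vservers and a hypothetical guest name "myguest":&lt;br /&gt;

```shell
# Recreate the per-guest cgroup config layout in a temp directory.
root=$(mktemp -d)                      # stand-in for /etc/vservers
mkdir -p "$root/.defaults/cgroup"      # defaults applied to every guest
mkdir -p "$root/myguest/cgroup"        # settings for one guest ("myguest" is hypothetical)
echo 2048 > "$root/myguest/cgroup/cpu.shares"
shares=$(cat "$root/myguest/cgroup/cpu.shares")
echo "cpu.shares for myguest: $shares"
rm -rf "$root"
```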
&lt;br /&gt;
Note that /etc/vservers is an example; on my Aqueos install I use /usr/local/etc/vservers, but /etc/vservers seems to be the default for classic installs.&lt;br /&gt;
&lt;br /&gt;
Regards,&lt;br /&gt;
Ghislain.&lt;br /&gt;
&lt;br /&gt;
== cgroup and CFS based CPU hard limiting that replaces sched_hard ==&lt;br /&gt;
&lt;br /&gt;
This feature is currently available in patch-2.6.31.2-vs2.3.0.36.15.diff. It is still in the testing phase as of this patch set, so report any bugs to the mailing list.&lt;br /&gt;
&lt;br /&gt;
Example of an upper bound of 2/5 (or 40%) of the total cpu power that a guest/cgroup can use:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
# force CFS hard limit&lt;br /&gt;
echo 1 &amp;gt; /etc/vservers/&amp;lt;guestname&amp;gt;/cgroup/cpu.cfs_hard_limit&lt;br /&gt;
# time assigned to the guest (in microseconds): 200000 = 0.2 sec&lt;br /&gt;
echo 200000 &amp;gt; /etc/vservers/&amp;lt;guestname&amp;gt;/cgroup/cpu.cfs_runtime_us&lt;br /&gt;
# within each specified period (in microseconds): 500000 = 0.5 sec&lt;br /&gt;
echo 500000 &amp;gt; /etc/vservers/&amp;lt;guestname&amp;gt;/cgroup/cpu.cfs_period_us&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
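&lt;br /&gt;
The 40% figure follows directly from the two values: the cap is cfs_runtime_us divided by cfs_period_us. A quick check in shell:&lt;br /&gt;

```shell
# The hard cap is runtime/period: the guest may run for at most
# cfs_runtime_us out of every cfs_period_us microseconds.
runtime_us=200000
period_us=500000
cap=$((100 * runtime_us / period_us))
echo "upper bound: $cap% of total cpu"
```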
&lt;br /&gt;
This limit is a hard limit: see it as a ceiling on the resources the cgroup can use.&lt;br /&gt;
If you set both a cpu share AND a hard limit, the system will do fine, but the hard limit takes priority over cpu-share scheduling: the cpu shares still do their job, yet each cgroup has an upper bound it cannot cross, even if the cpu share you gave it would allow more.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
The hard limit feature adds 3 cgroup files for the CFS group scheduler:&lt;br /&gt;
cfs_runtime_us: Hard limit for the group in microseconds.&lt;br /&gt;
cfs_period_us: Time period in microseconds within which the hard limit is enforced.&lt;br /&gt;
cfs_hard_limit: The control file to enable or disable hard limiting for the group.&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
== Real world examples of scheduling ==&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
This section is to be filled with examples that you have put in place, that work, and that have been tested; please add the patch and kernel version for each example.&lt;/div&gt;</summary>
		<author><name>Kbad</name></author>	</entry>

	</feed>