Manually managing the bay nodes
Magnum manages bay nodes by using ResourceGroup from Heat. This approach works, but it makes it infeasible to manage heterogeneity across bay nodes, which is a frequently requested feature. For example, there is a request to provision bay nodes across availability zones [1], and another request to provision bay nodes with different sets of flavors [2]. ResourceGroup does not handle these requests well.
The proposal is to remove the usage of ResourceGroup and manually create a Heat stack for each bay node. For example, to create a cluster with 2 masters and 3 minions, Magnum would manage 6 Heat stacks (instead of 1 big Heat stack as it does today):
* A kube cluster stack that manages the global resources
* Two kube master stacks that manage the two master nodes
* Three kube minion stacks that manage the three minion nodes
The proposal might require an additional API endpoint to manage nodes or a group of nodes, but that should be addressed in a separate blueprint.
[1] https:/
[2] https:/
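For illustration, below is a minimal sketch of the stack layout the proposal implies for the 2-master/3-minion example above. The stack names, template file names and python-heatclient usage are assumptions for illustration only, not part of the proposal:
# Hypothetical sketch: stack names, template files and client setup are illustrative.
from heatclient import client as heat_client

heat = heat_client.Client('1', session=keystone_session)  # an authenticated session is assumed

stacks = {
    'kube-cluster': 'cluster.yaml',      # global resources shared by the bay
    'kube-master-0': 'kubemaster.yaml',  # one stack per master node
    'kube-master-1': 'kubemaster.yaml',
    'kube-minion-0': 'kubeminion.yaml',  # one stack per minion node
    'kube-minion-1': 'kubeminion.yaml',
    'kube-minion-2': 'kubeminion.yaml',
}

for name, template_file in stacks.items():
    with open(template_file) as f:
        heat.stacks.create(stack_name=name, template=f.read(), parameters={})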
Whiteboard
Below are the implementation options:
* Option 1:
Implement it declaratively in Heat templates. For example, if users want to create a cluster with 5 nodes, Magnum will generate a mapping of parameters for each node (the map values below are illustrative):
$ heat stack-create -f cluster.yaml \
    -P count=5 \
    -P az_map='{"0": "az-1", "1": "az-1", "2": "az-2", "3": "az-2", "4": "az-3"}' \
    -P flavor_map='{"0": "m1.small", "1": "m1.small", "2": "m1.small", "3": "m1.medium", "4": "m1.large"}'
The top-level template contains a single ResourceGroup. The trick is to pass %index% to the nested template.
$ cat cluster.yaml
heat_template_version: 2013-05-23
parameters:
  count:
    type: integer
  az_map:
    type: json
  flavor_map:
    type: json
resources:
  AGroup:
    type: OS::Heat::ResourceGroup
    properties:
      count: {get_param: count}
      resource_def:
        type: server.yaml
        properties:
          availability_zone_map: {get_param: az_map}
          flavor_map: {get_param: flavor_map}
          index: '%index%'
In the nested template, 'index' is used to look up the per-node values in the maps.
$ cat server.yaml
heat_template_version: 2013-05-23
parameters:
  availability_zone_map:
    type: json
  flavor_map:
    type: json
  index:
    type: string
resources:
  server:
    type: OS::Nova::Server
    properties:
      image: the_image
      availability_zone: {get_param: [availability_zone_map, {get_param: index}]}
      flavor: {get_param: [flavor_map, {get_param: index}]}
This approach has a critical drawback. As pointed out by Zane [1], we cannot remove a member from the middle of the list, so a specific node (for example, a failed one) cannot be dropped without disturbing the rest of the group. Therefore, the usage of ResourceGroup is not recommended.
* Option 2:
Generate the Heat template by using the generator [2]. The code to generate the Heat template would look something like the following:
$ cat generator.py
# NOTE: the os-hotgen calls below are reconstructed; exact class and method
# names in the library may differ.
from os_hotgen import composer
from os_hotgen import heat

tmpl_a = heat.Template()
...
for group in rsr_groups:
    # parameters
    param_name = group.name + '_flavor'
    param_type = 'string'
    param_flavor = heat.Parameter(param_name, param_type)
    tmpl_a.add_parameter(param_flavor)
    param_name = group.name + '_az'
    param_type = 'string'
    param_az = heat.Parameter(param_name, param_type)
    tmpl_a.add_parameter(param_az)
    ...
    # resources
    rsc = heat.Resource(group.name, 'OS::Heat::ResourceGroup')
    resource_def = {
        'type': 'server.yaml',
        ...
    }
    resource_def_prp = heat.ResourceProperty('resource_def', resource_def)
    rsc.add_property(resource_def_prp)
    count_prp = heat.ResourceProperty('count', group.count)
    rsc.add_property(count_prp)
    tmpl_a.add_resource(rsc)
    ...

print(composer.compose_template(tmpl_a))
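The generated template would then be submitted to Heat in the usual way. Below is a minimal sketch of that step, assuming python-heatclient; the stack name, parameters and the generated_yaml variable (holding the composer output) are illustrative:
# Hypothetical glue code: submit the generated template to Heat.
from heatclient import client as heat_client

heat = heat_client.Client('1', session=keystone_session)  # an authenticated session is assumed
generated_yaml = composer.compose_template(tmpl_a)        # composer output from the generator above
heat.stacks.create(stack_name='bay-1',
                   template=generated_yaml,
                   parameters={'master_flavor': 'm1.small'})  # parameter values illustrative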
* Option 3:
Remove the usage of ResourceGroup and manually manage a Heat stack for each bay node. For example, for a cluster with 5 nodes, Magnum is going to create 5 Heat stacks:
for node in nodes:
    fields = {
        'stack_name': ...,   # per-node stack name
        'parameters': {
            ...
        },
        ...
    }
    # one Heat stack per node; osc is Magnum's OpenStack client wrapper
    osc.heat().stacks.create(**fields)
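One consequence of this option, under the same assumptions as the snippet above: because each node owns its own stack, removing a specific node (for example, a failed one) becomes a targeted stack delete, which addresses the drawback noted for Option 1. A minimal sketch, with a hypothetical mapping from nodes to stack ids:
# Hypothetical sketch: node_to_stack_id is an illustrative lookup that Magnum
# would need to maintain (e.g. in its database).
failed_stack_id = node_to_stack_id['kube-minion-1']
osc.heat().stacks.delete(failed_stack_id)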
=======
I have a question for option 3. Does this option require users to specify a flavor for each node, or does it allow users to specify a node count per flavor, e.g., 10 nodes for flavor_x, 5 nodes for flavor_y, etc.?
Will it scale for large clusters, e.g., hundreds of nodes? Thanks! --wanyen