Documents Ganeti version 2.8
Ganeti was developed to run on internal, trusted systems. As such, the security model is all-or-nothing.
Up to version 2.3 all Ganeti code ran as root. Since version 2.4 it is possible to run all daemons except the node daemon and the monitoring daemon as non-root users by specifying user names and groups at build time. The node daemon continues to require root privileges to create logical volumes, DRBD devices, start instances, etc. Cluster commands can be run as root or by users in a group specified at build time. The monitoring daemon requires root privileges in order to be able to access and present information that is only available to root (such as the output of the xm command of Xen).
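As an illustration only, a build configured for split users might look like the sketch below; the exact configure options (assumed here to be --with-user-prefix and --with-group-prefix) vary between versions, so check ./configure --help before relying on them:

# Sketch only: build Ganeti so that most daemons run as dedicated
# non-root users; the option names are assumptions, verify them first.
./configure --with-user-prefix=gnt- --with-group-prefix=gnt-
make && make install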
For a host on which the Ganeti software has been installed, but not joined to a cluster, there are no changes to the system.
For a host that has been joined to the cluster, there are very important changes:
As you can see, as soon as a node is joined, it becomes equal to all other nodes in the cluster, and the security of the cluster is determined by the weakest node.
Note that only the SSH key will allow other machines to run any command on this node; the RPC method will run only:
It is therefore important to make sure that the contents of the /etc/ganeti/hooks and /etc/ganeti/restricted-commands directories are supervised and only trusted sources can populate them.
The restricted commands feature is new in Ganeti 2.7. It enables the administrator to run any command present in the /etc/ganeti/restricted-commands directory, if the feature has been enabled at build time, subject to the following restrictions:
Note that it’s not possible to list the contents of the directory, and there is an intentional delay when trying to execute a non-existing command (to slow-down dictionary attacks).
Since Ganeti itself does not need this functionality, and it is only provided as a way to help administer or recover nodes, it is a local site decision whether or not to enable the restricted commands feature.
By default, this feature is disabled.
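As an illustration only, installing and invoking a restricted command could look like the following sketch; the ownership and permission requirements and the gnt-node restricted-command invocation should be checked against your version’s documentation, and "collect-diagnostics" is a hypothetical helper script:

# Sketch only: install a root-owned, non-writable helper into the
# restricted commands directory, then run it on a node (no arguments
# can be passed to restricted commands).
install -o root -g root -m 0755 /usr/local/sbin/collect-diagnostics \
  /etc/ganeti/restricted-commands/collect-diagnostics
gnt-node restricted-command collect-diagnostics node1.example.com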
As mentioned above, there are multiple ways of communication between cluster nodes:
The SSH traffic is protected (after the initial login to a new node) by the cluster-wide shared SSH key.
RPC communication between the master and nodes is protected using SSL/TLS encryption. Both the client and the server must have the cluster-wide shared SSL/TLS certificate and verify it when establishing the connection by comparing fingerprints. We decided not to use a CA to simplify the key handling.
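Assuming the default data directory, the fingerprint of the shared certificate can be inspected on each node and compared manually, for example:

# /var/lib/ganeti/server.pem is the usual location of the cluster-wide
# certificate; adjust the path if your installation differs.
openssl x509 -noout -fingerprint -sha1 -in /var/lib/ganeti/server.pem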
The DRBD traffic is not protected by encryption, as DRBD does not support this. It’s therefore recommended to implement host-level firewalling or to use a separate range of IP addresses for the DRBD traffic (this is supported in Ganeti through the use of a secondary interface) which is not routed outside the cluster. DRBD connections are protected from erroneous connections to other machines (as may happen due to software issues), and from accepting connections from other machines, by using a shared secret, exchanged via RPC requests from the master to the nodes when configuring the device.
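As an illustration of such host-level firewalling, the sketch below accepts DRBD traffic only from the secondary network; 192.0.2.0/24 is a placeholder for that network and 11000:14999 is assumed to be the default DRBD port range used by Ganeti, so adjust both to your setup:

# Sketch only: accept DRBD replication traffic from the secondary
# network and drop it from everywhere else.
iptables -A INPUT -p tcp --dport 11000:14999 -s 192.0.2.0/24 -j ACCEPT
iptables -A INPUT -p tcp --dport 11000:14999 -j DROP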
Communication between the command-line tools and the master daemon is done via a UNIX socket, whose permissions are reset to 0660 after listening but before serving requests. This permission-based protection is documented and works on Linux, but is not portable; however, Ganeti doesn’t work on non-Linux systems at the moment.
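The socket permissions can be verified manually; the path below is the usual default and may differ on your installation:

# Expect mode 0660, owned by the master daemon user and the admin group.
stat -c '%a %U:%G %n' /var/run/ganeti/socket/ganeti-master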
The luxid daemon (automatically enabled if confd is enabled at build time) serves local (UNIX socket) queries about the run-time configuration. Answering these means talking to other cluster nodes, exactly as masterd does. See the notes for masterd regarding permission-based protection.
In Ganeti 2.8, the confd daemon (if enabled at build time) serves network-originated queries about parts of the static cluster configuration.
If Ganeti is not configured (at build time) to use separate users, confd has access to all Ganeti related files (including internal RPC SSL certificates). This makes it a bit more sensitive to bugs (a remote attacker could get direct access to the intra-cluster RPC), so to harden security it’s recommended to:
The monitoring daemon provides information about the status and the performance of the cluster over HTTP. It is currently unencrypted and non-authenticated, therefore it is strongly advised to set proper firewalling rules to prevent unwanted access.
The monitoring daemon runs as root, because it needs to be able to access privileged information (such as the state of the instances as provided by the Xen hypervisor). Nevertheless, the security implications are mitigated by the fact that the agent only provides reporting functionality, without the ability to actually modify the state of the cluster.
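A minimal firewalling sketch for the monitoring daemon is shown below; 1815 is assumed to be the default monitoring port and 192.0.2.0/24 stands in for a trusted management network, so verify both values for your installation:

# Sketch only: allow monitoring queries from the management network only.
iptables -A INPUT -p tcp --dport 1815 -s 192.0.2.0/24 -j ACCEPT
iptables -A INPUT -p tcp --dport 1815 -j DROP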
Starting with Ganeti 2.0, Remote API traffic is encrypted using SSL/TLS by default. It supports Basic authentication as per RFC 2617. Users can be granted different capabilities. Details can be found in the RAPI documentation.
Paths for the certificate, private key and CA files required for SSL/TLS are set at source configure time. Symlinks or command line parameters may be used to point to different files.
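As an illustration only, granting a RAPI user write capability and querying the cluster might look like the sketch below; the users file location, the rapi.pem path and the 5080 port are the usual defaults, and "jack"/"abc123" are placeholders:

# Sketch only: add a RAPI user with write capability, then query the
# cluster over HTTPS with Basic authentication.
echo 'jack abc123 write' >> /var/lib/ganeti/rapi/users
curl --cacert /var/lib/ganeti/rapi.pem -u jack:abc123 \
  https://cluster.example.com:5080/2/info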
To move instances between clusters, different clusters must be able to communicate with each other over a secure channel. Up to and including Ganeti 2.1, clusters were self-contained entities and had no knowledge of other clusters. Starting with Ganeti 2.2, clusters can exchange data if a token (an encryption certificate) has been exchanged by a trusted third party beforehand.
When running KVM instances under Ganeti, three security models are available: “none”, “user” and “pool”.
Under security model “none” instances run by default as root. This means that, if an instance is jailbroken, it will be able to own the host node, and thus the Ganeti cluster. This is the default model, and the only one available before Ganeti 2.1.2.
Under security model “user” an instance is run as the user specified by the hypervisor parameter “security_domain”. This makes it easy to run all instances as non-privileged users, and allows one to manually allocate specific users to specific instances or sets of instances. If the specified user doesn’t have extra permissions, a jailbroken instance will need some local privilege escalation before being able to take over the node and the cluster. It is still possible, though, for a jailbroken instance to affect other instances running under the same user.
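As an illustration only, switching a single instance to the “user” model might look like the following sketch; “gnt-inst-web” is a hypothetical local user that must exist on all nodes, and hypervisor parameter changes typically take effect at the next instance restart:

# Sketch only: run one instance as a dedicated unprivileged user.
gnt-instance modify -H security_model=user,security_domain=gnt-inst-web \
  instance1.example.com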
Under security model “pool” a global cluster-level uid pool is used to start each instance on the same node under a different user. The uids in the cluster pool can be set with gnt-cluster init and gnt-cluster modify, and must correspond to existing users on all nodes. Ganeti will then allocate one to each instance, as needed. This way a jailbroken instance won’t be able to affect any other. Since the users are handed out by Ganeti in a per-node randomized way, in this mode there is no way to make sure a particular instance is always run as a certain user. Use mode “user” for that.
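As an illustration only, defining a uid pool and switching KVM instances to the “pool” model might look like the sketch below; the 4000-4019 range is a placeholder and the corresponding users must already exist on every node:

# Sketch only: set the cluster-wide uid pool, then make "pool" the
# default security model for the KVM hypervisor.
gnt-cluster modify --uid-pool 4000-4019
gnt-cluster modify -H kvm:security_model=pool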
In addition to these precautions, if you want to avoid instances sending traffic on your node network, you can use an iptables rule such as:
iptables -A OUTPUT -m owner --uid-owner <uid>[-<uid>] -j LOG \
  --log-prefix "ganeti uid pool traffic: "
iptables -A OUTPUT -m owner --uid-owner <uid>[-<uid>] -j DROP
This won’t affect regular instance traffic (that comes out of the tapX allocated to the instance, and can be filtered or subject to appropriate policy routes) but will stop any user generated traffic that might come from a jailbroken instance.