ganeti-3.1: Cluster-based virtualization management software
Safe Haskell: Safe-Inferred
Language: Haskell2010

Ganeti.HTools.Node

Description

Module describing a node.

All updates are functional (copy-based) and return a new node with updated value.
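As a minimal sketch of this copy-based style (using a made-up two-field record, not the real Node type), a setter builds a fresh value via record-update syntax and never mutates its argument:

```haskell
-- Hypothetical two-field record standing in for the much larger Node.
data MiniNode = MiniNode
  { mnName    :: String
  , mnOffline :: Bool
  } deriving (Show, Eq)

-- Analogous in shape to 'setOffline': returns a new record; the
-- original MiniNode is left untouched.
miniSetOffline :: MiniNode -> Bool -> MiniNode
miniSetOffline node val = node { mnOffline = val }
```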

Synopsis

Documentation

data Node #

The node type.

Constructors

Node 

Fields

  • name :: String

    The node name

  • alias :: String

    The shortened name (for display purposes)

  • tMem :: Double

    Total memory (MiB) (state-of-world)

  • nMem :: Int

    Node memory (MiB) (state-of-record)

  • iMem :: Int

    Instance memory (MiB) (state-of-record)

  • fMem :: Int

    Free memory (MiB) (state-of-world)

  • fMemForth :: Int

Free memory (MiB) including forthcoming instances. TODO: use state-of-record calculations for forthcoming instances (see unallocatedMem)

  • xMem :: Int

    Unaccounted memory (MiB)

  • tDsk :: Double

    Total disk space (MiB)

  • fDsk :: Int

    Free disk space (MiB)

  • fDskForth :: Int

    Free disk space (MiB) including forthcoming instances

  • tCpu :: Double

    Total CPU count

  • tCpuSpeed :: Double

    Relative CPU speed

  • nCpu :: Int

    VCPUs used by the node OS

  • uCpu :: Int

    Used VCPU count

  • uCpuForth :: Int

    Used VCPU count including forthcoming instances

  • tSpindles :: Int

    Node spindles (spindle_count node parameter, or actual spindles, see note below)

  • fSpindles :: Int

    Free spindles (see note below)

  • fSpindlesForth :: Int

    Free spindles (see note below) including forthcoming instances

  • pList :: [Idx]

    List of primary instance indices

  • pListForth :: [Idx]

    List of primary instance indices including forthcoming instances

  • sList :: [Idx]

    List of secondary instance indices

  • sListForth :: [Idx]

    List of secondary instance indices including forthcoming instances

  • idx :: Ndx

    Internal index for book-keeping

  • peers :: PeerMap

    Pnode to instance mapping

  • failN1 :: Bool

    Whether the node has failed n1

  • failN1Forth :: Bool

    Whether the node has failed n1, including forthcoming instances

  • rMem :: Int

    Maximum memory needed for failover by primaries of this node

  • rMemForth :: Int

    Maximum memory needed for failover by primaries of this node, including forthcoming instances

  • pMem :: Double

    Percent of free memory

  • pMemForth :: Double

    Percent of free memory including forthcoming instances

  • pDsk :: Double

    Percent of free disk

  • pDskForth :: Double

    Percent of free disk including forthcoming instances

  • pRem :: Double

    Percent of reserved memory

  • pRemForth :: Double

    Percent of reserved memory including forthcoming instances

  • pCpu :: Double

    Ratio of virtual to physical CPUs

  • pCpuForth :: Double

    Ratio of virtual to physical CPUs including forthcoming instances

  • mDsk :: Double

    Minimum free disk ratio

  • loDsk :: Int

Low disk threshold, auto-computed from mDsk

  • hiCpu :: Int

High CPU threshold, auto-computed from mCpu

  • hiSpindles :: Double

    Limit auto-computed from policy spindle_ratio and the node spindle count (see note below)

  • instSpindles :: Double

    Spindles used by instances (see note below)

  • instSpindlesForth :: Double

    Spindles used by instances (see note below) including forthcoming instances

  • offline :: Bool

    Whether the node should not be used for allocations and skipped from score computations

  • isMaster :: Bool

    Whether the node is the master node

  • nTags :: [String]

    The node tags for this node

  • utilPool :: DynUtil

    Total utilisation capacity

  • utilLoad :: DynUtil

    Sum of instance utilisation

  • utilLoadForth :: DynUtil

    Sum of instance utilisation, including forthcoming instances

  • pTags :: TagMap

    Primary instance exclusion tags and their count, including forthcoming instances

  • group :: Gdx

    The node's group (index)

  • iPolicy :: IPolicy

    The instance policy (of the node's group)

  • exclStorage :: Bool

    Effective value of exclusive_storage

  • migTags :: Set String

Migration-relevant tags

  • rmigTags :: Set String

Migration tags the node is able to receive

  • locationTags :: Set String

Common-failure domains the node belongs to

  • locationScore :: Int

    Sum of instance location and desired location scores

  • instanceMap :: Map (String, String) Int

    Number of instances with each exclusion/location tags pair

  • hypervisor :: Maybe Hypervisor

    Active hypervisor on the node

Instances

Show Node # 

Defined in Ganeti.HTools.Node

Methods

showsPrec :: Int -> Node -> ShowS

show :: Node -> String

showList :: [Node] -> ShowS

Element Node # 

Defined in Ganeti.HTools.Node

Methods

nameOf :: Node -> String #

allNames :: Node -> [String] #

idxOf :: Node -> Int #

setAlias :: Node -> String -> Node #

computeAlias :: String -> Node -> Node #

setIdx :: Node -> Int -> Node #

Eq Node # 

Defined in Ganeti.HTools.Node

Methods

(==) :: Node -> Node -> Bool

(/=) :: Node -> Node -> Bool

type List = Container Node #

A simple name for a node map.

pCpuEff :: Node -> Double #

Derived parameter: ratio of virtual to physical CPUs, weighted by CPU speed.

pCpuEffForth :: Node -> Double #

Derived parameter: ratio of virtual to physical CPUs, weighted by CPU speed and taking forthcoming instances into account.

Constructor

create :: String -> Double -> Int -> Int -> Double -> Int -> Double -> Int -> Bool -> Int -> Int -> Gdx -> Bool -> Node #

Create a new node.

The index and the peers maps are empty, and will need to be updated later via the setIdx and buildPeers functions.

Finalization after data loading

buildPeers :: Node -> List -> Node #

Builds the peer map for a given node.

setIdx :: Node -> Ndx -> Node #

Changes the index.

This is used only during the building of the data structures.

setAlias :: Node -> String -> Node #

Changes the alias.

This is used only during the building of the data structures.

setOffline :: Node -> Bool -> Node #

Sets the offline attribute.

setPri :: Node -> Instance -> Node #

Assigns an instance to a node as primary and updates the used VCPU count, utilisation data, tags map and desired location score.

setSec :: Node -> Instance -> Node #

Assigns an instance to a node as secondary and updates disk utilisation.

setMaster :: Node -> Bool -> Node #

Sets the master attribute.

setNodeTags :: Node -> [String] -> Node #

Sets the node tags attribute.

setMdsk :: Node -> Double -> Node #

Sets the max disk usage ratio.

setMcpu :: Node -> Double -> Node #

Sets the max CPU usage ratio. This will update the node's ipolicy, losing sharing (but it should be a rarely performed operation).

setPolicy :: IPolicy -> Node -> Node #

Sets the policy.

setCpuSpeed :: Node -> Double -> Node #

Sets the CPU speed.

setMigrationTags :: Node -> Set String -> Node #

Sets the migration tags.

setRecvMigrationTags :: Node -> Set String -> Node #

Sets the migration tags a node is able to receive.

setLocationTags :: Node -> Set String -> Node #

Sets the location tags.

setHypervisor :: Node -> Hypervisor -> Node #

Sets the hypervisor attribute.

Tag maps

addTags :: Ord k => Map k Int -> [k] -> Map k Int #

Add multiple values.

delTags :: Ord k => Map k Int -> [k] -> Map k Int #

Remove multiple values.

rejectAddTags :: TagMap -> [String] -> Bool #

Check if we can add a list of tags to a tagmap.
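The three functions in this section can be sketched over Data.Map. These are hypothetical re-implementations mirroring the documented behaviour (a count per tag; rejection when a tag to add is already present), not the module's actual definitions:

```haskell
import qualified Data.Map as Map
import Data.List (foldl')

type TagCount = Map.Map String Int

-- Add multiple values: bump each tag's count, starting new tags at 1.
addTagsSketch :: TagCount -> [String] -> TagCount
addTagsSketch = foldl' (\m k -> Map.insertWith (+) k 1 m)

-- Remove multiple values: decrement counts, dropping tags that hit zero.
delTagsSketch :: TagCount -> [String] -> TagCount
delTagsSketch = foldl' (\m k -> Map.update dec k m)
  where dec c = if c <= 1 then Nothing else Just (c - 1)

-- Reject (return True) when any tag to add already exists in the map,
-- i.e. adding it would create an exclusion-tag conflict.
rejectAddTagsSketch :: TagCount -> [String] -> Bool
rejectAddTagsSketch m = any (`Map.member` m)
```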

Diagnostic commands

getPolicyHealth :: Node -> OpResult () #

For a node, diagnoses whether it conforms to all policies. The type is chosen to represent that of a no-op node operation.

Instance (re)location

removePri :: Node -> Instance -> Node #

Removes a primary instance.

removeSec :: Node -> Instance -> Node #

Removes a secondary instance.

addPri :: Node -> Instance -> OpResult Node #

Adds a primary instance (basic version).

addPriEx #

Arguments

:: Bool

Whether to override the N+1 and other soft checks, useful if we come from a worse status (e.g. offline). If this is True, forthcoming instances may exceed available Node resources.

-> Node

The target node

-> Instance

The instance to add

-> OpResult Node

The result of the operation, either the new version of the node or a failure mode

Adds a primary instance (extended version).

addSec :: Node -> Instance -> Ndx -> OpResult Node #

Adds a secondary instance (basic version).

addSecEx :: Bool -> Node -> Instance -> Ndx -> OpResult Node #

Adds a secondary instance (extended version).

addSecExEx :: Bool -> Bool -> Node -> Instance -> Ndx -> OpResult Node #

Adds a secondary instance (doubly extended version). The first parameter tells addSecExEx to ignore disks completely. There is only one legitimate use case for this: failing over a DRBD instance where the primary node is offline (and hence will become the secondary afterwards).

checkMigration :: Node -> Node -> OpResult () #

Predicate on whether migration is supported between two nodes.

Stats

availDisk :: Node -> Int #

Computes the amount of available disk on a given node.

availMem :: Node -> Int #

Computes the amount of available memory on a given node. Compared to unallocatedMem, this also takes into account memory reserved for secondary instances. NOTE: in case of memory overcommitment there would also be an additional soft limit based on the RAM size dedicated to instances and the sum of state-of-record instance sizes (iMem): (tMem - nMem) * overcommit_ratio - iMem
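The soft limit mentioned in the NOTE can be sketched as plain arithmetic. All parameters here are bare numbers rather than Node fields, and the overcommit ratio is a hypothetical input:

```haskell
-- Sketch of the overcommitment soft limit from the NOTE above:
-- (tMem - nMem) * overcommit_ratio - iMem.
-- Not a real Ganeti function; names and types are illustrative only.
softMemLimit :: Double  -- ^ total memory (tMem)
             -> Double  -- ^ node OS memory (nMem)
             -> Double  -- ^ hypothetical overcommit ratio
             -> Double  -- ^ state-of-record instance memory (iMem)
             -> Double
softMemLimit tMemV nMemV overcommitRatio iMemV =
  (tMemV - nMemV) * overcommitRatio - iMemV
```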

missingMem :: Node -> Int #

Computes the amount of missing memory on the node. NOTE: this formula uses free memory for the calculation, as opposed to the used_memory of the definition; that is why it is the inverse. Explanations for missing memory ((+) positive, (-) negative):

  • (+) instances are using more memory than state-of-record; on KVM this might be due to the overhead per qemu process, on Xen to manually upsized domains (xen mem-set)

  • (+) on KVM, non-qemu processes might be using more memory than what is reserved for the node (no isolation)

  • (-) on KVM, qemu processes allocate memory on demand, so an instance grows over its lifetime until it reaches state-of-record (plus overhead)

  • (-) on KVM, KSM might be active

  • (-) on Xen, manually downsized domains (xen mem-set)

unallocatedMem :: Node -> Int #

Computes the guaranteed free memory, that is, the minimum of what is reported by the node (available bytes) and our calculation based on instance sizes (our records), thus considering missing memory.

NOTE 1: during placement simulations the recorded memory changes as instances are added to or removed from the node, so we have to calculate missingMem (the correction) before altering state-of-record, and then use that correction to estimate state-of-world memory usage _after_ the placements are done, rather than doing min(record, world).

NOTE 2: this is still only an approximation on KVM. As we shuffle instances during the simulation we consider their state-of-record size, but in the real world the moves would shuffle parts of the missing memory as well. Unfortunately, as long as we don't have a more fine-grained model that can better explain missing memory (split down by root cause), we can't do better.

NOTE 3: this is a hard limit based on available bytes and our bookkeeping. In case of memory overcommitment, both recordedFreeMem and reportedFreeMem would be extended by the swap size on KVM or the balloon size on Xen (their nominal and reported values).
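The min(record, world) idea described above can be sketched with bare Ints standing in for the recorded (state-of-record) and reported (state-of-world) free memory of a Node. Both function names are hypothetical stand-ins, not the real missingMem and unallocatedMem:

```haskell
-- Missing memory: positive when the world reports less free memory
-- than our records predict (see the missingMem explanations above).
missingMemSketch :: Int -> Int -> Int
missingMemSketch recordedFree reportedFree = recordedFree - reportedFree

-- Guaranteed free memory: the recorded value corrected downward by
-- any (positive) missing memory; equivalent to min(record, world).
unallocatedMemSketch :: Int -> Int -> Int
unallocatedMemSketch recordedFree reportedFree =
  recordedFree - max 0 (missingMemSketch recordedFree reportedFree)
```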

recordedFreeMem :: Node -> Int #

Computes state-of-record free memory on the node. TODO: redefine this for memory overcommitment.

availCpu :: Node -> Int #

Computes the amount of available CPU capacity on a given node.

iDsk :: Node -> Int #

Computes the amount of used disk on a given node.

conflictingPrimaries :: Node -> Int #

Check how many primary instances have conflicting tags. The algorithm is to sum the counts of all tags, then subtract the size of the tag map (since each tag has at least one non-conflicting instance); this is equivalent to summing, for each tag, its count minus one.
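The counting rule described above (sum of counts minus the number of distinct tags) can be sketched directly over a map of tag counts; the function name is a hypothetical stand-in:

```haskell
import qualified Data.Map as Map

-- Each tag's first instance is non-conflicting, so conflicts are
-- sum of all counts minus the number of distinct tags.
conflictingPrimariesSketch :: Map.Map String Int -> Int
conflictingPrimariesSketch m = sum (Map.elems m) - Map.size m
```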

Generate OpCodes

genPowerOnOpCodes :: MonadFail m => [Node] -> m [OpCode] #

Generate OpCode for powering on a list of nodes

genPowerOffOpCodes :: MonadFail m => [Node] -> m [OpCode] #

Generate OpCodes for powering off a list of nodes

genAddTagsOpCode :: Node -> [String] -> OpCode #

Generate OpCodes for adding tags to a node

Formatting

defaultFields :: [String] #

Constant holding the fields we're displaying by default.

showHeader :: String -> (String, Bool) #

Returns the header and numeric property of a field.

showField #

Arguments

:: Node

Node which we're querying

-> String

Field name

-> String

Field value as string

Return a field for a given node.

list :: [String] -> Node -> [String] #

String converter for the node list functionality.

Misc stuff

type AssocList = [(Ndx, Node)] #

A simple name for the (Ndx, Node) association list.

noSecondary :: Ndx #

Constant node index for a non-moveable instance.

computeGroups :: [Node] -> [(Gdx, [Node])] #

Split a list of nodes into a list of (node group UUID, list of associated nodes).
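A sketch of the grouping behaviour, with (group index, name) pairs standing in for real Node values; the real computeGroups keys on each node's group field rather than a tuple component:

```haskell
import Data.Function (on)
import Data.List (groupBy, sortOn)

-- Sort by group index, then collect runs of equal indices into
-- (group, members) pairs, mirroring the shape of computeGroups.
computeGroupsSketch :: [(Int, String)] -> [(Int, [(Int, String)])]
computeGroupsSketch nodes =
  [ (fst (head g), g)
  | g <- groupBy ((==) `on` fst) (sortOn fst nodes) ]
```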

mkNodeGraph :: List -> List -> Maybe Graph #

Transform a Node + Instance list into a NodeGraph type. Returns Nothing if the node list is empty.

mkRebootNodeGraph :: List -> List -> List -> Maybe Graph #

Transform a Node list + Instance list into a NodeGraph with all reboot exclusions. This includes edges between nodes that are the primary nodes of instances sharing the same secondary node. Nodes not in the node list will not be part of the graph, but they are still considered for the edges arising from two instances having the same secondary node. Returns Nothing if the node list is empty.

haveExclStorage :: List -> Bool #

Is exclusive storage enabled on any node?