ganeti

Safe Haskell: None

Ganeti.HTools.Node

Description

Module describing a node.

All updates are functional (copy-based) and return a new node with the updated value.

Type declarations

data Node Source #

The node type.

Constructors

Node 

Fields

  • name :: String

    The node name

  • alias :: String

    The shortened name (for display purposes)

  • tMem :: Double

    Total memory (MiB) (state-of-world)

  • nMem :: Int

    Node memory (MiB) (state-of-record)

  • iMem :: Int

    Instance memory (MiB) (state-of-record)

  • fMem :: Int

    Free memory (MiB) (state-of-world)

  • fMemForth :: Int

    Free memory (MiB) including forthcoming instances TODO: Use state of record calculations for forthcoming instances (see unallocatedMem)

  • xMem :: Int

    Unaccounted memory (MiB)

  • tDsk :: Double

    Total disk space (MiB)

  • fDsk :: Int

    Free disk space (MiB)

  • fDskForth :: Int

    Free disk space (MiB) including forthcoming instances

  • tCpu :: Double

    Total CPU count

  • tCpuSpeed :: Double

    Relative CPU speed

  • nCpu :: Int

    VCPUs used by the node OS

  • uCpu :: Int

    Used VCPU count

  • uCpuForth :: Int

    Used VCPU count including forthcoming instances

  • tSpindles :: Int

    Node spindles (spindle_count node parameter, or actual spindles, see note below)

  • fSpindles :: Int

    Free spindles (see note below)

  • fSpindlesForth :: Int

    Free spindles (see note below) including forthcoming instances

  • pList :: [Idx]

    List of primary instance indices

  • pListForth :: [Idx]

    List of primary instance indices including forthcoming instances

  • sList :: [Idx]

    List of secondary instance indices

  • sListForth :: [Idx]

    List of secondary instance indices including forthcoming instances

  • idx :: Ndx

    Internal index for book-keeping

  • peers :: PeerMap

    Pnode to instance mapping

  • failN1 :: Bool

    Whether the node has failed the N+1 check

  • failN1Forth :: Bool

    Whether the node has failed the N+1 check, including forthcoming instances

  • rMem :: Int

    Maximum memory needed for failover by primaries of this node

  • rMemForth :: Int

    Maximum memory needed for failover by primaries of this node, including forthcoming instances

  • pMem :: Double

    Percent of free memory

  • pMemForth :: Double

    Percent of free memory including forthcoming instances

  • pDsk :: Double

    Percent of free disk

  • pDskForth :: Double

    Percent of free disk including forthcoming instances

  • pRem :: Double

    Percent of reserved memory

  • pRemForth :: Double

    Percent of reserved memory including forthcoming instances

  • pCpu :: Double

    Ratio of virtual to physical CPUs

  • pCpuForth :: Double

    Ratio of virtual to physical CPUs including forthcoming instances

  • mDsk :: Double

    Minimum free disk ratio

  • loDsk :: Int

    Low disk threshold, auto-computed from mDsk

  • hiCpu :: Int

    High CPU threshold, auto-computed from mCpu

  • hiSpindles :: Double

    Limit auto-computed from policy spindle_ratio and the node spindle count (see note below)

  • instSpindles :: Double

    Spindles used by instances (see note below)

  • instSpindlesForth :: Double

    Spindles used by instances (see note below) including forthcoming instances

  • offline :: Bool

    Whether the node should not be used for allocations and should be skipped in score computations

  • isMaster :: Bool

    Whether the node is the master node

  • nTags :: [String]

    The node tags for this node

  • utilPool :: DynUtil

    Total utilisation capacity

  • utilLoad :: DynUtil

    Sum of instance utilisation

  • utilLoadForth :: DynUtil

    Sum of instance utilisation, including forthcoming instances

  • pTags :: TagMap

    Primary instance exclusion tags and their count, including forthcoming instances

  • group :: Gdx

    The node's group (index)

  • iPolicy :: IPolicy

    The instance policy (of the node's group)

  • exclStorage :: Bool

    Effective value of exclusive_storage

  • migTags :: Set String

    Migration-relevant tags

  • rmigTags :: Set String

    Migration tags the node is able to receive

  • locationTags :: Set String

    Common-failure domains the node belongs to

  • locationScore :: Int

    Sum of instance location and desired location scores

  • instanceMap :: Map (String, String) Int

    Number of instances with each exclusion/location tag pair

  • hypervisor :: Maybe Hypervisor

    Active hypervisor on the node

Instances
Eq Node # 
Instance details

Defined in Ganeti.HTools.Node

Methods

(==) :: Node -> Node -> Bool

(/=) :: Node -> Node -> Bool

Show Node # 
Instance details

Defined in Ganeti.HTools.Node

Methods

showsPrec :: Int -> Node -> ShowS

show :: Node -> String

showList :: [Node] -> ShowS

Element Node # 
Instance details

Defined in Ganeti.HTools.Node

Methods

nameOf :: Node -> String Source #

allNames :: Node -> [String] Source #

idxOf :: Node -> Int Source #

setAlias :: Node -> String -> Node Source #

computeAlias :: String -> Node -> Node Source #

setIdx :: Node -> Int -> Node Source #

Arbitrary Node 
Instance details

Defined in Test.Ganeti.HTools.Node

Methods

arbitrary :: Gen Node

shrink :: Node -> [Node]

pCpuEff :: Node -> Double Source #

Derived parameter: ratio of virtual to physical CPUs, weighted by CPU speed.

pCpuEffForth :: Node -> Double Source #

Derived parameter: ratio of virtual to physical CPUs, weighted by CPU speed and taking forthcoming instances into account.

type AssocList = [(Ndx, Node)] Source #

A simple name for the (Ndx, Node) association list.

type List = Container Node Source #

A simple name for a node map.

noSecondary :: Ndx Source #

Constant node index for a non-moveable instance.

Helper functions

addTags :: Ord k => Map k Int -> [k] -> Map k Int Source #

Add multiple values.

delTags :: Ord k => Map k Int -> [k] -> Map k Int Source #

Remove multiple values.
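
A minimal sketch of how such a tag-count map can be maintained with the standard Data.Map API (an illustration, not necessarily the module's exact code):

  import qualified Data.Map as Map

  -- Raise the count of each given key by one, inserting missing keys at 1.
  addTagsSketch :: Ord k => Map.Map k Int -> [k] -> Map.Map k Int
  addTagsSketch = foldl (\m k -> Map.insertWith (+) k 1 m)

  -- Lower the count of each given key, dropping keys whose count reaches zero.
  delTagsSketch :: Ord k => Map.Map k Int -> [k] -> Map.Map k Int
  delTagsSketch = foldl (\m k -> Map.update (\c -> if c > 1 then Just (c - 1) else Nothing) k m)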

rejectAddTags :: TagMap -> [String] -> Bool Source #

Check if we can add a list of tags to a tagmap.

conflictingPrimaries :: Node -> Int Source #

Check how many primary instances have conflicting tags. The algorithm is to sum the counts of all tags and then subtract the size of the tag map (since each tag has at least one non-conflicting instance); this is equivalent to summing, for each tag, its count minus one.
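
Expressed over the pTags field, that computation amounts to the following sketch (assuming TagMap is a Data.Map from tag to count; illustration only):

  import qualified Data.Map as Map

  -- Sum of all counts minus the number of distinct tags, i.e. one "conflict"
  -- for every primary beyond the first one carrying a given exclusion tag.
  conflictingPrimariesSketch :: Node -> Int
  conflictingPrimariesSketch node = sum (Map.elems tags) - Map.size tags
    where tags = pTags node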

haveExclStorage :: List -> Bool Source #

Is exclusive storage enabled on any node?

Initialization functions

create :: String -> Double -> Int -> Int -> Double -> Int -> Double -> Int -> Bool -> Int -> Int -> Gdx -> Bool -> Node Source #

Create a new node.

The index and the peers map are empty, and will need to be updated later via the setIdx and buildPeers functions.
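
A hedged sketch of the construction workflow; the mapping of the positional arguments to fields below is an assumption read off the field list above, not an authoritative account of the signature:

  -- Assumed argument order: name, tMem, nMem, fMem, tDsk, fDsk, tCpu, nCpu,
  -- offline, tSpindles, fSpindles, group index, exclusive storage.
  mkNodeSketch :: Gdx -> Node
  mkNodeSketch grp =
    let n0 = create "node1.example.com"
                    32768     -- total memory (MiB)
                    1024      -- node (OS) memory (MiB)
                    30720     -- free memory (MiB)
                    1048576   -- total disk (MiB)
                    524288    -- free disk (MiB)
                    16        -- total CPUs
                    1         -- CPUs used by the node OS
                    False     -- offline?
                    4         -- total spindles
                    4         -- free spindles
                    grp       -- node group index
                    False     -- exclusive_storage
    in setIdx n0 0            -- buildPeers would follow once the full node list exists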

setIdx :: Node -> Ndx -> Node Source #

Changes the index.

This is used only during the building of the data structures.

setAlias :: Node -> String -> Node Source #

Changes the alias.

This is used only during the building of the data structures.

setOffline :: Node -> Bool -> Node Source #

Sets the offline attribute.

setMaster :: Node -> Bool -> Node Source #

Sets the master attribute.

setNodeTags :: Node -> [String] -> Node Source #

Sets the node tags attribute.

setMigrationTags :: Node -> Set String -> Node Source #

Sets the migration tags.

setRecvMigrationTags :: Node -> Set String -> Node Source #

Sets the migration tags a node is able to receive.

setLocationTags :: Node -> Set String -> Node Source #

Sets the location tags.

setHypervisor :: Node -> Hypervisor -> Node Source #

Sets the hypervisor attribute.

setMdsk :: Node -> Double -> Node Source #

Sets the max disk usage ratio.

setMcpu :: Node -> Double -> Node Source #

Sets the max CPU usage ratio. This will update the node's ipolicy, losing sharing (but it should be a rarely performed operation).

setPolicy :: IPolicy -> Node -> Node Source #

Sets the policy.

buildPeers :: Node -> List -> Node Source #

Builds the peer map for a given node.

setPri :: Node -> Instance -> Node Source #

Assigns an instance to a node as primary and updates the used VCPU count, utilisation data, tags map and desired location score.

setSec :: Node -> Instance -> Node Source #

Assigns an instance to a node as secondary and updates disk utilisation.

Diagnostic functions

getPolicyHealth :: Node -> OpResult () Source #

For a node, diagnose whether it conforms to all policies. The type is chosen to match that of a no-op node operation.

Update functions

setCpuSpeed :: Node -> Double -> Node Source #

Sets the CPU speed.

removePri :: Node -> Instance -> Node Source #

Removes a primary instance.

removeSec :: Node -> Instance -> Node Source #

Removes a secondary instance.

addPri :: Node -> Instance -> OpResult Node Source #

Adds a primary instance (basic version).

addPriEx Source #

Arguments

:: Bool

Whether to override the N+1 and other soft checks, useful if we come from a worse status (e.g. offline). If this is True, forthcoming instances may exceed available Node resources.

-> Node

The target node

-> Instance

The instance to add

-> OpResult Node

The result of the operation, either the new version of the node or a failure mode

Adds a primary instance (extended version).
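
A usage sketch, assuming OpResult exposes the usual Ok/Bad constructors from Ganeti.BasicTypes:

  -- Attempt a strict placement and keep the original node on rejection; the
  -- Boolean False means the N+1 and other soft checks are not overridden.
  tryPlacePrimary :: Node -> Instance -> Node
  tryPlacePrimary node inst =
    case addPriEx False node inst of
      Ok node' -> node'   -- placement accepted, resources and peers updated
      Bad _    -> node    -- rejected, e.g. not enough memory, disk or N+1 headroom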

addSec :: Node -> Instance -> Ndx -> OpResult Node Source #

Adds a secondary instance (basic version).

addSecEx :: Bool -> Node -> Instance -> Ndx -> OpResult Node Source #

Adds a secondary instance (extended version).

addSecExEx :: Bool -> Bool -> Node -> Instance -> Ndx -> OpResult Node Source #

Adds a secondary instance (doubly extended version). The first parameter tells addSecExEx to ignore disks completely. There is only one legitimate use case for this: failing over a DRBD instance where the primary node is offline (and will hence become the secondary afterwards).

checkMigration :: Node -> Node -> OpResult () Source #

Predicate on whether migration is supported between two nodes.

Stats functions

availDisk :: Node -> Int Source #

Computes the amount of available disk on a given node.

iDsk :: Node -> Int Source #

Computes the amount of used disk on a given node.
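
Hedged sketches of the two disk quantities, ignoring the corner case where no minimum-disk limit is configured (illustrative only, not the module's verbatim code):

  -- Used disk: total capacity (rounded down) minus what is still free.
  iDskSketch :: Node -> Int
  iDskSketch n = truncate (tDsk n) - fDsk n

  -- Allocatable disk: free space above the mDsk-derived reserve, never negative.
  availDiskSketch :: Node -> Int
  availDiskSketch n = max 0 (fDsk n - loDsk n)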

recordedFreeMem :: Node -> Int Source #

Computes the state-of-record free memory on the node. TODO: Redefine this for memory overcommitment.

missingMem :: Node -> Int Source #

Computes the amount of missing memory on the node.

NOTE: This formula uses free memory for the calculation, as opposed to used_memory in the definition; that is why it is the inverse. Explanations for missing memory ((+) positive, (-) negative):

  • (+) instances are using more memory than state-of-record: on KVM this might be due to the overhead per qemu process; on Xen, due to manually upsized domains (xen mem-set)

  • (+) on KVM, non-qemu processes might be using more memory than what is reserved for the node (no isolation)

  • (-) on KVM, qemu processes allocate memory on demand, so an instance grows over its lifetime until it reaches state-of-record (plus overhead)

  • (-) on KVM, KSM might be active

  • (-) on Xen, manually downsized domains (xen mem-set)
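
Expressed over the fields above, the "inverse" formulation mentioned in the note amounts to something like this sketch (an illustration, not the module's verbatim code):

  -- Positive when the node reports less free memory than our records predict,
  -- negative when it reports more (e.g. KSM, lazy qemu allocation).
  missingMemSketch :: Node -> Int
  missingMemSketch n = recordedFreeMem n - fMem n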

unallocatedMem :: Node -> Int Source #

Computes the guaranteed free memory, that is, the minimum of what is reported by the node (available bytes) and our calculation based on instance sizes (our records), thus taking missing memory into account.

NOTE 1: During placement simulations the recorded memory changes as instances are added to or removed from the node, so we have to calculate the missingMem correction before altering the state-of-record and then use that correction to estimate state-of-world memory usage after the placements are done, rather than taking min(record, world) directly.

NOTE 2: This is still only an approximation on KVM. As we shuffle instances during the simulation we consider their state-of-record size, but in the real world the moves would shuffle parts of the missing memory as well. As long as we don't have a more fine-grained model that can better explain missing memory (broken down by root cause), we cannot do better.

NOTE 3: This is a hard limit based on available bytes and our bookkeeping. In case of memory overcommitment, both recordedFreeMem and reportedFreeMem would be extended by the swap size on KVM or the balloon size on Xen (their nominal and reported values).
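
A sketch of the correction approach from NOTE 1; all names here are hypothetical and only illustrate the bookkeeping order:

  -- Compute the correction once from the initial data, then apply it to the
  -- recorded value after simulated moves instead of re-reading the world state.
  estimateWorldFree :: Int  -- ^ recorded free memory before the simulation
                    -> Int  -- ^ reported (world) free memory before the simulation
                    -> Int  -- ^ recorded free memory after simulated placements
                    -> Int
  estimateWorldFree recordedBefore reportedBefore recordedAfter =
    let correction = recordedBefore - reportedBefore  -- the missing memory
    in recordedAfter - correction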

availMem :: Node -> Int Source #

Computes the amount of available memory on a given node. Compared to unallocatedMem, this also takes into account the memory reserved for secondary instances.

NOTE: In case of memory overcommitment, there would also be an additional soft limit based on the RAM dedicated to instances and the sum of state-of-record instance sizes (iMem): (tMem - nMem) * overcommit_ratio - iMem
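
The soft limit from the note, written out as a hypothetical helper (the overcommit ratio is not a Node field, so it is passed in explicitly):

  -- Hypothetical encoding of (tMem - nMem) * overcommit_ratio - iMem.
  softMemLimitSketch :: Double -> Node -> Int
  softMemLimitSketch overcommitRatio n =
    truncate ((tMem n - fromIntegral (nMem n)) * overcommitRatio) - iMem n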

availCpu :: Node -> Int Source #

Computes the number of available virtual CPUs on a given node.

Node graph functions

Making of a Graph from a node/instance list

mkNodeGraph :: List -> List -> Maybe Graph Source #

Transform a Node + Instance list into a NodeGraph type. Returns Nothing if the node list is empty.

mkRebootNodeGraph :: List -> List -> List -> Maybe Graph Source #

Transform node and instance lists into a NodeGraph with all reboot exclusions. This includes edges between nodes that are the primary nodes of instances sharing the same secondary node. Nodes not in the node list will not be part of the graph, but they are still considered for the edges arising from two instances having the same secondary node. Returns Nothing if the node list is empty.
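
A hedged usage sketch for the graph builders (the node-list-then-instance-list argument order of mkNodeGraph is assumed from its description above):

  -- Build the node graph, handling the empty-node-list case explicitly;
  -- mkRebootNodeGraph is consumed the same way.
  nodeGraphOr :: Graph -> List -> List -> Graph
  nodeGraphOr fallback nl il =
    case mkNodeGraph nl il of
      Just g  -> g
      Nothing -> fallback   -- empty node list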

Display functions

showField Source #

Arguments

:: Node

Node which we're querying

-> String

Field name

-> String

Field value as string

Return a field for a given node.

showHeader :: String -> (String, Bool) Source #

Returns the header and numeric property of a field.

list :: [String] -> Node -> [String] Source #

String converter for the node list functionality.

genPowerOnOpCodes :: MonadFail m => [Node] -> m [OpCode] Source #

Generates OpCodes for powering on a list of nodes.

genPowerOffOpCodes :: MonadFail m => [Node] -> m [OpCode] Source #

Generates OpCodes for powering off a list of nodes.

genAddTagsOpCode :: Node -> [String] -> OpCode Source #

Generates an OpCode for adding tags to a node.

defaultFields :: [String] Source #

Constant holding the fields we're displaying by default.
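
Roughly how showField, defaultFields and list fit together, as a sketch:

  -- One display row for a node: query each default field in turn
  -- (essentially what list defaultFields n produces).
  nodeRow :: Node -> [String]
  nodeRow n = map (showField n) defaultFields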

computeGroups :: [Node] -> [(Gdx, [Node])] Source #

Split a list of nodes into a list of (node group index, list of associated nodes) pairs.
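
A sketch of that grouping using standard list functions (illustration only; the real function may differ in the ordering of the resulting groups):

  import Data.Function (on)
  import Data.List (groupBy, sortBy)
  import Data.Ord (comparing)

  -- Sort by group index, chunk runs of equal indices, and label each chunk.
  computeGroupsSketch :: [Node] -> [(Gdx, [Node])]
  computeGroupsSketch =
    map (\ns -> (group (head ns), ns))
      . groupBy ((==) `on` group)
      . sortBy (comparing group)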