ganeti-3.1: Cluster-based virtualization management software
Safe Haskell: Safe-Inferred
Language: Haskell2010

Ganeti.HTools.Cluster

Description

Implementation of cluster-wide logic.

This module holds all pure cluster-logic; I/O related functionality goes into the Main module for the individual binaries.

Synopsis

Types

data AllocDetails #

Allocation details for an instance, specifying the required number of nodes and an optional group (name) to allocate to.

Constructors

AllocDetails Int (Maybe String) 

Instances

Instances details
Show AllocDetails # 
Instance details

Defined in Ganeti.HTools.Cluster

Methods

showsPrec :: Int -> AllocDetails -> ShowS

show :: AllocDetails -> String

showList :: [AllocDetails] -> ShowS
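A minimal sketch of how a value of this shape might be built and inspected, using a stand-in definition with the constructor signature shown above (the group name and counts are illustrative, not from Ganeti):

```haskell
-- A simplified stand-in for AllocDetails: a required node count plus
-- an optional target group name.  The values below are illustrative.
data AllocDetails = AllocDetails Int (Maybe String)
  deriving (Show, Eq)

-- A two-node (mirrored) allocation pinned to a named group.
pinned :: AllocDetails
pinned = AllocDetails 2 (Just "group-a")

-- A single-node allocation with no group preference.
anyGroup :: AllocDetails
anyGroup = AllocDetails 1 Nothing
```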

data Table #

The complete state for the balancing solution.

Constructors

Table List List Score [Placement] 

Instances

Instances details
Show Table # 
Instance details

Defined in Ganeti.HTools.Cluster

Methods

showsPrec :: Int -> Table -> ShowS

show :: Table -> String

showList :: [Table] -> ShowS

data CStats #

Cluster statistics data type.

Constructors

CStats 

Fields

  • csFmem :: Integer

    Cluster free mem

  • csFdsk :: Integer

    Cluster free disk

  • csFspn :: Integer

    Cluster free spindles

  • csAmem :: Integer

    Cluster allocatable mem

  • csAdsk :: Integer

    Cluster allocatable disk

  • csAcpu :: Integer

    Cluster allocatable cpus

  • csMmem :: Integer

    Max node allocatable mem

  • csMdsk :: Integer

    Max node allocatable disk

  • csMcpu :: Integer

    Max node allocatable cpu

  • csImem :: Integer

    Instance used mem

  • csIdsk :: Integer

    Instance used disk

  • csIspn :: Integer

    Instance used spindles

  • csIcpu :: Integer

    Instance used cpu

  • csTmem :: Double

    Cluster total mem

  • csTdsk :: Double

    Cluster total disk

  • csTspn :: Double

    Cluster total spindles

  • csTcpu :: Double

    Cluster total cpus

  • csVcpu :: Integer

    Cluster total virtual cpus

  • csNcpu :: Double

Equivalent to csIcpu but in terms of physical CPUs, i.e. normalised used physical CPUs

  • csXmem :: Integer

Unaccounted-for mem

  • csNmem :: Integer

    Node own memory

  • csScore :: Score

    The cluster score

  • csNinst :: Int

    The total number of instances

Instances

Instances details
Show CStats # 
Instance details

Defined in Ganeti.HTools.Cluster

Methods

showsPrec :: Int -> CStats -> ShowS

show :: CStats -> String

showList :: [CStats] -> ShowS

type AllocNodes = Either [Ndx] [(Ndx, [Ndx])] #

A type denoting the valid allocation mode/pairs.

For a one-node allocation, this will be a Left [Ndx], whereas for a two-node allocation, this will be a Right [(Ndx, [Ndx])]. In the latter case, the list is basically an association list, grouped by primary node and holding the potential secondary nodes in the sub-list.
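A sketch of the two-node case, with a hypothetical helper that builds the association list from a set of node indices (a real implementation would additionally filter by group membership and allocability):

```haskell
-- Node indices, as in Ganeti's Ndx (here plain Int).
type Ndx = Int

-- One-node mode: Left [Ndx]; two-node mode: Right, an association
-- list from each primary node to its candidate secondary nodes.
type AllocNodes = Either [Ndx] [(Ndx, [Ndx])]

-- Build the two-node pairs: every node as primary, all other nodes
-- as potential secondaries.  This is a simplification for
-- illustration only.
mkAllocPairs :: [Ndx] -> AllocNodes
mkAllocPairs nodes =
  Right [ (p, [ s | s <- nodes, s /= p ]) | p <- nodes ]
```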

type AllocResult = (FailStats, List, List, [Instance], [CStats]) #

Allocation results, as used in iterateAlloc and tieredAlloc.

type AllocMethod #

Arguments

 = List

Node list

-> List

Instance list

-> Maybe Int

Optional allocation limit

-> Instance

Instance spec for allocation

-> AllocNodes

Which nodes we should allocate on

-> [Instance]

Allocated instances

-> [CStats]

Running cluster stats

-> Result AllocResult

Allocation result

A simple type for allocation functions.

type GenericAllocSolutionList a = [(Instance, GenericAllocSolution a)] #

Type alias for easier handling.

Generic functions

totalResources :: List -> CStats #

Compute the total free disk and memory in the cluster.

computeAllocationDelta :: CStats -> CStats -> AllocStats #

Compute the delta between two cluster states.

This is used when doing allocations, to better understand the available cluster resources. The return value is a triple of the currently used values, the delta that was allocated, and what was left unallocated.
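The used/allocated/unallocated split can be illustrated for a single resource with plain integers; Ganeti's AllocStats carries this shape per resource, and the function name and single-resource simplification below are assumptions for illustration:

```haskell
-- For one resource (say memory, in MiB): given the total capacity,
-- the amount used before allocation, and the amount used after, the
-- delta splits into (already used, newly allocated, left
-- unallocated).  A toy model of the triple described above.
allocDelta :: Integer -> Integer -> Integer -> (Integer, Integer, Integer)
allocDelta total usedBefore usedAfter =
  (usedBefore, usedAfter - usedBefore, total - usedAfter)
```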

hasRequiredNetworks :: Group -> Instance -> Bool #

Determines if a group is connected to the networks required by the instance.

First phase functions

computeBadItems :: List -> List -> ([Node], [Instance]) #

Computes the pair of bad nodes and instances.

The bad node list is computed via a simple verifyN1 check, and the bad instance list is the list of primary and secondary instances of those nodes.
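The shape of this computation can be sketched with stand-in types: run a per-node check, then collect the instances hosted on the failing nodes. The record fields and the boolean N+1 flag are simplifications, not Ganeti's real node representation:

```haskell
import Data.List (nub)

-- Minimal stand-ins for Ganeti's node records; 'passesN1' replaces
-- the real verifyN1 check, and instances are just names here.
data Node = Node { nodeName :: String
                 , passesN1 :: Bool
                 , hosted   :: [String] }

-- Collect the nodes failing the N+1 check and the (de-duplicated)
-- instances hosted on them, mirroring the shape of computeBadItems.
badItems :: [Node] -> ([String], [String])
badItems nodes =
  let bad = filter (not . passesN1) nodes
  in (map nodeName bad, nub (concatMap hosted bad))
```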

Second phase functions

printSolutionLine #

Arguments

:: List

The node list

-> List

The instance list

-> Int

Maximum node name length

-> Int

Maximum instance name length

-> Placement

The current placement

-> Int

The index of the placement in the solution

-> (String, [String]) 

Converts a placement to string format.

formatCmds :: [JobSet] -> String #

Given a list of commands, prefix them with gnt-instance and also beautify the display a little.
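A sketch of the prefixing step, treating a job as a list of argument strings and separating job sets with a comment line (the exact output layout of the real function is not reproduced here):

```haskell
-- Each job is a command's argument list; each job set groups jobs
-- that may run together.  Prefix every command with "gnt-instance"
-- and label the sets.  A simplified sketch of formatCmds.
prefixCmds :: [[[String]]] -> String
prefixCmds jobSets =
  unlines
    [ line
    | (n, jobs) <- zip [1 :: Int ..] jobSets
    , line <- ("# job set " ++ show n)
              : [ "  gnt-instance " ++ unwords args | args <- jobs ]
    ]
```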

involvedNodes #

Arguments

:: List

Instance list, used for retrieving the instance from its index; note that this must be the original instance list, so that we can retrieve the old nodes

-> Placement

The placement we're investigating, containing the new nodes and instance index

-> [Ndx]

Resulting list of node indices

Return the instance and involved nodes in an instance move.

Note that the output list length can vary, and is not required nor guaranteed to be of any specific length.
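The varying length follows from de-duplication: the old and new primary/secondary nodes may overlap. A sketch with plain indices (the tuple encoding of a placement is an assumption):

```haskell
import Data.List (nub)

-- Nodes touched by a move: the instance's old primary/secondary plus
-- the new primary/secondary from the placement, de-duplicated.
-- A simplified stand-in for involvedNodes.
involved :: (Int, Maybe Int)  -- old (primary, secondary)
         -> (Int, Maybe Int)  -- new (primary, secondary)
         -> [Int]
involved (op, os) (np, ns) =
  nub ([op, np] ++ maybe [] pure os ++ maybe [] pure ns)
```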

getMoves :: (Table, Table) -> [MoveJob] #

From two adjacent cluster tables, get the list of moves that transitions one into the other.

splitJobs :: [MoveJob] -> [JobSet] #

Break a list of moves into independent groups. Note that this will reverse the order of jobs.
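Independence here means that two moves touch disjoint node sets. A greedy sketch of the grouping (types are stand-ins, and like the real function it does not preserve the input order):

```haskell
import Data.List (intersect)

-- A move names an instance and the node indices it touches; two
-- moves conflict when their node sets overlap.
type Move = (String, [Int])

-- Grow a group while the next move is disjoint from everything
-- already in it; start a new group on conflict.  Every group ends up
-- pairwise node-disjoint.  A simplification of splitJobs.
splitIndependent :: [Move] -> [[Move]]
splitIndependent = foldr step []
  where
    step m [] = [[m]]
    step m (grp:grps)
      | null (snd m `intersect` concatMap snd grp) = (m : grp) : grps
      | otherwise                                  = [m] : grp : grps
```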

Display functions

printNodes :: List -> [String] -> String #

Print the node list.

printInsts :: List -> List -> String #

Print the instance list.

Balancing functions

doNextBalance #

Arguments

:: Table

The starting table

-> Int

Remaining length

-> Score

Score at which to stop

-> Bool

Whether we may continue balancing

Check if we are allowed to go deeper in the balancing.
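The check reduces to two conditions, as suggested by the argument docs: rounds remain (with a negative limit conventionally meaning "unlimited") and the score is still above the stop threshold. This is a sketch of the semantics, not Ganeti's exact code:

```haskell
-- Continue balancing while moves remain under the limit (negative =
-- unlimited) and the cluster score is above the stopping score.
shouldContinue :: Int     -- moves made so far
               -> Int     -- maximum moves (negative = unlimited)
               -> Double  -- current cluster score
               -> Double  -- score at which to stop
               -> Bool
shouldContinue done limit score stopScore =
  (limit < 0 || done < limit) && score > stopScore
```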

tryBalance #

Arguments

:: AlgorithmOptions

Algorithmic options for balancing

-> Table

The starting table

-> Maybe Table

The resulting table and commands

Run a balance move.

iMoveToJob #

Arguments

:: List

The node list; only used for node names, so any version is good (before or after the operation)

-> List

The instance list; also used for names only

-> Idx

The index of the instance being moved

-> IMove

The actual move to be described

-> [OpCode]

The list of opcodes equivalent to the given move

Convert a placement into a list of OpCodes (basically a job).

IAllocator functions

genAllocNodes #

Arguments

:: AlgorithmOptions

algorithmic options to honor

-> List

Group list

-> List

The node map

-> Int

The number of nodes required

-> Bool

Whether or not to drop unallocable nodes

-> Result AllocNodes

The (monadic) result

Generate the valid node allocation singles or pairs for a new instance.

tryAlloc #

Arguments

:: MonadFail m 
=> AlgorithmOptions 
-> List

The node list

-> List

The instance list

-> Instance

The instance to allocate

-> AllocNodes

The allocation targets

-> m AllocSolution

Possible solution list

Try to allocate an instance on the cluster.

tryGroupAlloc #

Arguments

:: AlgorithmOptions 
-> List

The group list

-> List

The node list

-> List

The instance list

-> String

The allocation group (name)

-> Instance

The instance to allocate

-> Int

Required number of nodes

-> Result AllocSolution

Solution

Try to allocate an instance to a group.

tryMGAlloc #

Arguments

:: AlgorithmOptions 
-> List

The group list

-> List

The node list

-> List

The instance list

-> Instance

The instance to allocate

-> Int

Required number of nodes

-> Result AllocSolution

Possible solution list

Try to allocate an instance on a multi-group cluster.

filterMGResults :: [(Group, Result (GenericAllocSolution a))] -> [(Group, GenericAllocSolution a)] #

From a list of possibly bad and possibly empty solutions, filter only the groups with a valid result. Note that the result will be reversed compared to the original list.

sortMGResults :: Ord a => [(Group, GenericAllocSolution a)] -> [(Group, GenericAllocSolution a)] #

Sort multigroup results based on policy and score.
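The filter-then-sort pipeline can be sketched with simplified stand-ins: a result is either an error or a score, and a group carries an allocation policy that takes precedence over the score. The Policy type and ordering below are assumptions for illustration:

```haskell
import Data.List (sortBy)
import Data.Ord (comparing)

-- Stand-ins: groups carry an allocation policy (earlier constructor
-- = preferred), and a result is Left on failure or Right score.
data Policy = Preferred | LastResort deriving (Eq, Ord, Show)
type Group = (String, Policy)

-- Keep only the groups whose allocation produced a valid result,
-- mirroring filterMGResults.
filterOk :: [(Group, Either String Double)] -> [(Group, Double)]
filterOk rs = [ (g, s) | (g, Right s) <- rs ]

-- Order viable groups by policy first, then by score (lower is
-- better), mirroring sortMGResults.
sortByPolicyScore :: [(Group, Double)] -> [(Group, Double)]
sortByPolicyScore = sortBy (comparing (\((_, p), s) -> (p, s)))
```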

tryChangeGroup #

Arguments

:: AlgorithmOptions 
-> List

The cluster groups

-> List

The node list (cluster-wide)

-> List

Instance list (cluster-wide)

-> [Gdx]

Target groups; if empty, any groups not being evacuated

-> [Idx]

List of instance (indices) to be evacuated

-> Result (List, List, EvacSolution) 

Change-group IAllocator mode main function.

This is very similar to tryNodeEvac; the only difference is that we don't choose the current instance group as the target group, but instead:

  1. at the start of the function, we compute the target groups: either no groups were passed in, in which case we choose all groups out of which we don't evacuate instances, or some groups were passed in, in which case we use those
  2. for each instance, we use findBestAllocGroup to choose the best group to hold the instance, and then we do what tryNodeEvac does, except for this group instead of the current instance group.

Note that the correct behaviour of this function relies on nodeEvacInstance being able to correctly perform both intra-group and inter-group moves when passed the ChangeAll mode.

allocList #

Arguments

:: AlgorithmOptions 
-> List

The group list

-> List

The node list

-> List

The instance list

-> [(Instance, AllocDetails)]

The instances to allocate

-> AllocSolutionList

Possible solution list

-> Result (List, List, AllocSolutionList)

The final solution list

Try to allocate a list of instances on a multi-group cluster.

Allocation functions

iterateAlloc :: AlgorithmOptions -> AllocMethod #

A sped-up version of iterateAllocSmallStep.

tieredAlloc :: AlgorithmOptions -> AllocMethod #

Tiered allocation method.

This places instances on the cluster, and decreases the spec until we can allocate again. The result will be a list of decreasing instance specs.
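The tier-and-shrink loop can be modelled with a single free-resource pool and a decreasing list of instance sizes; this is a toy model of the loop's structure, not the real cluster-aware allocation:

```haskell
-- Toy model of tiered allocation: each tier places as many instances
-- of its size as fit into the remaining capacity, then falls through
-- to the next (smaller) size, recording (size, count) per tier.
tiered :: Integer                -- free capacity
       -> [Integer]              -- decreasing instance sizes to try
       -> [(Integer, Integer)]   -- (size, count allocated) per tier
tiered _ [] = []
tiered free (s:ss)
  | s <= 0    = []
  | otherwise = let n = free `div` s
                in (s, n) : tiered (free - n * s) ss
```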

Node group functions

instanceGroup :: List -> Instance -> Result Gdx #

Computes the group of an instance.

findSplitInstances :: List -> List -> [Instance] #

Compute the list of badly allocated instances (split across node groups).
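An instance is split when its primary and secondary nodes live in different node groups. A sketch of the detection with simplified stand-in types (the record shape below is an assumption):

```haskell
-- Group index, as in Ganeti's Gdx (here plain Int).
type Gdx = Int

-- A minimal instance stand-in: name, primary node's group, and the
-- secondary node's group (Nothing for non-mirrored instances).
data Inst = Inst { iname  :: String
                 , pGroup :: Gdx
                 , sGroup :: Maybe Gdx }

-- Instances whose primary and secondary groups differ are "split";
-- non-mirrored instances can never be split.
splitInstances :: [Inst] -> [String]
splitInstances is =
  [ iname i | i <- is, maybe False (/= pGroup i) (sGroup i) ]
```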