leaspy.samplers.gibbs

Attributes

IteratorIndicesType

Classes

GibbsSamplerMixin

Mixin class for Gibbs samplers (individual and population).

AbstractPopulationGibbsSampler

Abstract class for all Gibbs samplers for population variables.

PopulationGibbsSampler

Gibbs sampler for population variables.

PopulationFastGibbsSampler

Fast Gibbs sampler for population variables.

PopulationMetropolisHastingsSampler

Metropolis-Hastings sampler for population variables.

IndividualGibbsSampler

Gibbs sampler for individual variables.

Module Contents

IteratorIndicesType
class GibbsSamplerMixin(name, shape, *, scale, random_order_dimension=True, mean_acceptation_rate_target_bounds=(0.2, 0.4), adaptive_std_factor=0.1, **base_sampler_kws)[source]

Mixin class for Gibbs samplers (individual and population).

This class contains common logic for all Gibbs samplers.

Parameters:
name : str

The name of the random variable to sample.

shape : tuple of int

The shape of the random variable to sample.

scale : float > 0 or torch.FloatTensor > 0

An approximate scale for the variable. It is used to scale the initial adaptive std-dev of the sampler, after an extra factor (the class attribute STD_SCALE_FACTOR) is applied on top of it.

Note: If a torch.Tensor is passed, its shape must be compatible with the shape of the variable.

random_order_dimension : bool (default True)

Whether to randomize the order of dimensions during the sampling loop. This applies only to population variables, since individual variables are sampled in groups. The article cited in the source docstring gives a reason why this flag should be enabled.

mean_acceptation_rate_target_bounds : tuple [lower_bound: float, upper_bound: float] with 0 < lower_bound < upper_bound < 1

Bounds on the mean acceptation rate. If the rate falls outside this range, the adaptive std-dev is updated so that the acceptance rate moves back toward the target (e.g. ~30%).

adaptive_std_factor : float in (0, 1)

The factor by which the sampler's std-dev is adapted (see the sketch after this parameter list):

  • If the acceptance rate is too low, the std-dev is multiplied by 1 - factor

  • If it is too high, the std-dev is multiplied by 1 + factor

**base_sampler_kws

Additional keyword arguments passed to the AbstractSampler init method. For example, you may pass acceptation_history_length.
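
For orientation, the sketch below shows how scale, STD_SCALE_FACTOR, mean_acceptation_rate_target_bounds and adaptive_std_factor could combine into the adaptive std-dev update described above. It is a minimal illustration with made-up values, not leaspy's implementation; adapt_std and the constants are hypothetical names.

    import torch

    # Illustrative constants mirroring the documented defaults (not leaspy internals).
    STD_SCALE_FACTOR = 0.01            # e.g. the population samplers' class attribute
    TARGET_BOUNDS = (0.2, 0.4)         # mean_acceptation_rate_target_bounds
    ADAPTIVE_STD_FACTOR = 0.1          # adaptive_std_factor

    # Initial adaptive std-dev: the user-provided scale times the extra factor.
    scale = torch.tensor([1.0, 0.5, 2.0])
    std = STD_SCALE_FACTOR * scale

    def adapt_std(std: torch.Tensor, mean_acceptance: torch.Tensor) -> torch.Tensor:
        """Sketch of the documented rule: shrink std when acceptance is too low, grow it when too high."""
        lower, upper = TARGET_BOUNDS
        std = torch.where(mean_acceptance < lower, std * (1 - ADAPTIVE_STD_FACTOR), std)
        std = torch.where(mean_acceptance > upper, std * (1 + ADAPTIVE_STD_FACTOR), std)
        return std

    std = adapt_std(std, mean_acceptance=torch.tensor([0.10, 0.30, 0.60]))
    # -> first coordinate shrinks, second is unchanged, third grows.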

Attributes:
In addition to the attributes present in AbstractSampler:

std : torch.Tensor

Adaptive std-dev of the variable.

Raises:
LeaspyInputError

If the provided scale contains any non-positive value.

STD_SCALE_FACTOR: ClassVar[float]
scale
std
abstract property shape_adapted_std: tuple[int, ...]

Shape of the adaptive variance.

Return type:

tuple[int, ...]

property shape_acceptation: tuple[int, ...]

Shape of the acceptation tensor.

Return type:

tuple[int, ...]

validate_scale(scale)[source]

Validate the user-provided scale, given as a float or torch.Tensor.

The scale must be positive. If the input is multidimensional, all components must be positive.

Parameters:
scale : float or torch.Tensor

The scale to be validated.

Returns:
torch.Tensor

The validated scale.

Raises:
LeaspyInputError

If the scale contains any non-positive value.

Return type:

Tensor
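
For illustration, here is a minimal sketch of the kind of check validate_scale performs, assuming only what is stated above (cast to a tensor, positivity check). The function and the local exception class are stand-ins, not leaspy's code.

    import torch

    class LeaspyInputError(Exception):
        """Local stand-in for leaspy's exception, used only in this sketch."""

    def validate_scale_sketch(scale):
        """Cast a float / torch.Tensor scale to a float tensor and check positivity."""
        if not isinstance(scale, torch.Tensor):
            scale = torch.tensor(scale)
        scale = scale.float()
        if (scale <= 0).any():
            raise LeaspyInputError(f"The scale must be positive, got {scale}")
        return scale

    validate_scale_sketch(0.5)                       # -> tensor(0.5000)
    validate_scale_sketch(torch.tensor([1.0, 2.0]))  # -> tensor([1., 2.])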

class AbstractPopulationGibbsSampler(name, shape, *, scale, random_order_dimension=True, mean_acceptation_rate_target_bounds=(0.2, 0.4), adaptive_std_factor=0.1, **base_sampler_kws)[source]

Bases: GibbsSamplerMixin, leaspy.samplers.base.AbstractPopulationSampler

Abstract class for all Gibbs samplers for population variables.

Parameters:
name : str

The name of the random variable to sample.

shape : tuple of int

The shape of the random variable to sample.

scale : float > 0 or torch.FloatTensor > 0

An approximate scale for the variable. It is used to scale the initial adaptive std-dev of the sampler, after an extra factor (the class attribute STD_SCALE_FACTOR) is applied on top of it.

Note: If a torch.Tensor is passed, its shape must be compatible with the shape of the variable.

random_order_dimension : bool (default True)

Whether to randomize the order of dimensions during the sampling loop. This applies only to population variables, since individual variables are sampled in groups. The article cited in the source docstring gives a reason why this flag should be enabled.

mean_acceptation_rate_target_bounds : tuple [lower_bound: float, upper_bound: float] with 0 < lower_bound < upper_bound < 1

Bounds on the mean acceptation rate. If the rate falls outside this range, the adaptive std-dev is updated so that the acceptance rate moves back toward the target (e.g. ~30%).

adaptive_std_factor : float in (0, 1)

The factor by which the sampler's std-dev is adapted:

  • If the acceptance rate is too low, the std-dev is multiplied by 1 - factor

  • If it is too high, the std-dev is multiplied by 1 + factor

**base_sampler_kws

Additional keyword arguments passed to the AbstractSampler init method. For example, you may pass acceptation_history_length.

STD_SCALE_FACTOR = 0.01
validate_scale(scale)[source]

Validate the user-provided scale, given as a float or torch.Tensor.

If necessary, the scale is cast to a torch.Tensor.

Parameters:
scale : float or torch.Tensor

The scale to be validated.

Returns:
torch.Tensor

The validated scale.

Return type:

Tensor

sample(state, *, temperature_inv)[source]

Apply a Metropolis-Hastings (MH) sampling step for each dimension of the population variable.

For each dimension (1D or 2D):
  • Compute the current attachment and regularity terms.

  • Propose a new value for the dimension.

  • Recompute the attachment and regularity based on the new value.

  • Apply a Metropolis-Hastings step to accept or reject the proposal.

A toy sketch of the accept/reject decision is given after this entry.

Parameters:
state : State

Object containing values for all model variables, including latent variables.

temperature_inv : float > 0

The inverse temperature to use.

Return type:

None
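
The accept/reject decision itself can be pictured with a self-contained toy snippet. This is only a sketch: the State object, the attachment/regularity computations, and exactly how the inverse temperature enters the criterion are leaspy internals not described on this page; here the temperature simply scales the whole loss difference.

    import torch

    def mh_accept(old_loss: torch.Tensor, new_loss: torch.Tensor, temperature_inv: float) -> torch.Tensor:
        """Toy MH criterion: always accept an improvement, otherwise accept with
        probability exp(-(new_loss - old_loss) * temperature_inv)."""
        log_alpha = -(new_loss - old_loss) * temperature_inv
        return torch.rand(()) < torch.exp(log_alpha).clamp(max=1.0)

    # old_loss / new_loss stand for (attachment + regularity) before and after a proposal.
    accepted = mh_accept(torch.tensor(12.3), torch.tensor(12.0), temperature_inv=0.8)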

class PopulationGibbsSampler(name, shape, *, scale, random_order_dimension=True, mean_acceptation_rate_target_bounds=(0.2, 0.4), adaptive_std_factor=0.1, **base_sampler_kws)[source]

Bases: AbstractPopulationGibbsSampler

Gibbs sampler for population variables.

The sampling is done iteratively for all coordinate values.

Parameters:
name : str

The name of the random variable to sample.

shape : tuple of int

The shape of the random variable to sample.

scale : float > 0 or torch.FloatTensor > 0

An approximate scale for the variable. It is used to scale the initial adaptive std-dev of the sampler, after an extra factor (the class attribute STD_SCALE_FACTOR) is applied on top of it.

Note: If a torch.Tensor is passed, its shape must be compatible with the shape of the variable.

random_order_dimension : bool (default True)

Whether to randomize the order of dimensions during the sampling loop. This applies only to population variables, since individual variables are sampled in groups. The article cited in the source docstring gives a reason why this flag should be enabled.

mean_acceptation_rate_target_bounds : tuple [lower_bound: float, upper_bound: float] with 0 < lower_bound < upper_bound < 1

Bounds on the mean acceptation rate. If the rate falls outside this range, the adaptive std-dev is updated so that the acceptance rate moves back toward the target (e.g. ~30%).

adaptive_std_factor : float in (0, 1)

The factor by which the sampler's std-dev is adapted:

  • If the acceptance rate is too low, the std-dev is multiplied by 1 - factor

  • If it is too high, the std-dev is multiplied by 1 + factor

**base_sampler_kws

Additional keyword arguments passed to the AbstractSampler init method. For example, you may pass acceptation_history_length.

property shape_adapted_std: tuple

Shape of the adaptive variance.

Return type:

tuple
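
As a usage sketch (with a hypothetical variable name and shape), a sampler could be built directly from this module. In practice samplers are usually constructed by leaspy's fitting algorithms rather than by hand, so treat this as illustrative only.

    import torch
    from leaspy.samplers.gibbs import PopulationGibbsSampler

    # Hypothetical 2D population variable of shape (4, 3) named "v0".
    sampler = PopulationGibbsSampler(
        "v0",
        (4, 3),
        scale=torch.ones((4, 3)),              # approximate scale, compatible with the shape
        random_order_dimension=True,
        mean_acceptation_rate_target_bounds=(0.2, 0.4),
        adaptive_std_factor=0.1,
    )
    print(sampler.shape_adapted_std)           # shape of the adapted std-dev for this variant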

class PopulationFastGibbsSampler(name, shape, *, scale, random_order_dimension=True, mean_acceptation_rate_target_bounds=(0.2, 0.4), adaptive_std_factor=0.1, **base_sampler_kws)[source]

Bases: AbstractPopulationGibbsSampler

Fast Gibbs sampler for population variables.

Note

Sampling is batched along all dimensions except the first one, which speeds up the sampling process for 2D parameters.

Parameters:
name : str

The name of the random variable to sample.

shape : tuple of int

The shape of the random variable to sample.

scale : float > 0 or torch.FloatTensor > 0

An approximate scale for the variable. It is used to scale the initial adaptive std-dev of the sampler, after an extra factor (the class attribute STD_SCALE_FACTOR) is applied on top of it.

Note: If a torch.Tensor is passed, its shape must be compatible with the shape of the variable.

random_order_dimension : bool (default True)

Whether to randomize the order of dimensions during the sampling loop. This applies only to population variables, since individual variables are sampled in groups. The article cited in the source docstring gives a reason why this flag should be enabled.

mean_acceptation_rate_target_bounds : tuple [lower_bound: float, upper_bound: float] with 0 < lower_bound < upper_bound < 1

Bounds on the mean acceptation rate. If the rate falls outside this range, the adaptive std-dev is updated so that the acceptance rate moves back toward the target (e.g. ~30%).

adaptive_std_factor : float in (0, 1)

The factor by which the sampler's std-dev is adapted:

  • If the acceptance rate is too low, the std-dev is multiplied by 1 - factor

  • If it is too high, the std-dev is multiplied by 1 + factor

**base_sampler_kws

Additional keyword arguments passed to the AbstractSampler init method. For example, you may pass acceptation_history_length.

property shape_adapted_std: tuple

Shape of the adaptive variance.

Return type:

tuple

class PopulationMetropolisHastingsSampler(name, shape, *, scale, random_order_dimension=True, mean_acceptation_rate_target_bounds=(0.2, 0.4), adaptive_std_factor=0.1, **base_sampler_kws)[source]

Bases: AbstractPopulationGibbsSampler

Metropolis-Hastings sampler for population variables.

Note

Sampling is performed for all values at once, which speeds up each sampling step but usually requires more iterations to converge (see the toy sketch at the end of this entry).

Parameters:
name : str

The name of the random variable to sample.

shape : tuple of int

The shape of the random variable to sample.

scale : float > 0 or torch.FloatTensor > 0

An approximate scale for the variable. It is used to scale the initial adaptive std-dev of the sampler, after an extra factor (the class attribute STD_SCALE_FACTOR) is applied on top of it.

Note: If a torch.Tensor is passed, its shape must be compatible with the shape of the variable.

random_order_dimension : bool (default True)

Whether to randomize the order of dimensions during the sampling loop. This applies only to population variables, since individual variables are sampled in groups. The article cited in the source docstring gives a reason why this flag should be enabled.

mean_acceptation_rate_target_bounds : tuple [lower_bound: float, upper_bound: float] with 0 < lower_bound < upper_bound < 1

Bounds on the mean acceptation rate. If the rate falls outside this range, the adaptive std-dev is updated so that the acceptance rate moves back toward the target (e.g. ~30%).

adaptive_std_factor : float in (0, 1)

The factor by which the sampler's std-dev is adapted:

  • If the acceptance rate is too low, the std-dev is multiplied by 1 - factor

  • If it is too high, the std-dev is multiplied by 1 + factor

**base_sampler_kws

Additional keyword arguments passed to the AbstractSampler init method. For example, you may pass acceptation_history_length.

property shape_adapted_std: tuple

Shape of the adaptive variance.

Return type:

tuple
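
To make the difference between the three population samplers concrete, the toy snippet below contrasts how a proposal could be drawn in each variant, following the descriptions above: plain Gibbs perturbs one coordinate at a time, fast Gibbs perturbs everything but the first dimension at once, and Metropolis-Hastings perturbs the full tensor in one go. The shape of the adapted std-dev is simplified here; this is not leaspy's proposal code.

    import torch

    value = torch.zeros(4, 3)        # a toy 2D population variable
    std = 0.01 * torch.ones(4, 3)    # toy adapted std-dev (its real shape differs per variant)
    i, j = 2, 1

    # Gibbs: one coordinate per proposal (looping over all coordinates).
    gibbs_proposal = value.clone()
    gibbs_proposal[i, j] += std[i, j] * torch.randn(())

    # Fast Gibbs: one slice along the first dimension per proposal.
    fast_proposal = value.clone()
    fast_proposal[i] += std[i] * torch.randn(3)

    # Metropolis-Hastings: the full tensor in a single proposal.
    mh_proposal = value + std * torch.randn(4, 3)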

class IndividualGibbsSampler(name, shape, *, n_patients, scale, random_order_dimension=True, mean_acceptation_rate_target_bounds=(0.2, 0.4), adaptive_std_factor=0.1, **base_sampler_kws)[source]

Bases: GibbsSamplerMixin, leaspy.samplers.base.AbstractIndividualSampler

Gibbs sampler for individual variables.

Individual variables are handled using a grouped Gibbs sampler. Currently, this is the only sampler implemented for individual variables.

Parameters:
name : str

The name of the random variable to sample.

shape : tuple of int

The shape of the random variable to sample.

n_patients : int > 0

The number of patients (the individual variable is sampled jointly for all of them).

scale : float > 0 or torch.FloatTensor > 0

An approximate scale for the variable. It is used to scale the initial adaptive std-dev of the sampler, after an extra factor (the class attribute STD_SCALE_FACTOR) is applied on top of it.

Note: If a torch.Tensor is passed, its shape must be compatible with the shape of the variable.

random_order_dimension : bool (default True)

Whether to randomize the order of dimensions during the sampling loop. This applies only to population variables, since individual variables are sampled in groups. The article cited in the source docstring gives a reason why this flag should be enabled.

mean_acceptation_rate_target_bounds : tuple [lower_bound: float, upper_bound: float] with 0 < lower_bound < upper_bound < 1

Bounds on the mean acceptation rate. If the rate falls outside this range, the adaptive std-dev is updated so that the acceptance rate moves back toward the target (e.g. ~30%).

adaptive_std_factor : float in (0, 1)

The factor by which the sampler's std-dev is adapted:

  • If the acceptance rate is too low, the std-dev is multiplied by 1 - factor

  • If it is too high, the std-dev is multiplied by 1 + factor

**base_sampler_kws

Additional keyword arguments passed to the AbstractSampler init method. For example, you may pass acceptation_history_length.

STD_SCALE_FACTOR = 0.5
validate_scale(scale)[source]

Validate the provided scale.

Parameters:
scale : float or torch.Tensor

The scale to be validated.

Returns:
torch.Tensor

The validated scale.

Return type:

Tensor

property shape_adapted_std: tuple

Shape of the adaptive variance.

Return type:

tuple

sample(state, *, temperature_inv)[source]

Apply a grouped Metropolis-Hastings (MH) sampling step for individual variables.

For each individual variable:
  • Compute the current attachment and regularity.

  • Propose a new value for the variable.

  • Recompute the attachment and regularity.

  • Accept or reject the proposal using the MH criterion, based on the inverse temperature.

A toy sketch of this grouped step is given after the notes below.

Parameters:
state : State

Object containing values for all model variables, including latent variables.

temperature_inv : float > 0

The inverse temperature to use.

Return type:

None

Notes

Population variables remain fixed during this step since we are updating individual variables. However:

  • In fit, population variables might have just been updated and their effects might not yet be propagated to the model. In this case, model computations should use the current MCMC state (the default behavior).

  • In personalization (mode or mean posterior), population variables are not updated. Therefore, the MCMC state should not be used.
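
Finally, a toy sketch of the grouped step described above: a proposal is drawn for all individuals at once and the MH criterion is applied per individual. The attachment/regularity computation and the State handling are leaspy internals not reproduced here; all tensors and values below are made up.

    import torch

    n_patients, dim = 5, 2
    current = torch.zeros(n_patients, dim)      # current individual variables
    std = 0.5 * torch.ones(n_patients, dim)     # toy per-individual adapted std-dev

    # Propose new values for all individuals at once.
    proposal = current + std * torch.randn(n_patients, dim)

    # Toy per-individual losses (attachment + regularity) before / after the proposal.
    old_loss = torch.rand(n_patients)
    new_loss = torch.rand(n_patients)

    temperature_inv = 0.8
    log_alpha = -(new_loss - old_loss) * temperature_inv
    accept = torch.rand(n_patients) < torch.exp(log_alpha).clamp(max=1.0)

    # Keep the proposal only for accepted individuals.
    updated = torch.where(accept.unsqueeze(-1), proposal, current)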