leaspy.samplers.gibbs
=====================

.. py:module:: leaspy.samplers.gibbs


Attributes
----------

.. autoapisummary::

   leaspy.samplers.gibbs.IteratorIndicesType


Classes
-------

.. autoapisummary::

   leaspy.samplers.gibbs.GibbsSamplerMixin
   leaspy.samplers.gibbs.AbstractPopulationGibbsSampler
   leaspy.samplers.gibbs.PopulationGibbsSampler
   leaspy.samplers.gibbs.PopulationFastGibbsSampler
   leaspy.samplers.gibbs.PopulationMetropolisHastingsSampler
   leaspy.samplers.gibbs.IndividualGibbsSampler


Module Contents
---------------

.. py:data:: IteratorIndicesType

.. py:class:: GibbsSamplerMixin(name, shape, *, scale, random_order_dimension = True, mean_acceptation_rate_target_bounds = (0.2, 0.4), adaptive_std_factor = 0.1, **base_sampler_kws)

   Mixin class for Gibbs samplers (individual and population).

   This class contains the logic common to all Gibbs samplers.

   :Parameters:

       **name** : :obj:`str`
           The name of the random variable to sample.

       **shape** : :obj:`tuple` of :obj:`int`
           The shape of the random variable to sample.

       **scale** : :obj:`float` > 0 or :class:`torch.FloatTensor` > 0
           An approximate scale for the variable, used to scale the initial
           adaptive std-dev of the sampler. An extra factor is applied on top
           of this scale:

           * 1% for population parameters (:attr:`.AbstractPopulationGibbsSampler.STD_SCALE_FACTOR`)
           * 50% for individual parameters (:attr:`.IndividualGibbsSampler.STD_SCALE_FACTOR`)

           Note: if a :class:`torch.Tensor` is passed, its shape must be
           compatible with the shape of the variable.

       **random_order_dimension** : :obj:`bool` (default True)
           Whether to randomize the order of dimensions during the sampling
           loop. This only applies to population variables, since grouped
           sampling is used for individual variables. `Article `_ gives a
           reason why this flag should be activated.

       **mean_acceptation_rate_target_bounds** : :obj:`tuple` [lower_bound: :obj:`float`, upper_bound: :obj:`float`] with 0 < lower_bound < upper_bound < 1
           Bounds on the mean acceptance rate. If the rate falls outside this
           range, the adaptive std-dev is updated so as to maintain a target
           acceptance rate (e.g. ~30%).

       **adaptive_std_factor** : :obj:`float` in (0, 1)
           The factor by which the sampler's std-dev is adapted:

           * if the acceptance rate is too low, the std-dev is multiplied by ``1 - factor``
           * if it is too high, the std-dev is multiplied by ``1 + factor``

       **\*\*base_sampler_kws**
           Additional keyword arguments passed to the
           :class:`~leaspy.samplers.AbstractSampler` init method.
           For example, you may pass `acceptation_history_length`.

   :Attributes:

       **In addition to the attributes present in AbstractSampler:**
           ..

       **std** : :class:`torch.Tensor`
           Adaptive std-dev of the variable.

   :Raises:

       :exc:`.LeaspyInputError`
           If the provided scale is invalid.

   .. py:attribute:: STD_SCALE_FACTOR
      :type: ClassVar[float]

   .. py:attribute:: scale

   .. py:attribute:: std

   .. py:property:: shape_adapted_std
      :type: tuple[int, Ellipsis]
      :abstractmethod:

      Shape of the adaptive variance.

   .. py:property:: shape_acceptation
      :type: tuple[int, Ellipsis]

      Shape of the acceptation tensor.

   .. py:method:: validate_scale(scale)

      Validate the user-provided scale, given as a :obj:`float` or
      :class:`torch.Tensor`.

      The scale must be positive. If the input is multidimensional, all of
      its components must be positive.

      :Parameters:

          **scale** : :obj:`float` or :class:`torch.Tensor`
              The scale to be validated.

      :Returns:

          :class:`torch.Tensor`
              Valid scale.

      :Raises:

          :exc:`.LeaspyInputError`
              If the scale contains any non-positive value.

.. py:class:: AbstractPopulationGibbsSampler(name, shape, *, scale, random_order_dimension = True, mean_acceptation_rate_target_bounds = (0.2, 0.4), adaptive_std_factor = 0.1, **base_sampler_kws)

   Bases: :py:obj:`GibbsSamplerMixin`, :py:obj:`leaspy.samplers.base.AbstractPopulationSampler`

   Abstract class for all Gibbs samplers for population variables.

   :Parameters:

       **name** : :obj:`str`
           The name of the random variable to sample.

       **shape** : :obj:`tuple` of :obj:`int`
           The shape of the random variable to sample.

       **scale** : :obj:`float` > 0 or :class:`torch.FloatTensor` > 0
           An approximate scale for the variable, used to scale the initial
           adaptive std-dev of the sampler. An extra factor is applied on top
           of this scale:

           * 1% for population parameters (:attr:`.AbstractPopulationGibbsSampler.STD_SCALE_FACTOR`)
           * 50% for individual parameters (:attr:`.IndividualGibbsSampler.STD_SCALE_FACTOR`)

           Note: if a :class:`torch.Tensor` is passed, its shape must be
           compatible with the shape of the variable.

       **random_order_dimension** : :obj:`bool` (default True)
           Whether to randomize the order of dimensions during the sampling
           loop. This only applies to population variables, since grouped
           sampling is used for individual variables. `Article `_ gives a
           reason why this flag should be activated.

       **mean_acceptation_rate_target_bounds** : :obj:`tuple` [lower_bound: :obj:`float`, upper_bound: :obj:`float`] with 0 < lower_bound < upper_bound < 1
           Bounds on the mean acceptance rate. If the rate falls outside this
           range, the adaptive std-dev is updated so as to maintain a target
           acceptance rate (e.g. ~30%).

       **adaptive_std_factor** : :obj:`float` in (0, 1)
           The factor by which the sampler's std-dev is adapted:

           * if the acceptance rate is too low, the std-dev is multiplied by ``1 - factor``
           * if it is too high, the std-dev is multiplied by ``1 + factor``

       **\*\*base_sampler_kws**
           Additional keyword arguments passed to the
           :class:`~leaspy.samplers.AbstractSampler` init method.
           For example, you may pass `acceptation_history_length`.

   .. py:attribute:: STD_SCALE_FACTOR
      :value: 0.01

   .. py:method:: validate_scale(scale)

      Validate the user-provided scale, given as a :obj:`float` or
      :class:`torch.Tensor`.

      If necessary, the scale is cast to a :class:`torch.Tensor`.

      :Parameters:

          **scale** : :obj:`float` or :class:`torch.Tensor`
              The scale to be validated.

      :Returns:

          :class:`torch.Tensor`
              Valid scale.

   .. py:method:: sample(state, *, temperature_inv)

      Apply a Metropolis-Hastings (MH) sampling step for each dimension of
      the population variable.

      For each dimension (1D or 2D):

      - Compute the current attachment and regularity terms.
      - Propose a new value for the dimension.
      - Recompute the attachment and regularity based on the new value.
      - Apply a Metropolis-Hastings step to accept or reject the proposal.

      :Parameters:

          **state** : :class:`.State`
              Object containing values for all model variables, including
              latent variables.

          **temperature_inv** : :obj:`float` > 0
              The inverse temperature to use.

.. py:class:: PopulationGibbsSampler(name, shape, *, scale, random_order_dimension = True, mean_acceptation_rate_target_bounds = (0.2, 0.4), adaptive_std_factor = 0.1, **base_sampler_kws)

   Bases: :py:obj:`AbstractPopulationGibbsSampler`

   Gibbs sampler for population variables.

   The sampling is done iteratively for all coordinate values.

   :Parameters:

       **name** : :obj:`str`
           The name of the random variable to sample.

       **shape** : :obj:`tuple` of :obj:`int`
           The shape of the random variable to sample.

       **scale** : :obj:`float` > 0 or :class:`torch.FloatTensor` > 0
           An approximate scale for the variable, used to scale the initial
           adaptive std-dev of the sampler. An extra factor is applied on top
           of this scale:

           * 1% for population parameters (:attr:`.AbstractPopulationGibbsSampler.STD_SCALE_FACTOR`)
           * 50% for individual parameters (:attr:`.IndividualGibbsSampler.STD_SCALE_FACTOR`)

           Note: if a :class:`torch.Tensor` is passed, its shape must be
           compatible with the shape of the variable.

       **random_order_dimension** : :obj:`bool` (default True)
           Whether to randomize the order of dimensions during the sampling
           loop. This only applies to population variables, since grouped
           sampling is used for individual variables. `Article `_ gives a
           reason why this flag should be activated.

       **mean_acceptation_rate_target_bounds** : :obj:`tuple` [lower_bound: :obj:`float`, upper_bound: :obj:`float`] with 0 < lower_bound < upper_bound < 1
           Bounds on the mean acceptance rate. If the rate falls outside this
           range, the adaptive std-dev is updated so as to maintain a target
           acceptance rate (e.g. ~30%).

       **adaptive_std_factor** : :obj:`float` in (0, 1)
           The factor by which the sampler's std-dev is adapted:

           * if the acceptance rate is too low, the std-dev is multiplied by ``1 - factor``
           * if it is too high, the std-dev is multiplied by ``1 + factor``

       **\*\*base_sampler_kws**
           Additional keyword arguments passed to the
           :class:`~leaspy.samplers.AbstractSampler` init method.
           For example, you may pass `acceptation_history_length`.

   .. py:property:: shape_adapted_std
      :type: tuple

      Shape of the adaptive variance.

.. py:class:: PopulationFastGibbsSampler(name, shape, *, scale, random_order_dimension = True, mean_acceptation_rate_target_bounds = (0.2, 0.4), adaptive_std_factor = 0.1, **base_sampler_kws)

   Bases: :py:obj:`AbstractPopulationGibbsSampler`

   Fast Gibbs sampler for population variables.

   .. note::
      Sampling is batched along all dimensions except the first one, which
      speeds up the sampling process for 2D parameters.

   :Parameters:

       **name** : :obj:`str`
           The name of the random variable to sample.

       **shape** : :obj:`tuple` of :obj:`int`
           The shape of the random variable to sample.

       **scale** : :obj:`float` > 0 or :class:`torch.FloatTensor` > 0
           An approximate scale for the variable, used to scale the initial
           adaptive std-dev of the sampler. An extra factor is applied on top
           of this scale:

           * 1% for population parameters (:attr:`.AbstractPopulationGibbsSampler.STD_SCALE_FACTOR`)
           * 50% for individual parameters (:attr:`.IndividualGibbsSampler.STD_SCALE_FACTOR`)

           Note: if a :class:`torch.Tensor` is passed, its shape must be
           compatible with the shape of the variable.

       **random_order_dimension** : :obj:`bool` (default True)
           Whether to randomize the order of dimensions during the sampling
           loop. This only applies to population variables, since grouped
           sampling is used for individual variables. `Article `_ gives a
           reason why this flag should be activated.

       **mean_acceptation_rate_target_bounds** : :obj:`tuple` [lower_bound: :obj:`float`, upper_bound: :obj:`float`] with 0 < lower_bound < upper_bound < 1
           Bounds on the mean acceptance rate. If the rate falls outside this
           range, the adaptive std-dev is updated so as to maintain a target
           acceptance rate (e.g. ~30%).

       **adaptive_std_factor** : :obj:`float` in (0, 1)
           The factor by which the sampler's std-dev is adapted:

           * if the acceptance rate is too low, the std-dev is multiplied by ``1 - factor``
           * if it is too high, the std-dev is multiplied by ``1 + factor``

       **\*\*base_sampler_kws**
           Additional keyword arguments passed to the
           :class:`~leaspy.samplers.AbstractSampler` init method.
           For example, you may pass `acceptation_history_length`.

   .. py:property:: shape_adapted_std
      :type: tuple

      Shape of the adaptive variance.

.. py:class:: PopulationMetropolisHastingsSampler(name, shape, *, scale, random_order_dimension = True, mean_acceptation_rate_target_bounds = (0.2, 0.4), adaptive_std_factor = 0.1, **base_sampler_kws)

   Bases: :py:obj:`AbstractPopulationGibbsSampler`

   Metropolis-Hastings sampler for population variables.

   .. note::
      Sampling is performed for all values at once, which speeds up the
      sampling process but usually requires more iterations.

   :Parameters:

       **name** : :obj:`str`
           The name of the random variable to sample.

       **shape** : :obj:`tuple` of :obj:`int`
           The shape of the random variable to sample.

       **scale** : :obj:`float` > 0 or :class:`torch.FloatTensor` > 0
           An approximate scale for the variable, used to scale the initial
           adaptive std-dev of the sampler. An extra factor is applied on top
           of this scale:

           * 1% for population parameters (:attr:`.AbstractPopulationGibbsSampler.STD_SCALE_FACTOR`)
           * 50% for individual parameters (:attr:`.IndividualGibbsSampler.STD_SCALE_FACTOR`)

           Note: if a :class:`torch.Tensor` is passed, its shape must be
           compatible with the shape of the variable.

       **random_order_dimension** : :obj:`bool` (default True)
           Whether to randomize the order of dimensions during the sampling
           loop. This only applies to population variables, since grouped
           sampling is used for individual variables. `Article `_ gives a
           reason why this flag should be activated.

       **mean_acceptation_rate_target_bounds** : :obj:`tuple` [lower_bound: :obj:`float`, upper_bound: :obj:`float`] with 0 < lower_bound < upper_bound < 1
           Bounds on the mean acceptance rate. If the rate falls outside this
           range, the adaptive std-dev is updated so as to maintain a target
           acceptance rate (e.g. ~30%).

       **adaptive_std_factor** : :obj:`float` in (0, 1)
           The factor by which the sampler's std-dev is adapted:

           * if the acceptance rate is too low, the std-dev is multiplied by ``1 - factor``
           * if it is too high, the std-dev is multiplied by ``1 + factor``

       **\*\*base_sampler_kws**
           Additional keyword arguments passed to the
           :class:`~leaspy.samplers.AbstractSampler` init method.
           For example, you may pass `acceptation_history_length`.

   .. py:property:: shape_adapted_std
      :type: tuple

      Shape of the adaptive variance.

.. py:class:: IndividualGibbsSampler(name, shape, *, n_patients, scale, random_order_dimension = True, mean_acceptation_rate_target_bounds = (0.2, 0.4), adaptive_std_factor = 0.1, **base_sampler_kws)

   Bases: :py:obj:`GibbsSamplerMixin`, :py:obj:`leaspy.samplers.base.AbstractIndividualSampler`

   Gibbs sampler for individual variables.

   Individual variables are handled using a grouped Gibbs sampler.
   Currently, this is the only sampler implemented for individual variables.

   :Parameters:

       **name** : :obj:`str`
           The name of the random variable to sample.

       **shape** : :obj:`tuple` of :obj:`int`
           The shape of the random variable to sample.

       **n_patients** : :obj:`int`
           The number of patients for which the individual variable is sampled.

       **scale** : :obj:`float` > 0 or :class:`torch.FloatTensor` > 0
           An approximate scale for the variable, used to scale the initial
           adaptive std-dev of the sampler. An extra factor is applied on top
           of this scale:

           * 1% for population parameters (:attr:`.AbstractPopulationGibbsSampler.STD_SCALE_FACTOR`)
           * 50% for individual parameters (:attr:`.IndividualGibbsSampler.STD_SCALE_FACTOR`)

           Note: if a :class:`torch.Tensor` is passed, its shape must be
           compatible with the shape of the variable.

       **random_order_dimension** : :obj:`bool` (default True)
           Whether to randomize the order of dimensions during the sampling
           loop. This only applies to population variables, since grouped
           sampling is used for individual variables. `Article `_ gives a
           reason why this flag should be activated.

       **mean_acceptation_rate_target_bounds** : :obj:`tuple` [lower_bound: :obj:`float`, upper_bound: :obj:`float`] with 0 < lower_bound < upper_bound < 1
           Bounds on the mean acceptance rate. If the rate falls outside this
           range, the adaptive std-dev is updated so as to maintain a target
           acceptance rate (e.g. ~30%).

       **adaptive_std_factor** : :obj:`float` in (0, 1)
           The factor by which the sampler's std-dev is adapted:

           * if the acceptance rate is too low, the std-dev is multiplied by ``1 - factor``
           * if it is too high, the std-dev is multiplied by ``1 + factor``

       **\*\*base_sampler_kws**
           Additional keyword arguments passed to the
           :class:`~leaspy.samplers.AbstractSampler` init method.
           For example, you may pass `acceptation_history_length`.

   .. py:attribute:: STD_SCALE_FACTOR
      :value: 0.5

   .. py:method:: validate_scale(scale)

      Validate the provided scale.

      :Parameters:

          **scale** : :obj:`float` or :class:`torch.Tensor`
              The scale to be validated.

      :Returns:

          :class:`torch.Tensor`
              Valid scale.

   .. py:property:: shape_adapted_std
      :type: tuple

      Shape of the adaptive variance.

   .. py:method:: sample(state, *, temperature_inv)

      Apply a grouped Metropolis-Hastings (MH) sampling step for individual
      variables.

      For each individual variable:

      - Compute the current attachment and regularity.
      - Propose a new value for the variable.
      - Recompute the attachment and regularity.
      - Accept or reject the proposal using the MH criterion, based on the
        inverse temperature.

      :Parameters:

          **state** : :class:`.State`
              Object containing values for all model variables, including
              latent variables.

          **temperature_inv** : :obj:`float` > 0
              The inverse temperature to use.

      .. rubric:: Notes

      Population variables remain fixed during this step, since we are
      updating individual variables. However:

      - In fit, population variables might have just been updated, and their
        effects not yet propagated to the model. In this case, model
        computations should use the current MCMC state (default behavior).
      - In personalization (mode or mean posterior), population variables are
        not updated. Therefore, the MCMC state should not be used.
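The adaptive Metropolis-Hastings mechanism documented above (accept/reject on the attachment + regularity terms, then adapt the proposal std-dev from the mean acceptance rate) can be sketched as follows. This is a minimal, pure-Python illustration of the documented behavior, not the actual leaspy implementation: the function names ``mh_accept`` and ``adapt_std`` and the use of scalar energies are assumptions for the example only.

```python
import math
import random

def mh_accept(current_energy, proposed_energy, temperature_inv=1.0):
    """Metropolis-Hastings acceptance test on energies.

    Energies play the role of the negative log-posterior
    (attachment + regularity); lower is better.
    """
    delta = proposed_energy - current_energy
    if delta <= 0:  # proposal at least as good: always accept
        return True
    # Otherwise accept with probability exp(-temperature_inv * delta).
    return random.random() < math.exp(-temperature_inv * delta)

def adapt_std(std, mean_acceptance_rate, bounds=(0.2, 0.4), factor=0.1):
    """Adapt the proposal std-dev to keep the mean acceptance rate in `bounds`.

    Mirrors the documented rule: multiply by ``1 - factor`` when the rate is
    too low, by ``1 + factor`` when it is too high.
    """
    lower, upper = bounds
    if mean_acceptance_rate < lower:   # rejecting too often -> smaller steps
        return std * (1 - factor)
    if mean_acceptance_rate > upper:   # accepting too often -> larger steps
        return std * (1 + factor)
    return std                         # within target bounds: unchanged
```

A tempered chain (``temperature_inv < 1``) flattens the acceptance criterion, which is why ``sample`` takes the inverse temperature as an explicit argument.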