leaspy.io.data ============== .. py:module:: leaspy.io.data Submodules ---------- .. toctree:: :maxdepth: 1 /reference/api/leaspy/io/data/abstract_dataframe_data_reader/index /reference/api/leaspy/io/data/covariate_dataframe_data_reader/index /reference/api/leaspy/io/data/data/index /reference/api/leaspy/io/data/dataset/index /reference/api/leaspy/io/data/event_dataframe_data_reader/index /reference/api/leaspy/io/data/factory/index /reference/api/leaspy/io/data/individual_data/index /reference/api/leaspy/io/data/joint_dataframe_data_reader/index /reference/api/leaspy/io/data/visit_dataframe_data_reader/index Attributes ---------- .. autoapisummary:: leaspy.io.data.DataframeDataReaderFactoryInput Classes ------- .. autoapisummary:: leaspy.io.data.AbstractDataframeDataReader leaspy.io.data.Data leaspy.io.data.Dataset leaspy.io.data.EventDataframeDataReader leaspy.io.data.DataframeDataReaderNames leaspy.io.data.IndividualData leaspy.io.data.JointDataframeDataReader leaspy.io.data.VisitDataframeDataReader Functions --------- .. autoapisummary:: leaspy.io.data.dataframe_data_reader_factory Package Contents ---------------- .. py:class:: AbstractDataframeDataReader Methods to convert :class:`pandas.DataFrame` to `Leaspy`-compliant data containers. :Raises: :exc:`.LeaspyDataInputError` .. .. !! processed by numpydoc !! .. py:attribute:: time_rounding_digits :value: 6 .. py:attribute:: individuals :type: dict[leaspy.utils.typing.IDType, leaspy.io.data.individual_data.IndividualData] .. py:attribute:: iter_to_idx :type: dict[int, leaspy.utils.typing.IDType] .. py:attribute:: n_individuals :type: int :value: 0 .. py:method:: read(df, *, drop_full_nan = True, sort_index = False, warn_empty_column = True) The method that effectively reads the input dataframe (automatically called in __init__). :Parameters: **df** : :class:`pandas.DataFrame` The dataframe to read. **drop_full_nan** : bool Should we drop rows that are full of NaNs (except the index)? **sort_index** : bool Should we lexsort the index? (Keep the default False so as not to break the many downstream tests that check order.) **warn_empty_column** : bool Should we warn when there are empty columns? .. !! processed by numpydoc !! .. py:class:: Data Bases: :py:obj:`collections.abc.Iterable` Main data container for a collection of individuals. It can be iterated over and sliced, both of these operations being applied to the underlying `individuals` attribute. :Attributes: **individuals** : :class:`~leaspy.utils.typing.Dict` [:class:`~leaspy.utils.typing.IDType`, :class:`~leaspy.individual_data.IndividualData`] Included individuals and their associated data **iter_to_idx** : :class:`~leaspy.utils.typing.Dict` [:obj:`int`, :class:`~leaspy.utils.typing.IDType`] Maps an integer index to the associated individual ID **headers** : :class:`~leaspy.utils.typing.List` [:class:`~leaspy.utils.typing.FeatureType`] Feature names **dimension** : :obj:`int` Number of features **n_individuals** : :obj:`int` Number of individuals **n_visits** : :obj:`int` Total number of visits **cofactors** : :class:`~leaspy.utils.typing.List` [:class:`~leaspy.utils.typing.FeatureType`] Feature names corresponding to cofactors **event_time_name** : :obj:`str` Name of the header that stores the time at event in the original dataframe **event_bool_name** : :obj:`str` Name of the header that stores the bool at event (censored or observed) in the original dataframe .. !! processed by numpydoc !!
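A minimal sketch of how this container is typically built and inspected, assuming only the calls documented on this page (:meth:`Data.from_dataframe` and the ``n_individuals``, ``n_visits`` and ``headers`` attributes); the IDs, ages and feature values are illustrative.

.. code-block:: python

    import pandas as pd

    from leaspy.io.data import Data

    # Long-format dataframe: one row per visit, with ID, TIME and one column per feature.
    df = pd.DataFrame(
        {
            "ID": ["patient_1", "patient_1", "patient_2"],
            "TIME": [70.1, 71.3, 65.0],
            "feature_1": [0.20, 0.35, 0.10],
            "feature_2": [0.40, 0.50, 0.30],
        }
    )

    data = Data.from_dataframe(df)  # data_type defaults to 'visit'

    print(data.n_individuals)  # 2
    print(data.n_visits)       # 3
    print(data.headers)        # ['feature_1', 'feature_2']

    # Iteration is delegated to the underlying `individuals` attribute
    # (assumed here to yield one IndividualData per subject).
    for individual in data:
        print(individual.idx, individual.timepoints)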
.. py:attribute:: individuals :type: dict[leaspy.utils.typing.IDType, leaspy.io.data.individual_data.IndividualData] .. py:attribute:: iter_to_idx :type: dict[int, leaspy.utils.typing.IDType] .. py:attribute:: headers :type: Optional[list[leaspy.utils.typing.FeatureType]] :value: None .. py:attribute:: event_time_name :type: Optional[str] :value: None .. py:attribute:: event_bool_name :type: Optional[str] :value: None .. py:attribute:: covariate_names :type: Optional[list[str]] :value: None .. py:property:: dimension :type: Optional[int] Number of features :Returns: :obj:`int` or None: Number of features in the dataset. If no features are present, returns None. .. !! processed by numpydoc !! .. py:property:: n_individuals :type: int Number of individuals :Returns: :obj:`int`: Number of individuals in the dataset. .. !! processed by numpydoc !! .. py:property:: n_visits :type: int Total number of visits :Returns: :obj:`int`: Total number of visits in the dataset. .. !! processed by numpydoc !! .. py:property:: cofactors :type: list[leaspy.utils.typing.FeatureType] Feature names corresponding to cofactors :Returns: :class:`~leaspy.utils.typing.List` [:class:`~leaspy.utils.typing.FeatureType`]: List of feature names corresponding to cofactors. .. !! processed by numpydoc !! .. py:method:: load_cofactors(df, *, cofactors = None) Load cofactors from a `pandas.DataFrame` to the `Data` object. :Parameters: **df** : :obj:`pandas.DataFrame` The dataframe where the cofactors are stored. Its index should be ID, the subjects' identifier, and it should uniquely index the dataframe (i.e. one row per individual). **cofactors** : :class:`~leaspy.utils.typing.List` [:class:`~leaspy.utils.typing.FeatureType`], optional Names of the column(s) of the dataframe that should be loaded as cofactors. If None, all the columns from the input dataframe will be loaded as cofactors. Default: None .. !! processed by numpydoc !! .. py:method:: from_csv_file(path, data_type = 'visit', *, pd_read_csv_kws = {}, facto_kws = {}, **df_reader_kws) :staticmethod: Create a `Data` object from a CSV file. :Parameters: **path** : :obj:`str` Path to the CSV file to load (with extension) **data_type** : :obj:`str` Type of data to read. Can be 'visit' or 'event'. **pd_read_csv_kws** : :obj:`dict` Keyword arguments that are sent to :func:`pandas.read_csv` **facto_kws** : :obj:`dict` Keyword arguments that are sent to :func:`dataframe_data_reader_factory` **\*\*df_reader_kws** Keyword arguments that are sent to the :class:`~AbstractDataframeDataReader` built by :func:`dataframe_data_reader_factory` :Returns: :class:`~leaspy.utils.typing.Data`: A Data object containing the data from the CSV file. .. !! processed by numpydoc !! .. py:method:: to_dataframe(*, cofactors = None, reset_index = True) Convert the Data object to a :obj:`pandas.DataFrame`. :Parameters: **cofactors** : :class:`~leaspy.utils.typing.List` [:class:`~leaspy.utils.typing.FeatureType`] or :obj:`str`, optional Cofactors to include in the DataFrame. If None (default), no cofactors are included. If "all", all the available cofactors are included. Default: None **reset_index** : :obj:`bool`, optional Whether to reset index levels in output. Default: True :Returns: :obj:`pandas.DataFrame`: A DataFrame containing the individuals' ID, timepoints and associated observations (and, optionally, cofactors). :Raises: :exc:`.LeaspyDataInputError` If the Data object does not contain any cofactors. :exc:`.LeaspyTypeError` If the cofactors argument is not of a valid type. .. !! processed by numpydoc !!
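A sketch of the CSV round trip described above: the file name ``visits.csv``, the cofactor column ``SITE`` and its values are hypothetical, and only :meth:`Data.from_csv_file`, :meth:`Data.load_cofactors` and :meth:`Data.to_dataframe` as documented here are assumed.

.. code-block:: python

    import pandas as pd

    from leaspy.io.data import Data

    # Load longitudinal visits from a CSV file with ID, TIME and feature columns.
    data = Data.from_csv_file("visits.csv", data_type="visit")

    # Cofactors come from a dataframe uniquely indexed by ID (one row per individual).
    cofactors_df = pd.DataFrame(
        {"SITE": ["A", "B"]},
        index=pd.Index(["patient_1", "patient_2"], name="ID"),
    )
    data.load_cofactors(cofactors_df, cofactors=["SITE"])

    # Back to a flat dataframe, including all available cofactors.
    df = data.to_dataframe(cofactors="all")
    print(df.head())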
.. py:method:: from_dataframe(df, data_type = 'visit', factory_kws = {}, **kws) :staticmethod: Create a `Data` object from a :class:`~pandas.DataFrame`. :Parameters: **df** : :obj:`pandas.DataFrame` Dataframe containing ID, TIME and features. **data_type** : :obj:`str` Type of data to read. Can be 'visit', 'event' or 'joint'. **factory_kws** : :class:`~leaspy.utils.typing.Dict` Keyword arguments that are sent to :func:`.dataframe_data_reader_factory` **\*\*kws** Keyword arguments that are sent to the :class:`~leaspy.io.data.abstract_dataframe_data_reader.AbstractDataframeDataReader` built by :func:`.dataframe_data_reader_factory` :Returns: :class:`~leaspy.utils.typing.Data` .. .. !! processed by numpydoc !! .. py:method:: from_individual_values(indices, timepoints = None, values = None, headers = None, event_time_name = None, event_bool_name = None, event_time = None, event_bool = None, covariate_names = None, covariates = None) :staticmethod: Construct `Data` from a collection of individual data points. :Parameters: **indices** : :class:`~leaspy.utils.typing.List` [:class:`~leaspy.utils.typing.IDType`] List of the individuals' unique IDs **timepoints** : :class:`~leaspy.utils.typing.List` [:class:`~leaspy.utils.typing.List` [:obj:`float`]] For each individual ``i``, list of timepoints associated with the observations. The number of such timepoints is noted ``n_timepoints_i`` **values** : :class:`~leaspy.utils.typing.List` [:obj:`array-like` [:obj:`float`, :obj:`2D`]] For each individual ``i``, two-dimensional array-like object containing observed data points. Its expected shape is ``(n_timepoints_i, n_features)`` **headers** : :class:`~leaspy.utils.typing.List` [:class:`~leaspy.utils.typing.FeatureType`] Feature names. The number of features is noted ``n_features`` :Returns: :class:`~leaspy.utils.typing.Data`: A Data object containing the individuals and their data. .. !! processed by numpydoc !! .. py:method:: from_individuals(individuals, headers = None, event_time_name = None, event_bool_name = None, covariate_names = None) :staticmethod: Construct `Data` from a list of individuals. :Parameters: **individuals** : :class:`~leaspy.utils.typing.List` [:class:`~leaspy.individual_data.IndividualData`] List of individuals **headers** : :class:`~leaspy.utils.typing.List` [:class:`~leaspy.utils.typing.FeatureType`] List of feature names :Returns: :class:`~leaspy.utils.typing.Data`: A Data object containing the individuals and their data. .. !! processed by numpydoc !! .. py:method:: extract_longitudinal_only() Extract longitudinal data from the Data object. :Returns: :class:`~leaspy.utils.typing.Data`: A Data object containing only longitudinal data. :Raises: :exc:`.LeaspyDataInputError` If the Data object does not contain any longitudinal data. .. !! processed by numpydoc !!
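The sketch below builds the same kind of container directly from per-individual arrays with :meth:`Data.from_individual_values`; the IDs, timepoints and values are made up for illustration.

.. code-block:: python

    from leaspy.io.data import Data

    # One entry per individual: ID, list of visit ages, and a
    # (n_timepoints_i, n_features) array-like of observed values.
    data = Data.from_individual_values(
        indices=["patient_1", "patient_2"],
        timepoints=[[70.1, 71.3], [65.0]],
        values=[
            [[0.20, 0.40], [0.35, 0.50]],  # patient_1: 2 visits x 2 features
            [[0.10, 0.30]],                # patient_2: 1 visit  x 2 features
        ],
        headers=["feature_1", "feature_2"],
    )

    print(data.n_individuals)  # 2
    print(data.dimension)      # 2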
.. py:class:: Dataset(data, *, no_warning = False) Data container based on :class:`torch.Tensor`, used to run algorithms. :Parameters: **data** : :class:`~leaspy.io.data.Data` The `Data` object from which the `Dataset` is created **no_warning** : :obj:`bool`, default False Whether to deactivate warnings that are emitted by methods of this dataset instance. They may be deactivated because a dataset is rebuilt per individual in scipy minimize, and all relevant warnings have already been emitted for the overall dataset. :Attributes: **headers** : :obj:`list` [:obj:`str`] Feature names **dimension** : :obj:`int` Number of features **n_individuals** : :obj:`int` Number of individuals **indices** : :obj:`list` [:class:`~leaspy.utils.typing.IDType`] Order of patients **event_time** : :obj:`torch.FloatTensor` Time of the event; if the event is censored, the time corresponds to the last patient observation **event_bool** : :obj:`torch.BoolTensor` Boolean to indicate if an event is censored or not: 1 observed, 0 censored **n_visits_per_individual** : :obj:`list` [:obj:`int`] Number of visits per individual **n_visits_max** : :obj:`int` Maximum number of visits for one individual **n_visits** : :obj:`int` Total number of visits **n_observations_per_ind_per_ft** : :obj:`torch.LongTensor`, shape (n_individuals, dimension) Number of observations (not taking into account missing values) per individual per feature **n_observations_per_ft** : :obj:`torch.LongTensor`, shape (dimension,) Total number of observations per feature **n_observations** : :obj:`int` Total number of observations **timepoints** : :obj:`torch.FloatTensor`, shape (n_individuals, n_visits_max) Ages of patients at their different visits **values** : :obj:`torch.FloatTensor`, shape (n_individuals, n_visits_max, dimension) Values of patients for each visit for each feature **mask** : :obj:`torch.FloatTensor`, shape (n_individuals, n_visits_max, dimension) Binary mask associated with values. If 1, the value is meaningful; if 0, the value is meaningless (it either was NaN or does not correspond to a real visit and is only present for padding) **L2_norm_per_ft** : :obj:`torch.FloatTensor`, shape (dimension,) Sum of all non-NaN squared values, feature by feature **L2_norm** : scalar :obj:`torch.FloatTensor` Sum of all non-NaN squared values **no_warning** : :obj:`bool`, default False Whether to deactivate warnings that are emitted by methods of this dataset instance. They may be deactivated because a dataset is rebuilt per individual in scipy minimize, and all relevant warnings have already been emitted for the overall dataset. **_one_hot_encoding** : :obj:`dict` [:obj:`bool`, :obj:`torch.LongTensor`] Values of patients for each visit for each feature, but tensorized into a one-hot encoding (pdf or sf). Shapes of tensors are (n_individuals, n_visits_max, dimension, max_ordinal_level [-1 when `sf=True`]) :Raises: :exc:`.LeaspyInputError` If data, model or algorithm are not compatible with each other. .. !! processed by numpydoc !! .. py:attribute:: n_individuals .. py:attribute:: indices .. py:attribute:: headers :type: list[leaspy.utils.typing.FeatureType] .. py:attribute:: dimension :type: int .. py:attribute:: n_visits :type: int .. py:attribute:: timepoints :type: Optional[torch.FloatTensor] :value: None .. py:attribute:: values :type: Optional[torch.FloatTensor] :value: None .. py:attribute:: mask :type: Optional[torch.FloatTensor] :value: None .. py:attribute:: n_observations :type: Optional[int] :value: None .. py:attribute:: n_observations_per_ft :type: Optional[torch.LongTensor] :value: None .. py:attribute:: n_observations_per_ind_per_ft :type: Optional[torch.LongTensor] :value: None .. py:attribute:: n_visits_per_individual :type: Optional[list[int]] :value: None .. py:attribute:: n_visits_max :type: Optional[int] :value: None .. py:attribute:: event_time_name :type: Optional[str] .. py:attribute:: event_bool_name :type: Optional[str] .. py:attribute:: event_time :type: Optional[torch.FloatTensor] :value: None ..
py:attribute:: event_bool :type: Optional[torch.IntTensor] :value: None .. py:attribute:: covariate_names :type: Optional[list[str]] .. py:attribute:: covariates :type: Optional[torch.IntTensor] :value: None .. py:attribute:: L2_norm_per_ft :type: Optional[torch.FloatTensor] :value: None .. py:attribute:: L2_norm :type: Optional[torch.FloatTensor] :value: None .. py:attribute:: no_warning :value: False .. py:method:: get_times_patient(i) Get ages for patient number ``i``. :Parameters: **i** : :obj:`int` The index of the patient (not its identifier) :Returns: :obj:`torch.Tensor`, shape (n_obs_of_patient,) Contains floats .. !! processed by numpydoc !! .. py:method:: get_event_patient(idx_patient) Get ages at event for patient number ``idx_patient``. :Parameters: **idx_patient** : :obj:`int` The index of the patient (not its identifier) :Returns: :obj:`tuple` [:obj:`torch.Tensor`, :obj:`torch.Tensor`], shape (n_obs_of_patient,) Contains floats .. !! processed by numpydoc !! .. py:method:: get_covariates_patient(idx_patient) Get covariates for patient number ``idx_patient``. :Parameters: **idx_patient** : :obj:`int` The index of the patient (not its identifier) :Returns: :obj:`torch.Tensor`, shape (n_obs_of_patient,) Contains floats :Raises: :exc:`.ValueError` If the dataset has no covariates. .. !! processed by numpydoc !! .. py:method:: get_values_patient(i, *, adapt_for_model=None) Get values for patient number ``i``, with NaNs. :Parameters: **i** : :obj:`int` The index of the patient (not its identifier) **adapt_for_model** : None (default) or :class:`~leaspy.models.mcmc_saem_compatible.McmcSaemCompatibleModel` The values returned are suited for this model. In particular: * for a model with `noise_model='ordinal'`, one-hot-encoded values [P(X = l), l=0..ordinal_max_level] are returned * for a model with `noise_model='ordinal_ranking'`, survival function values [P(X > l), l=0..ordinal_max_level-1] are returned If None, the raw values are returned, whatever the model is. :Returns: :obj:`torch.Tensor`, shape (n_obs_of_patient, dimension [, extra_dimension_for_ordinal_models]) Contains floats or NaNs .. !! processed by numpydoc !! .. py:method:: to_pandas(apply_headers = False) Convert the dataset to a `DataFrame` with an ['ID', 'TIME'] index, containing all covariates, events and repeated measures if `apply_headers` is False, and only the repeated measures otherwise. :Parameters: **apply_headers** : :obj:`bool` Whether to select only the columns that are needed for a leaspy fit (the `headers` attribute) :Returns: :obj:`pandas.DataFrame` DataFrame with index ['ID', 'TIME'] and columns corresponding to the features, events and covariates. :Raises: :exc:`.LeaspyInputError` If the index of the DataFrame is not unique or contains invalid values. .. !! processed by numpydoc !! .. py:method:: move_to_device(device) Moves the dataset to the specified device. :Parameters: **device** : :obj:`torch.device` .. .. !! processed by numpydoc !! .. py:method:: get_one_hot_encoding(*, sf, ordinal_infos) Builds the one-hot encoding of ordinal data once and for all and returns it. :Parameters: **sf** : :obj:`bool` Whether the vector should be the survival function [1(X > l), l=0..max_level-1] instead of the probability density function [1(X=l), l=0..max_level] **ordinal_infos** : :class:`~leaspy.utils.typing.KwargsType` All the hyperparameters concerning ordinal modelling (in particular the maximum level per feature) :Returns: :obj:`torch.LongTensor` One-hot encoding of data values. :Raises: :exc:`.LeaspyInputError` If the values are not non-negative integers or if the features in `ordinal_infos` are not consistent with the dataset headers. .. !! processed by numpydoc !!
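To make the tensor layout described above concrete, here is a minimal sketch that wraps a `Data` object into a `Dataset` and inspects its padded tensors; it relies only on the constructor and accessors documented on this page, and the dataframe content is illustrative.

.. code-block:: python

    import pandas as pd
    import torch

    from leaspy.io.data import Data, Dataset

    df = pd.DataFrame(
        {
            "ID": ["patient_1", "patient_1", "patient_2"],
            "TIME": [70.1, 71.3, 65.0],
            "feature_1": [0.20, 0.35, 0.10],
        }
    )
    dataset = Dataset(Data.from_dataframe(df))

    # Padded tensors: one row per individual, n_visits_max slots per row.
    print(dataset.values.shape)  # (n_individuals, n_visits_max, dimension)
    print(dataset.mask.shape)    # same shape; 1 where a value is meaningful, 0 for padding

    # Per-patient accessors take the integer index of the patient, not its ID.
    ages = dataset.get_times_patient(0)
    values = dataset.get_values_patient(0)  # raw values, with NaNs

    # Back to a dataframe indexed by ['ID', 'TIME'], or onto another torch device.
    round_trip = dataset.to_pandas()
    dataset.move_to_device(torch.device("cpu"))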
.. py:class:: EventDataframeDataReader(*, event_time_name = 'EVENT_TIME', event_bool_name = 'EVENT_BOOL', nb_events = None) Bases: :py:obj:`leaspy.io.data.abstract_dataframe_data_reader.AbstractDataframeDataReader` Methods to convert :class:`pandas.DataFrame` to `Leaspy`-compliant data containers for event data only. :Parameters: **event_time_name: str** Name of the column in the dataframe that contains the time of the event **event_bool_name: str** Name of the column in the dataframe that indicates whether the event is censored or not :Raises: :exc:`.LeaspyDataInputError` .. .. !! processed by numpydoc !! .. py:attribute:: event_time_name :value: 'EVENT_TIME' .. py:attribute:: event_bool_name :value: 'EVENT_BOOL' .. py:attribute:: nb_events :value: None .. py:data:: DataframeDataReaderFactoryInput .. py:class:: DataframeDataReaderNames(*args, **kwds) Bases: :py:obj:`enum.Enum` Enumeration defining the possible names for dataframe data readers. .. !! processed by numpydoc !! .. py:attribute:: EVENT :value: 'event' .. py:attribute:: VISIT :value: 'visit' .. py:attribute:: JOINT :value: 'joint' .. py:attribute:: COVARIATE :value: 'covariate' .. py:method:: from_string(reader_name) :classmethod: Returns the enum member corresponding to the given string. :Parameters: **reader_name** : :obj:`str` The name of the reader, case-insensitive. :Returns: :class:`~leaspy.io.data.factory.DataframeDataReaderNames` The corresponding enum member. :Raises: :exc:`NotImplementedError` If the provided `reader_name` does not match any of the enum members. The error message lists the valid names. .. !! processed by numpydoc !! .. py:function:: dataframe_data_reader_factory(reader, **kwargs) Factory for dataframe data readers. :Parameters: **reader** : :obj:`str` or :class:`~leaspy.io.data.abstract_dataframe_data_reader.AbstractDataframeDataReader` or :obj:`dict` [ :obj:`str`, ...] - If an :class:`~leaspy.io.data.abstract_dataframe_data_reader.AbstractDataframeDataReader`, returns the instance. - If a string, then returns a new instance of the appropriate reader class (with optional parameters `kwargs`). - If a dictionary, it must contain the 'name' key and other initialization parameters. **\*\*kwargs** Optional parameters for initializing the requested reader when `reader` is a string. :Returns: :class:`~leaspy.io.data.abstract_dataframe_data_reader.AbstractDataframeDataReader` The desired dataframe data reader. :Raises: :exc:`.LeaspyModelInputError` If `reader` is not supported. .. !! processed by numpydoc !!
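A short sketch of how the factory is expected to be used, based only on its signature, the reader names listed in :class:`DataframeDataReaderNames` and the reader parameters documented on this page; the values passed are illustrative.

.. code-block:: python

    from leaspy.io.data import dataframe_data_reader_factory

    # A reader name ('visit', 'event', 'joint' or 'covariate')...
    visit_reader = dataframe_data_reader_factory("visit")

    # ...optionally with keyword arguments forwarded to the reader's constructor,
    # here the event-reader parameters documented above.
    event_reader = dataframe_data_reader_factory(
        "event",
        event_time_name="EVENT_TIME",
        event_bool_name="EVENT_BOOL",
        nb_events=1,
    )

    # Both objects are AbstractDataframeDataReader instances whose `read` method
    # converts a pandas.DataFrame into Leaspy-compliant containers.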
.. py:class:: IndividualData(idx) Container for an individual's data. :Parameters: **idx** : IDType Unique ID :Attributes: **idx** : :class:`~leaspy.utils.typing.IDType` Unique ID **timepoints** : :obj:`np.ndarray` [:obj:`float`] Timepoints associated with the observations, 1D array **observations** : :obj:`np.ndarray` [:obj:`float`] Observed data points, shape ``(n_timepoints, n_features)`` **cofactors** : :obj:`dict` [:class:`~leaspy.utils.typing.FeatureType`, :class:`~leaspy.utils.typing.Any`] Cofactors in the form {cofactor_name: cofactor_value} **event_time** : :obj:`float` Time of the event; if the event is censored, the time corresponds to the last patient observation **event_bool** : :obj:`bool` Boolean to indicate if an event is censored or not: 1 observed, 0 censored .. !! processed by numpydoc !! .. py:attribute:: idx :type: leaspy.utils.typing.IDType .. py:attribute:: timepoints :type: numpy.ndarray :value: None .. py:attribute:: observations :type: numpy.ndarray :value: None .. py:attribute:: event_time :type: Optional[numpy.ndarray] :value: None .. py:attribute:: event_bool :type: Optional[numpy.ndarray] :value: None .. py:attribute:: cofactors :type: dict[leaspy.utils.typing.FeatureType, Any] .. py:attribute:: covariates :type: Optional[numpy.ndarray] :value: None .. py:method:: add_observations(timepoints, observations) Include new observations and associated timepoints. :Parameters: **timepoints** : :obj:`array-like` [:obj:`float`] Timepoints associated with the observations to include, 1D array **observations** : :obj:`array-like` [:obj:`float`] Observations to include, 2D array :Raises: :exc:`.LeaspyDataInputError` .. .. !! processed by numpydoc !! .. py:method:: add_event(event_time, event_bool) Include event time and associated censoring bool. :Parameters: **event_time** : :obj:`float` Time of the event **event_bool** : :obj:`float` 0 if censored (not observed) and 1 if observed .. !! processed by numpydoc !! .. py:method:: add_covariates(covariates) Include covariates. :Parameters: **covariates** : :obj:`array-like` [:obj:`float`] Covariates to include, 2D array .. !! processed by numpydoc !! .. py:method:: add_cofactors(cofactors) Include new cofactors. :Parameters: **cofactors** : :obj:`dict` [:class:`~leaspy.utils.typing.FeatureType`, :class:`~leaspy.utils.typing.Any`] Cofactors to include, in the form `{name: value}` :Raises: :exc:`.LeaspyDataInputError` .. :exc:`.LeaspyTypeError` .. .. !! processed by numpydoc !! .. py:method:: to_frame(headers, event_time_name, event_bool_name, covariate_names) Convert the individual data to a pandas DataFrame. :Parameters: **headers** : :obj:`list` [:obj:`str`] List of feature names for the observations **event_time_name** : :obj:`str` Name of the column for the event time **event_bool_name** : :obj:`str` Name of the column for the event boolean (0 or 1) **covariate_names** : :obj:`list` [:obj:`str`] List of covariate names :Returns: :obj:`pd.DataFrame` DataFrame containing the individual's data with the following columns: * ID: Unique identifier for the individual * TIME: Timepoints associated with the observations * Observations: Observed data points for each feature * Event Time: Time of the event (if any) * Event Boolean: Boolean indicating if the event was observed (1) or censored (0) * Covariates: Values of the covariates for the individual .. !! processed by numpydoc !!
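The sketch below assembles an `IndividualData` record by hand with the methods documented above; the ID, values and column names are illustrative, and passing an empty list of covariate names to `to_frame` is an assumption made here because no covariates are added.

.. code-block:: python

    from leaspy.io.data import IndividualData

    individual = IndividualData("patient_1")

    # Two visits with two features each: 1D timepoints, 2D observations
    # of shape (n_timepoints, n_features).
    individual.add_observations(
        timepoints=[70.1, 71.3],
        observations=[[0.20, 0.40], [0.35, 0.50]],
    )

    # An observed (non-censored) event at age 72.4.
    individual.add_event(72.4, 1)

    # Constant, per-individual cofactors.
    individual.add_cofactors({"SITE": "A"})

    # Flatten to a per-individual dataframe.
    frame = individual.to_frame(
        headers=["feature_1", "feature_2"],
        event_time_name="EVENT_TIME",
        event_bool_name="EVENT_BOOL",
        covariate_names=[],  # assumption: no covariates were added above
    )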
.. py:class:: JointDataframeDataReader(*, event_time_name = 'EVENT_TIME', event_bool_name = 'EVENT_BOOL', nb_events = None) Bases: :py:obj:`leaspy.io.data.abstract_dataframe_data_reader.AbstractDataframeDataReader` Methods to convert :class:`pandas.DataFrame` to `Leaspy`-compliant data containers for event data and longitudinal data. :Parameters: **event_time_name: str** Name of the column in the dataframe that contains the time of the event **event_bool_name: str** Name of the column in the dataframe that indicates whether the event is censored or not :Raises: :exc:`.LeaspyDataInputError` .. .. !! processed by numpydoc !! .. py:attribute:: tol_diff :value: 0.001 .. py:attribute:: visit_reader .. py:attribute:: event_reader .. py:property:: event_time_name :type: str Name of the event time column in the dataset .. !! processed by numpydoc !! .. py:property:: event_bool_name :type: str Name of the event bool column in the dataset .. !! processed by numpydoc !! .. py:property:: dimension :type: Optional[int] Number of longitudinal outcomes in the dataset. .. !! processed by numpydoc !! .. py:property:: long_outcome_names :type: list[leaspy.utils.typing.FeatureType] Names of the longitudinal outcomes in the dataset .. !! processed by numpydoc !! .. py:property:: n_visits :type: int Number of visits in the dataset .. !! processed by numpydoc !! .. py:class:: VisitDataframeDataReader Bases: :py:obj:`leaspy.io.data.abstract_dataframe_data_reader.AbstractDataframeDataReader` Methods to convert :class:`pandas.DataFrame` to `Leaspy`-compliant data containers for longitudinal data only. :Raises: :exc:`.LeaspyDataInputError` .. !! processed by numpydoc !! .. py:property:: dimension :type: Optional[int] Number of longitudinal outcomes in the dataset. :Returns: :obj:`int` Number of longitudinal outcomes in the dataset .. !! processed by numpydoc !!
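Finally, a sketch of reading joint data (repeated visits plus a possibly censored event) through the `Data` entry point rather than instantiating a reader directly; the dataframe content and layout are illustrative, and the event column names are the defaults documented for the joint reader.

.. code-block:: python

    import pandas as pd

    from leaspy.io.data import Data

    # One row per visit; the event columns are assumed here to be repeated
    # on every row of a given individual.
    df = pd.DataFrame(
        {
            "ID": ["patient_1", "patient_1", "patient_2"],
            "TIME": [70.1, 71.3, 65.0],
            "feature_1": [0.20, 0.35, 0.10],
            "EVENT_TIME": [72.4, 72.4, 66.2],
            "EVENT_BOOL": [1, 1, 0],  # 1: observed event, 0: censored
        }
    )

    data = Data.from_dataframe(df, data_type="joint")

    # Longitudinal-only view, e.g. to reuse the repeated measures without the event.
    longitudinal_data = data.extract_longitudinal_only()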