
Disjoint-time-based config class

cesnet_tszoo.configs.disjoint_time_based_config.DisjointTimeBasedConfig

Bases: SeriesBasedHandler, TimeBasedHandler, DatasetConfig

This class is used for configuring the DisjointTimeBasedCesnetDataset.

Used to configure the following:

  • Train, validation, test, all sets (time period, sizes, features, window size)
  • Handling missing values (default values, fillers)
  • Handling anomalies (anomaly handlers)
  • Data transformation using transformers
  • Applying custom handlers (custom handlers)
  • Changing order of preprocesses
  • Dataloader options (train/val/test/all/init workers, batch sizes)
  • Plotting

Important Notes:

  • Custom fillers must inherit from the fillers base class.
  • Custom anomaly handlers must inherit from the anomaly handlers base class.
  • The selected anomaly handler is only used for the train set.
  • It is recommended to use the transformers base class, though this is not mandatory as long as it meets the required methods.
    • If a transformer is already initialized and partial_fit_initialized_transformers is False, the transformer does not require partial_fit.
    • Otherwise, the transformer must support partial_fit.
    • Transformers must implement transform method.
    • Both partial_fit and transform methods must accept an input of type np.ndarray with shape (times, features).
  • Custom handlers must be derived from one of the built-in custom handler classes.
  • train_time_period, val_time_period and test_time_period can overlap, but they should preserve the order train_time_period < val_time_period < test_time_period.
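The transformer interface described in the notes above can be sketched as follows. This is a minimal illustrative class, not part of the library; the name `MeanCenterTransformer` is hypothetical. It shows the two required methods, `partial_fit` and `transform`, each accepting a `np.ndarray` of shape `(times, features)`:

```python
import numpy as np


class MeanCenterTransformer:
    """Hypothetical transformer sketch: centers each feature by its running mean.

    Satisfies the interface described above: both partial_fit and
    transform accept a np.ndarray of shape (times, features).
    """

    def __init__(self):
        self._sum = None   # per-feature running sum
        self._count = 0    # number of times seen so far

    def partial_fit(self, data: np.ndarray) -> None:
        # Accumulate per-feature sums across batches of shape (times, features).
        if self._sum is None:
            self._sum = np.zeros(data.shape[1])
        self._sum += data.sum(axis=0)
        self._count += data.shape[0]

    def transform(self, data: np.ndarray) -> np.ndarray:
        # Subtract the per-feature mean learned so far.
        return data - self._sum / self._count
```

Because this class is already initialized when passed in, it would only need `partial_fit` if `partial_fit_initialized_transformers` is `True`.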
Source code in cesnet_tszoo/configs/disjoint_time_based_config.py
class DisjointTimeBasedConfig(SeriesBasedHandler, TimeBasedHandler, DatasetConfig):
    """
    This class is used for configuring the [`DisjointTimeBasedCesnetDataset`](reference_disjoint_time_based_cesnet_dataset.md#cesnet_tszoo.datasets.disjoint_time_based_cesnet_dataset.DisjointTimeBasedCesnetDataset).

    Used to configure the following:

    - Train, validation, test, all sets (time period, sizes, features, window size)
    - Handling missing values (default values, [`fillers`](reference_fillers.md#cesnet_tszoo.utils.filler.filler))
    - Handling anomalies ([`anomaly handlers`](reference_anomaly_handlers.md#cesnet_tszoo.utils.anomaly_handler.anomaly_handler))
    - Data transformation using [`transformers`](reference_transformers.md#cesnet_tszoo.utils.transformer.transformer)
    - Applying custom handlers ([`custom handlers`](reference_custom_handlers.md#cesnet_tszoo.utils.custom_handler.custom_handler))
    - Changing order of preprocesses
    - Dataloader options (train/val/test/all/init workers, batch sizes)
    - Plotting

    **Important Notes:**

    - Custom fillers must inherit from the [`fillers`](reference_fillers.md#cesnet_tszoo.utils.filler.filler.Filler) base class.
    - Custom anomaly handlers must inherit from the [`anomaly handlers`](reference_anomaly_handlers.md#cesnet_tszoo.utils.anomaly_handler.anomaly_handler.AnomalyHandler) base class.
    - The selected anomaly handler is only used for the train set.
    - It is recommended to use the [`transformers`](reference_transformers.md#cesnet_tszoo.utils.transformer.transformer.Transformer) base class, though this is not mandatory as long as it meets the required methods.
        - If a transformer is already initialized and `partial_fit_initialized_transformers` is `False`, the transformer does not require `partial_fit`.
        - Otherwise, the transformer must support `partial_fit`.
        - Transformers must implement `transform` method.
        - Both `partial_fit` and `transform` methods must accept an input of type `np.ndarray` with shape `(times, features)`.
    - Custom handlers must be derived from one of the built-in [`custom handler`](reference_custom_handlers.md#cesnet_tszoo.utils.custom_handler.custom_handler) classes.
    - `train_time_period`, `val_time_period` and `test_time_period` can overlap, but they should preserve the order `train_time_period` < `val_time_period` < `test_time_period`.

    Attributes:
        used_train_workers: Tracks the number of train workers in use. Helps determine if the train dataloader should be recreated based on worker changes.
        used_val_workers: Tracks the number of validation workers in use. Helps determine if the validation dataloader should be recreated based on worker changes.
        used_test_workers: Tracks the number of test workers in use. Helps determine if the test dataloader should be recreated based on worker changes.
        uses_all_time_period: Whether all time period set should be used.
        uses_all_ts: Whether all time series set should be used.
        import_identifier: Tracks the name of the config upon import. None if not imported.
        filler_factory: Represents factory used to create passed Filler type.
        anomaly_handler_factory: Represents factory used to create passed Anomaly Handler type.
        transformer_factory: Represents factory used to create passed Transformer type.
        can_fit_fillers: Whether fillers in this config can be fitted.
        logger: Logger for displaying information.     
        display_train_time_period: Used to display the configured value of `train_time_period`.
        display_val_time_period: Used to display the configured value of `val_time_period`.
        display_test_time_period: Used to display the configured value of `test_time_period`.
        train_ts_row_ranges: Initialized when `train_ts` is set. Contains time series IDs in train set with their respective time ID ranges.
        val_ts_row_ranges: Initialized when `val_ts` is set. Contains time series IDs in validation set with their respective time ID ranges.
        test_ts_row_ranges: Initialized when `test_ts` is set. Contains time series IDs in test set with their respective time ID ranges.        
        all_time_period: Contains total used time period.
        all_ts: Contains all used time series.
        all_ts_row_ranges: Contains time series IDs in all set with their respective time ID ranges.
        aggregation: The aggregation period used for the data.
        source_type: The source type of the data.
        database_name: Specifies which database this config applies to.
        features_to_take_without_ids: Features to be returned, excluding time or time series IDs.
        indices_of_features_to_take_no_ids: Indices of non-ID features in `features_to_take`.
        ts_id_name: Name of the time series ID, dependent on `source_type`.
        used_singular_train_time_series: Currently used singular train set time series for dataloader.
        used_singular_val_time_series: Currently used singular validation set time series for dataloader.
        used_singular_test_time_series: Currently used singular test set time series for dataloader.     
        train_preprocess_order: All preprocesses used for train set. 
        val_preprocess_order: All preprocesses used for val set. 
        test_preprocess_order: All preprocesses used for test set.      
        is_initialized: Flag indicating if the configuration has already been initialized. If true, config initialization will be skipped.  
        version: Version of cesnet-tszoo this config was made in.
        export_update_needed: Whether config was updated to newer version and should be exported.     
        train_ts: Defines which time series IDs are used in the training set. Can be a list of IDs, or an integer/float to specify a random selection. An `int` specifies the number of random time series, and a `float` specifies the proportion of available time series. 
                  `int` and `float` must be greater than 0, and a `float` must be less than or equal to 1.0. Using `int` or `float` guarantees that no time series from other sets will be used. Must be used with `train_time_period`.
        val_ts: Defines which time series IDs are used in the validation set. Same as `train_ts` but for the validation set. Must be used with `val_time_period`.
        test_ts: Defines which time series IDs are used in the test set. Same as `train_ts` but for the test set. Must be used with `test_time_period`.
        train_time_period: Defines the time period for the training set. Can be a range of time IDs or a tuple of datetime objects. A float value selects that percentage of available times, offset to start after the previously used set. Must be used with `train_ts`.
        val_time_period: Defines the time period for the validation set. Can be a range of time IDs or a tuple of datetime objects. A float value selects that percentage of available times, offset to start after the previously used set. Must be used with `val_ts`.
        test_time_period: Defines the time period for test set. Can be a range of time IDs or a tuple of datetime objects. Must be used with `test_ts`.
        features_to_take: Defines which features are used.           
        default_values: Default values for missing data, applied before fillers. Can set one value for all features or specify for each feature.
        sliding_window_size: Number of times in one window. Impacts dataloader behavior. Batch sizes affect how much data will be cached for creating windows.
        sliding_window_prediction_size: Number of times to predict following each sliding window. Impacts dataloader behavior. Batch sizes affect how much data will be cached for creating windows.
        sliding_window_step: Number of times to move by after each window.
        set_shared_size: How many times the time periods should share. Order of sharing is training set < validation set < test set. Only takes effect if sets share fewer values than `set_shared_size`. Use a float for a percentage of total times or an int for a count.
        train_batch_size: Batch size for the train dataloader. Affects number of returned times in one batch.
        val_batch_size: Batch size for the validation dataloader. Affects number of returned times in one batch.
        test_batch_size: Batch size for the test dataloader. Affects number of returned times in one batch.
        preprocess_order: Defines the order in which preprocesses are applied. A type of `AllSeriesCustomHandler` or `NoFitCustomHandler` can also be added to the order.
        partial_fit_initialized_transformers: If `True`, partial fitting on the train set is performed when using initialized transformers.
        include_time: If `True`, time data is included in the returned values.
        include_ts_id: If `True`, time series IDs are included in the returned values.
        time_format: Format for the returned time data. When using TimeFormat.DATETIME, time will be returned as a separate list alongside the rest of the values.
        train_workers: Number of workers for loading training data. `0` means that the data will be loaded in the main process.
        val_workers: Number of workers for loading validation data. `0` means that the data will be loaded in the main process.
        test_workers: Number of workers for loading test data. `0` means that the data will be loaded in the main process.
        init_workers: Number of workers for initial dataset processing during configuration. `0` means that the data will be loaded in the main process.
        nan_threshold: Maximum allowable percentage of missing data. Time series exceeding this threshold are excluded. Applied to `train/val/test/all` separately.
        random_state: Fixes randomness for reproducibility during configuration and dataset initialization.              
    """

    def __init__(self,
                 train_ts: list[int] | npt.NDArray[np.int_] | float | int | None,
                 val_ts: list[int] | npt.NDArray[np.int_] | float | int | None,
                 test_ts: list[int] | npt.NDArray[np.int_] | float | int | None,
                 train_time_period: tuple[datetime, datetime] | range | float | None = None,
                 val_time_period: tuple[datetime, datetime] | range | float | None = None,
                 test_time_period: tuple[datetime, datetime] | range | float | None = None,
                 features_to_take: list[str] | Literal["all"] = "all",
                 default_values: list[Number] | npt.NDArray[np.number] | dict[str, Number] | Number | Literal["default"] | None = "default",
                 sliding_window_size: int | None = None,
                 sliding_window_prediction_size: int | None = None,
                 sliding_window_step: int = 1,
                 set_shared_size: float | int = 0,
                 train_batch_size: int = 32,
                 val_batch_size: int = 64,
                 test_batch_size: int = 128,
                 preprocess_order: list[str | type] = ["handling_anomalies", "filling_gaps", "transforming"],
                 fill_missing_with: type | FillerType | Literal["mean_filler", "forward_filler", "linear_interpolation_filler"] | None = None,
                 transform_with: type | list[Transformer] | np.ndarray[Transformer] | TransformerType | Transformer | Literal["min_max_scaler", "standard_scaler", "max_abs_scaler", "log_transformer", "l2_normalizer"] | None = None,
                 handle_anomalies_with: type | AnomalyHandlerType | Literal["z-score", "interquartile_range"] | None = None,
                 partial_fit_initialized_transformer: bool = False,
                 include_time: bool = True,
                 include_ts_id: bool = True,
                 time_format: TimeFormat | Literal["id_time", "datetime", "unix_time", "shifted_unix_time"] = TimeFormat.ID_TIME,
                 train_workers: int = 4,
                 val_workers: int = 3,
                 test_workers: int = 2,
                 init_workers: int = 4,
                 nan_threshold: float = 1.0,
                 random_state: int | None = None):
        """
        Parameters:
            train_ts: Defines which time series IDs are used in the training set. Can be a list of IDs, or an integer/float to specify a random selection. An `int` specifies the number of random time series, and a `float` specifies the proportion of available time series. 
                    `int` and `float` must be greater than 0, and a `float` must be less than or equal to 1.0. Using `int` or `float` guarantees that no time series from other sets will be used. Must be used with `train_time_period`.
            val_ts: Defines which time series IDs are used in the validation set. Same as `train_ts` but for the validation set. Must be used with `val_time_period`.
            test_ts: Defines which time series IDs are used in the test set. Same as `train_ts` but for the test set. Must be used with `test_time_period`.
            train_time_period: Defines the time period for the training set. Can be a range of time IDs or a tuple of datetime objects. A float value selects that percentage of available times, offset to start after the previously used set. Must be used with `train_ts`. `Default: None`
            val_time_period: Defines the time period for the validation set. Can be a range of time IDs or a tuple of datetime objects. A float value selects that percentage of available times, offset to start after the previously used set. Must be used with `val_ts`. `Default: None`
            test_time_period: Defines the time period for test set. Can be a range of time IDs or a tuple of datetime objects. Must be used with `test_ts`. `Default: None`
            features_to_take: Defines which features are used. `Default: "all"`                  
            default_values: Default values for missing data, applied before fillers. Can set one value for all features or specify for each feature. `Default: "default"`
            sliding_window_size: Number of times in one window. Impacts dataloader behavior. Batch sizes affect how much data will be cached for creating windows. `Default: None`
            sliding_window_prediction_size: Number of times to predict following each sliding window. Impacts dataloader behavior. Batch sizes affect how much data will be cached for creating windows. `Default: None`
            sliding_window_step: Number of times to move by after each window. `Default: 1`
            set_shared_size: How many times the time periods should share. Order of sharing is training set < validation set < test set. Only takes effect if sets share fewer values than `set_shared_size`. Use a float for a percentage of total times or an int for a count. `Default: 0`
            train_batch_size: Batch size for the train dataloader. Affects number of returned times in one batch. `Default: 32`
            val_batch_size: Batch size for the validation dataloader. Affects number of returned times in one batch. `Default: 64`
            test_batch_size: Batch size for the test dataloader. Affects number of returned times in one batch. `Default: 128`
            preprocess_order: Defines the order in which preprocesses are applied. A type of `AllSeriesCustomHandler` or `NoFitCustomHandler` can also be added to the order. `Default: ["handling_anomalies", "filling_gaps", "transforming"]`
            fill_missing_with: Defines how to fill missing values in the dataset. Can pass enum `FillerType` for built-in filler or pass a type of custom filler that must derive from `Filler` base class. `Default: None`        
            transform_with: Defines the transformer used to transform the dataset. Can pass enum `TransformerType`, pass a type of custom transformer or instance of already fitted transformer(s). `Default: None`
            handle_anomalies_with: Defines the anomaly handler for handling anomalies in the train set. Can pass enum `AnomalyHandlerType` for built-in anomaly handler or a type of custom anomaly handler. `Default: None`
            partial_fit_initialized_transformer: If `True`, partial fitting on the train set is performed when using initialized transformers. `Default: False`
            include_time: If `True`, time data is included in the returned values. `Default: True`
            include_ts_id: If `True`, time series IDs are included in the returned values. `Default: True`
            time_format: Format for the returned time data. When using TimeFormat.DATETIME, time will be returned as a separate list alongside the rest of the values. `Default: TimeFormat.ID_TIME`
            train_workers: Number of workers for loading training data. `0` means that the data will be loaded in the main process. `Default: 4`
            val_workers: Number of workers for loading validation data. `0` means that the data will be loaded in the main process. `Default: 3`
            test_workers: Number of workers for loading test data. `0` means that the data will be loaded in the main process. `Default: 2`
            init_workers: Number of workers for initial dataset processing during configuration. `0` means that the data will be loaded in the main process. `Default: 4`
            nan_threshold: Maximum allowable percentage of missing data. Time series exceeding this threshold are excluded. Applied to `train/val/test/all` separately. `Default: 1.0`
            random_state: Fixes randomness for reproducibility during configuration and dataset initialization. `Default: None`   
        """

        self.logger = logging.getLogger("disjoint_time_based_config")

        TimeBasedHandler.__init__(self, self.logger, train_batch_size, val_batch_size, test_batch_size, 1, False, sliding_window_size, sliding_window_prediction_size, sliding_window_step, set_shared_size, train_time_period, val_time_period, test_time_period)
        SeriesBasedHandler.__init__(self, self.logger, True, train_ts, val_ts, test_ts)
        DatasetConfig.__init__(self, features_to_take, default_values, train_batch_size, val_batch_size, test_batch_size, 1, preprocess_order, fill_missing_with, transform_with, handle_anomalies_with, partial_fit_initialized_transformer, include_time, include_ts_id, time_format,
                               train_workers, val_workers, test_workers, 1, init_workers, nan_threshold, False, DatasetType.DISJOINT_TIME_BASED, DataloaderOrder.SEQUENTIAL, random_state, False, self.logger)

    def _validate_construction(self) -> None:
        """Performs basic parameter validation to ensure correct configuration. More comprehensive validation, which requires dataset-specific data, is handled in [`_dataset_init`][cesnet_tszoo.configs.disjoint_time_based_config.DisjointTimeBasedConfig._dataset_init]. """

        DatasetConfig._validate_construction(self)

        if self.train_ts is None or self.train_time_period is None:
            if self.train_ts is not None:
                self.logger.error("When train_ts is not None you must set train_time_period or set train_ts as None.")
                raise ValueError("When train_ts is not None you must set train_time_period or set train_ts as None.")
            if self.train_time_period is not None:
                self.logger.error("When train_time_period is not None you must set train_ts or set train_time_period as None.")
                raise ValueError("When train_time_period is not None you must set train_ts or set train_time_period as None.")

        if self.val_ts is None or self.val_time_period is None:
            if self.val_ts is not None:
                self.logger.error("When val_ts is not None you must set val_time_period or set val_ts as None.")
                raise ValueError("When val_ts is not None you must set val_time_period or set val_ts as None.")
            if self.val_time_period is not None:
                self.logger.error("When val_time_period is not None you must set val_ts or set val_time_period as None.")
                raise ValueError("When val_time_period is not None you must set val_ts or set val_time_period as None.")

        if self.test_ts is None or self.test_time_period is None:
            if self.test_ts is not None:
                self.logger.error("When test_ts is not None you must set test_time_period or set test_ts as None.")
                raise ValueError("When test_ts is not None you must set test_time_period or set test_ts as None.")
            if self.test_time_period is not None:
                self.logger.error("When test_time_period is not None you must set test_ts or set test_time_period as None.")
                raise ValueError("When test_time_period is not None you must set test_ts or set test_time_period as None.")

        if self.train_ts is None and self.val_ts is None and self.test_ts is None:
            self.logger.error("No set for time series has been set. You must set at least one time series set and its respective time period.")
            raise ValueError("No set for time series has been set. You must set at least one time series set and its respective time period.")

        self._validate_time_periods_init()
        self._validate_ts_init()
        self._validate_set_shared_size_init()
        self._validate_sliding_window_init()
        self._update_batch_sizes(self.train_batch_size, self.val_batch_size, self.test_batch_size, self.all_batch_size)

        self.logger.debug("Disjoint-time-based configuration validated successfully.")

    def _update_batch_sizes(self, train_batch_size: int, val_batch_size: int, test_batch_size: int, all_batch_size: int) -> None:

        # Adjust batch sizes based on sliding_window_size
        if self.sliding_window_size is not None:

            if self.sliding_window_step <= 0:
                raise ValueError("sliding_window_step must be greater or equal to 1.")

            total_window_size = self.sliding_window_size + self.sliding_window_prediction_size

            if isinstance(self.train_batch_size, int) and total_window_size > self.train_batch_size:
                train_batch_size = self.sliding_window_size + self.sliding_window_prediction_size
                self.logger.info("train_batch_size adjusted to %s as it should be greater than or equal to sliding_window_size + sliding_window_prediction_size.", total_window_size)
            if isinstance(self.val_batch_size, int) and total_window_size > self.val_batch_size:
                val_batch_size = self.sliding_window_size + self.sliding_window_prediction_size
                self.logger.info("val_batch_size adjusted to %s as it should be greater than or equal to sliding_window_size + sliding_window_prediction_size.", total_window_size)
            if isinstance(self.test_batch_size, int) and total_window_size > self.test_batch_size:
                test_batch_size = self.sliding_window_size + self.sliding_window_prediction_size
                self.logger.info("test_batch_size adjusted to %s as it should be greater than or equal to sliding_window_size + sliding_window_prediction_size.", total_window_size)

        DatasetConfig._update_batch_sizes(self, train_batch_size, val_batch_size, test_batch_size, all_batch_size)

    def _update_sliding_window(self, sliding_window_size: int | None, sliding_window_prediction_size: int | None, sliding_window_step: int | None, set_shared_size: float | int, all_time_ids: np.ndarray):
        """Updates values related to sliding window. """
        TimeBasedHandler._update_sliding_window(self, sliding_window_size, sliding_window_prediction_size, sliding_window_step, set_shared_size, all_time_ids, self.has_train(), self.has_val(), self.has_test(), self.has_all())

    def _get_train(self) -> tuple[np.ndarray, np.ndarray] | tuple[None, None]:
        """Returns the indices corresponding to the training set. """
        return self.train_ts, self.train_time_period

    def _get_val(self) -> tuple[np.ndarray, np.ndarray] | tuple[None, None]:
        """Returns the indices corresponding to the validation set. """
        return self.val_ts, self.val_time_period

    def _get_test(self) -> tuple[np.ndarray, np.ndarray] | tuple[None, None]:
        """Returns the indices corresponding to the test set. """
        return self.test_ts, self.test_time_period

    def _get_all(self) -> tuple[np.ndarray, np.ndarray] | tuple[None, None]:
        """Returns the indices corresponding to the all set. """
        return None, None

    def has_train(self) -> bool:
        """Returns whether training set is used. """
        return self.train_ts is not None and self.train_time_period is not None

    def has_val(self) -> bool:
        """Returns whether validation set is used. """
        return self.val_ts is not None and self.val_time_period is not None

    def has_test(self) -> bool:
        """Returns whether test set is used. """
        return self.test_ts is not None and self.test_time_period is not None

    def has_all(self) -> bool:
        """Returns whether all set is used. """
        return False

    def _set_time_period(self, all_time_ids: np.ndarray) -> None:
        """Validates and filters `train_time_period`, `val_time_period`, `test_time_period` and `all_time_period` based on `dataset` and `aggregation`. """

        self._prepare_and_set_time_period_sets(all_time_ids, self.time_format)

    def _set_ts(self, all_ts_ids: np.ndarray, all_ts_row_ranges: np.ndarray, rd: np.random.RandomState) -> None:
        """ Validates and filters inputted time series id from `train_ts`, `val_ts` and `test_ts` based on `dataset` and `source_type`. Handles random set."""

        self._prepare_and_set_ts_sets(all_ts_ids, all_ts_row_ranges, self.ts_id_name, self.random_state, rd)

    def _get_feature_transformers(self) -> Transformer:
        """Creates transformer with `transformer_factory`. """

        if self.transformer_factory.has_already_initialized:
            if not self.has_train() and self.partial_fit_initialized_transformers:
                self.partial_fit_initialized_transformers = False
                self.logger.warning("partial_fit_initialized_transformers will be ignored because train set is not used.")

            transformers = self.transformer_factory.get_already_initialized_transformers()
            self.logger.debug("Using already initialized transformer %s.", self.transformer_factory.name)

        else:
            if not self.has_train() and not self.transformer_factory.is_empty_factory:
                self.transformer_factory = transformer_factories.get_transformer_factory(None, self.create_transformer_per_time_series, self.partial_fit_initialized_transformers)
                self.logger.warning("No transformer will be used because train set is not used.")

            transformers = self.transformer_factory.create_transformer()
            self.logger.debug("Using transformer %s.", self.transformer_factory.name)

        return transformers

    def _get_fillers(self) -> tuple:
        """Creates fillers with `filler_factory`. """

        train_fillers = None
        # Set the fillers for the training set
        if self.has_train():
            train_fillers = np.array([self.filler_factory.create_filler(self.features_to_take_without_ids) for _ in self.train_ts])
            self.logger.debug("Fillers for training set are set.")

        val_fillers = None
        # Set the fillers for the validation set
        if self.has_val():
            val_fillers = np.array([self.filler_factory.create_filler(self.features_to_take_without_ids) for _ in self.val_ts])
            self.logger.debug("Fillers for validation set are set.")

        test_fillers = None
        # Set the fillers for the test set
        if self.has_test():
            test_fillers = np.array([self.filler_factory.create_filler(self.features_to_take_without_ids) for _ in self.test_ts])
            self.logger.debug("Fillers for test set are set.")

        self.logger.debug("Using filler %s", self.filler_factory.name)

        return train_fillers, val_fillers, test_fillers, None

    def _get_anomaly_handlers(self) -> np.ndarray:
        """Creates anomaly handlers with `anomaly_handler_factory`. """

        if not self.has_train() and not self.anomaly_handler_factory.is_empty_factory:
            self.anomaly_handler_factory = anomaly_handler_factories.get_anomaly_handler_factory(None)
            self.logger.warning("No anomaly handler will be used because train set is not used.")

        anomaly_handlers = None
        if self.has_train():
            anomaly_handlers = np.array([self.anomaly_handler_factory.create_anomaly_handler() for _ in self.train_ts])

        self.logger.debug("Using anomaly handler %s", self.anomaly_handler_factory.name)

        return anomaly_handlers

    def _set_per_series_custom_handler(self, factory: PerSeriesCustomHandlerFactory):
        raise ValueError(f"Cannot use {factory.name} CustomHandler, because PerSeriesCustomHandler is not supported for {self.dataset_type}. Use AllSeriesCustomHandler or NoFitCustomHandler instead.")

    def _set_no_fit_custom_handler(self, factory: NoFitCustomHandlerFactory):

        train_handlers = np.array([factory.create_handler() for _ in self.train_ts]) if self.has_train() else None
        self.train_preprocess_order.append(PreprocessNote(factory.preprocess_enum_type, False, False, factory.can_apply_to_train and self.has_train(), True, NoFitCustomHandlerHolder(train_handlers)))

        val_handlers = np.array([factory.create_handler() for _ in self.val_ts]) if self.has_val() else None
        self.val_preprocess_order.append(PreprocessNote(factory.preprocess_enum_type, False, False, factory.can_apply_to_val and self.has_val(), True, NoFitCustomHandlerHolder(val_handlers)))

        test_handlers = np.array([factory.create_handler() for _ in self.test_ts]) if self.has_test() else None
        self.test_preprocess_order.append(PreprocessNote(factory.preprocess_enum_type, False, False, factory.can_apply_to_test and self.has_test(), True, NoFitCustomHandlerHolder(test_handlers)))

        all_handlers = np.array([factory.create_handler() for _ in self.all_ts]) if self.has_all() else None
        self.all_preprocess_order.append(PreprocessNote(factory.preprocess_enum_type, False, False, factory.can_apply_to_all and self.has_all(), True, NoFitCustomHandlerHolder(all_handlers)))

    def _validate_finalization(self) -> None:
        """ Performs final validation of the configuration. Validates whether `train/val/test` are continuos."""

        self._validate_time_periods_overlap()
        self._validate_ts_overlap()

    def _get_summary_filter_time_series(self) -> css_utils.SummaryDiagramStep:
        attributes = [css_utils.StepAttribute("Train time series IDs", get_abbreviated_list_string(self.train_ts)),
                      css_utils.StepAttribute("Val time series IDs", get_abbreviated_list_string(self.val_ts)),
                      css_utils.StepAttribute("Test time series IDs", get_abbreviated_list_string(self.test_ts)),
                      css_utils.StepAttribute("Train time periods", self.display_train_time_period),
                      css_utils.StepAttribute("Val time periods", self.display_val_time_period),
                      css_utils.StepAttribute("Test time periods", self.display_test_time_period),
                      css_utils.StepAttribute("Nan threshold", self.nan_threshold)]

        return css_utils.SummaryDiagramStep("Filter time series", attributes)

    def _get_summary_loader(self) -> list[css_utils.SummaryDiagramStep]:

        steps = []

        if self.sliding_window_size is not None:
            attributes = [
                css_utils.StepAttribute("Window size", self.sliding_window_size),
                css_utils.StepAttribute("Prediction size", self.sliding_window_prediction_size),
                css_utils.StepAttribute("Step", self.sliding_window_step)
            ]

            steps.append(css_utils.SummaryDiagramStep("Apply sliding window", attributes))

        attributes = [css_utils.StepAttribute("Train batch size", self.train_batch_size),
                      css_utils.StepAttribute("Val batch size", self.val_batch_size),
                      css_utils.StepAttribute("Test batch size", self.test_batch_size)]

        steps.append(css_utils.SummaryDiagramStep("Transform into specific format", attributes))

        return steps

    def __str__(self) -> str:

        if self.transformer_factory.is_empty_factory:
            transformer_part = f"Transformer type: {self.transformer_factory.name}"
        else:
            transformer_part = f'''Transformer type: {self.transformer_factory.name}
        Are transformers premade: {self.transformer_factory.has_already_initialized}
        Are premade transformers partial_fitted: {self.partial_fit_initialized_transformers}'''

        if self.include_time:
            time_part = f'''Time included: {str(self.include_time)}    
        Time format: {str(self.time_format)}'''
        else:
            time_part = f"Time included: {str(self.include_time)}"

        return f'''
Config Details
    Used for database: {self.database_name}
    Aggregation: {str(self.aggregation)}
    Source: {str(self.source_type)}

    Time series
        Train time series IDs: {get_abbreviated_list_string(self.train_ts)}
        Val time series IDs: {get_abbreviated_list_string(self.val_ts)}
        Test time series IDs: {get_abbreviated_list_string(self.test_ts)}
    Time periods
        Train time periods: {str(self.display_train_time_period)}
        Val time periods: {str(self.display_val_time_period)}
        Test time periods: {str(self.display_test_time_period)}
    Features
        Taken features: {str(self.features_to_take_without_ids)}
        Default values: {self.default_values}
        Time series ID included: {str(self.include_ts_id)}
        {time_part}
    Sliding window
        Sliding window size: {self.sliding_window_size}
        Sliding window prediction size: {self.sliding_window_prediction_size}
        Sliding window step size: {self.sliding_window_step}
    Fillers
        Filler type: {self.filler_factory.name}
    Transformers
        {transformer_part}
    Anomaly handler
        Anomaly handler type (train set): {self.anomaly_handler_factory.name}
    Batch sizes
        Train batch size: {self.train_batch_size}
        Val batch size: {self.val_batch_size}
        Test batch size: {self.test_batch_size}
    Default workers
        Init worker count: {str(self.init_workers)}
        Train worker count: {str(self.train_workers)}
        Val worker count: {str(self.val_workers)}
        Test worker count: {str(self.test_workers)}
    Other
        Preprocess order: {normalize_display_list(self.preprocess_order)}
        Nan threshold: {str(self.nan_threshold)}
        Random state: {self.random_state}
        Version: {self.version}
                '''

Configuration options

Parameters:

Name Type Description Default
train_ts list[int] | NDArray[int_] | float | int | None

Defines which time series IDs are used in the training set. Can be a list of IDs, or an integer/float to specify a random selection. An int specifies the number of random time series, and a float specifies the proportion of available time series. Both must be greater than 0, and a float must be less than or equal to 1.0. Using an int or float guarantees that no time series from other sets will be used. Must be used with train_time_period.

required
val_ts list[int] | NDArray[int_] | float | int | None

Defines which time series IDs are used in the validation set. Same as train_ts but for the validation set. Must be used with val_time_period.

required
test_ts list[int] | NDArray[int_] | float | int | None

Defines which time series IDs are used in the test set. Same as train_ts but for the test set. Must be used with test_time_period.

required
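The list/int/float convention for `train_ts`, `val_ts`, and `test_ts` can be sketched with a small, self-contained function. This is purely illustrative (`resolve_ts_selection` is a hypothetical helper, not part of cesnet-tszoo, and the library's actual disjointness guarantee between sets is omitted here):

```python
import numpy as np

def resolve_ts_selection(selection, available_ids, rng):
    """Illustrative sketch of the selection convention:
    a list is taken as explicit time series IDs, an int as a count
    of randomly drawn series, and a float in (0, 1] as a proportion
    of the available series."""
    if isinstance(selection, float):
        count = int(len(available_ids) * selection)
        return rng.choice(available_ids, size=count, replace=False)
    if isinstance(selection, int):
        return rng.choice(available_ids, size=selection, replace=False)
    return np.asarray(selection)

rng = np.random.default_rng(42)
available = np.arange(100)

explicit = resolve_ts_selection([3, 7, 11], available, rng)  # exactly these IDs
count = resolve_ts_selection(10, available, rng)             # 10 random series
fraction = resolve_ts_selection(0.25, available, rng)        # 25 % of 100 series
```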
train_time_period tuple[datetime, datetime] | range | float | None

Defines the time period for the training set. Can be a range of time IDs or a tuple of datetime objects. A float is interpreted as a percentage of the available times, with the period starting where the previous set's period ends. Must be used with train_ts. Default: None

None
val_time_period tuple[datetime, datetime] | range | float | None

Defines the time period for the validation set. Can be a range of time IDs or a tuple of datetime objects. A float is interpreted as a percentage of the available times, with the period starting where the previous set's period ends. Must be used with val_ts. Default: None

None
test_time_period tuple[datetime, datetime] | range | float | None

Defines the time period for test set. Can be a range of time IDs or a tuple of datetime objects. Must be used with test_ts. Default: None

None
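The float convention for the time-period parameters — each fraction covers that share of the available times and starts where the previous set's period ends — can be illustrated with a minimal sketch (`float_periods_to_ranges` is a hypothetical helper, not a cesnet-tszoo function, and it ignores `set_shared_size`):

```python
def float_periods_to_ranges(total_times, fractions):
    """Convert consecutive float fractions into non-overlapping
    ranges of time IDs, each starting where the previous one ended."""
    ranges, start = [], 0
    for frac in fractions:
        end = start + int(total_times * frac)
        ranges.append(range(start, end))
        start = end
    return ranges

# With 1000 available times: train 50 %, val 25 %, test 25 %
train, val, test = float_periods_to_ranges(1000, [0.5, 0.25, 0.25])
# train = range(0, 500), val = range(500, 750), test = range(750, 1000)
```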
features_to_take list[str] | Literal['all']

Defines which features are used. Default: "all"

'all'
default_values list[Number] | NDArray[number] | dict[str, Number] | Number | Literal['default'] | None

Default values for missing data, applied before fillers. Can set one value for all features or specify for each feature. Default: "default"

'default'
sliding_window_size int | None

Number of times in one window. Impacts dataloader behavior. Batch sizes affect how much data is cached for creating windows. Default: None

None
sliding_window_prediction_size int | None

Number of times to predict after each window of sliding_window_size times. Impacts dataloader behavior. Batch sizes affect how much data is cached for creating windows. Default: None

None
sliding_window_step int

Number of times to move by after each window. Default: 1

1
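How `sliding_window_size`, `sliding_window_prediction_size`, and `sliding_window_step` interact can be shown with a self-contained sketch that is independent of cesnet-tszoo (`sliding_windows` is a hypothetical helper, not the library's dataloader logic):

```python
import numpy as np

def sliding_windows(data, window_size, prediction_size, step=1):
    """Yield (input_window, prediction_window) pairs over the time axis.
    Each window covers `window_size` times, the following
    `prediction_size` times form the prediction target, and the window
    start advances by `step` times after each yield."""
    total = window_size + prediction_size
    for start in range(0, len(data) - total + 1, step):
        yield (data[start:start + window_size],
               data[start + window_size:start + total])

data = np.arange(10).reshape(10, 1)  # 10 times, 1 feature
pairs = list(sliding_windows(data, window_size=4, prediction_size=2, step=3))
# Windows start at times 0 and 3:
# inputs cover times [0..3] and [3..6], targets [4, 5] and [7, 8]
```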
set_shared_size float | int

How many times (time IDs) the time periods should share. The order of sharing is training set < validation set < test set. Only takes effect if the sets share fewer times than set_shared_size. Use a float for a percentage of the total times or an int for a count. Default: 0

0
train_batch_size int

Batch size for the train dataloader. Affects number of returned times in one batch. Default: 32

32
val_batch_size int

Batch size for the validation dataloader. Affects number of returned times in one batch. Default: 64

64
test_batch_size int

Batch size for the test dataloader. Affects number of returned times in one batch. Default: 128

128
preprocess_order list[str, type]

Defines the order in which preprocesses are applied. A type of AllSeriesCustomHandler or NoFitCustomHandler can also be added to the order. Default: ["handling_anomalies", "filling_gaps", "transforming"]

['handling_anomalies', 'filling_gaps', 'transforming']
fill_missing_with type | FillerType | Literal['mean_filler', 'forward_filler', 'linear_interpolation_filler'] | None

Defines how to fill missing values in the dataset. Can pass enum FillerType for built-in filler or pass a type of custom filler that must derive from Filler base class. Default: None

None
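To make the filler concept concrete, here is an illustrative sketch of what a forward filler conceptually does — each NaN is replaced with the most recent non-NaN value of the same feature. This is not the library's `forward_filler` implementation (a real custom filler must derive from the Filler base class); it only shows the behavior on a `(times, features)` array:

```python
import numpy as np

def forward_fill(x):
    """Replace each NaN with the most recent non-NaN value of the
    same feature column. Input shape is (times, features)."""
    out = x.astype(float).copy()
    for col in range(out.shape[1]):
        last = np.nan
        for t in range(out.shape[0]):
            if np.isnan(out[t, col]):
                out[t, col] = last  # leading NaNs stay NaN
            else:
                last = out[t, col]
    return out

data = np.array([[1.0], [np.nan], [np.nan], [4.0], [np.nan]])
filled = forward_fill(data)  # [[1.], [1.], [1.], [4.], [4.]]
```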
transform_with type | list[Transformer] | ndarray[Transformer] | TransformerType | Transformer | Literal['min_max_scaler', 'standard_scaler', 'max_abs_scaler', 'log_transformer', 'l2_normalizer'] | None

Defines the transformer used to transform the dataset. Can pass enum TransformerType, pass a type of custom transformer or instance of already fitted transformer(s). Default: None

None
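A minimal custom transformer satisfying the contract described above — `partial_fit` and `transform` both accepting an `np.ndarray` of shape `(times, features)` — might look like the following. `RunningMinMax` is a hypothetical example for illustrating the interface, not the built-in `min_max_scaler`:

```python
import numpy as np

class RunningMinMax:
    """Min-max scaling with incrementally accumulated statistics.
    Implements the partial_fit/transform contract on (times, features)."""

    def __init__(self):
        self.min_ = None
        self.max_ = None

    def partial_fit(self, x: np.ndarray) -> None:
        # Update per-feature minima/maxima with each batch of times.
        batch_min, batch_max = x.min(axis=0), x.max(axis=0)
        self.min_ = batch_min if self.min_ is None else np.minimum(self.min_, batch_min)
        self.max_ = batch_max if self.max_ is None else np.maximum(self.max_, batch_max)

    def transform(self, x: np.ndarray) -> np.ndarray:
        # Guard against zero span for constant features.
        span = np.where(self.max_ > self.min_, self.max_ - self.min_, 1.0)
        return (x - self.min_) / span
```

Since this transformer is not pre-initialized, it would be fitted on the train set via `partial_fit`; an already-fitted instance could instead be passed directly, with `partial_fit_initialized_transformers` controlling whether it is refitted.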
handle_anomalies_with type | AnomalyHandlerType | Literal['z-score', 'interquartile_range'] | None

Defines the anomaly handler for handling anomalies in the train set. Can pass enum AnomalyHandlerType for built-in anomaly handler or a type of custom anomaly handler. Default: None

None
partial_fit_initialized_transformer bool

If True, partial fitting on the train set is performed when using initialized transformers. Default: False

False
include_time bool

If True, time data is included in the returned values. Default: True

True
include_ts_id bool

If True, time series IDs are included in the returned values. Default: True

True
time_format TimeFormat | Literal['id_time', 'datetime', 'unix_time', 'shifted_unix_time']

Format for the returned time data. When using TimeFormat.DATETIME, time will be returned as a separate list alongside the rest of the values. Default: TimeFormat.ID_TIME

ID_TIME
train_workers int

Number of workers for loading training data. 0 means that the data will be loaded in the main process. Default: 4

4
val_workers int

Number of workers for loading validation data. 0 means that the data will be loaded in the main process. Default: 3

3
test_workers int

Number of workers for loading test data. 0 means that the data will be loaded in the main process. Default: 2

2
init_workers int

Number of workers for initial dataset processing during configuration. 0 means that the data will be loaded in the main process. Default: 4

4
nan_threshold float

Maximum allowable percentage of missing data. Time series exceeding this threshold are excluded. Applied to the train/val/test/all sets separately. Default: 1.0

1.0
random_state int | None

Fixes randomness for reproducibility during configuration and dataset initialization. Default: None

None

Config attributes

Attributes:

Name Type Description
used_train_workers Optional[int]

Tracks the number of train workers in use. Helps determine if the train dataloader should be recreated based on worker changes.

used_val_workers Optional[int]

Tracks the number of validation workers in use. Helps determine if the validation dataloader should be recreated based on worker changes.

used_test_workers Optional[int]

Tracks the number of test workers in use. Helps determine if the test dataloader should be recreated based on worker changes.

uses_all_time_period bool

Whether the time period of the all set should be used.

uses_all_ts bool

Whether the time series of the all set should be used.

import_identifier Optional[str]

Tracks the name of the config upon import. None if not imported.

filler_factory FillerFactory

Factory used to create the passed Filler type.

anomaly_handler_factory AnomalyHandlerFactory

Factory used to create the passed Anomaly Handler type.

transformer_factory TransformerFactory

Factory used to create the passed Transformer type.

can_fit_fillers bool

Whether fillers in this config can be fitted.

logger

Logger for displaying information.

display_train_time_period Optional[range]

Used to display the configured value of train_time_period.

display_val_time_period Optional[range]

Used to display the configured value of val_time_period.

display_test_time_period Optional[range]

Used to display the configured value of test_time_period.

train_ts_row_ranges Optional[ndarray]

Initialized when train_ts is set. Contains time series IDs in train set with their respective time ID ranges.

val_ts_row_ranges Optional[ndarray]

Initialized when val_ts is set. Contains time series IDs in validation set with their respective time ID ranges.

test_ts_row_ranges Optional[ndarray]

Initialized when test_ts is set. Contains time series IDs in test set with their respective time ID ranges.

all_time_period Optional[ndarray]

Contains the total time period used.

all_ts Optional[ndarray]

Contains all used time series.

all_ts_row_ranges Optional[ndarray]

Contains time series IDs in all set with their respective time ID ranges.

aggregation Optional[AgreggationType]

The aggregation period used for the data.

source_type Optional[SourceType]

The source type of the data.

database_name Optional[str]

Specifies which database this config applies to.

features_to_take_without_ids Optional[ndarray]

Features to be returned, excluding time or time series IDs.

indices_of_features_to_take_no_ids Optional[ndarray]

Indices of non-ID features in features_to_take.

ts_id_name Optional[str]

Name of the time series ID, dependent on source_type.

used_singular_train_time_series Optional[int]

The single train set time series currently used by the dataloader.

used_singular_val_time_series Optional[int]

The single validation set time series currently used by the dataloader.

used_singular_test_time_series Optional[int]

The single test set time series currently used by the dataloader.

train_preprocess_order list[PreprocessNote]

All preprocesses used for train set.

val_preprocess_order list[PreprocessNote]

All preprocesses used for val set.

test_preprocess_order list[PreprocessNote]

All preprocesses used for test set.

is_initialized bool

Flag indicating if the configuration has already been initialized. If true, config initialization will be skipped.

version str

Version of cesnet-tszoo this config was made in.

export_update_needed bool

Whether the config was updated to a newer version and should be exported again.

train_ts Optional[ndarray]

Defines which time series IDs are used in the training set. Can be a list of IDs, or an integer/float to specify a random selection. An int specifies the number of random time series, and a float specifies the proportion of available time series. Both must be greater than 0, and a float must be less than or equal to 1.0. Using an int or float guarantees that no time series from other sets will be used. Must be used with train_time_period.

val_ts Optional[ndarray]

Defines which time series IDs are used in the validation set. Same as train_ts but for the validation set. Must be used with val_time_period.

test_ts Optional[ndarray]

Defines which time series IDs are used in the test set. Same as train_ts but for the test set. Must be used with test_time_period.

train_time_period Optional[ndarray]

Defines the time period for the training set. Can be a range of time IDs or a tuple of datetime objects. A float is interpreted as a percentage of the available times, with the period starting where the previous set's period ends. Must be used with train_ts.

val_time_period Optional[ndarray]

Defines the time period for the validation set. Can be a range of time IDs or a tuple of datetime objects. A float is interpreted as a percentage of the available times, with the period starting where the previous set's period ends. Must be used with val_ts.

test_time_period Optional[ndarray]

Defines the time period for test set. Can be a range of time IDs or a tuple of datetime objects. Must be used with test_ts.

features_to_take list[str]

Defines which features are used.

default_values ndarray

Default values for missing data, applied before fillers. Can set one value for all features or specify for each feature.

sliding_window_size Optional[int]

Number of times in one window. Impacts dataloader behavior. Batch sizes affect how much data is cached for creating windows.

sliding_window_prediction_size Optional[int]

Number of times to predict after each window of sliding_window_size times. Impacts dataloader behavior. Batch sizes affect how much data is cached for creating windows.

sliding_window_step int

Number of times to move by after each window.

set_shared_size int | float

How many times (time IDs) the time periods should share. The order of sharing is training set < validation set < test set. Only takes effect if the sets share fewer times than set_shared_size. Use a float for a percentage of the total times or an int for a count.

train_batch_size int

Batch size for the train dataloader. Affects number of returned times in one batch.

val_batch_size int

Batch size for the validation dataloader. Affects number of returned times in one batch.

test_batch_size int

Batch size for the test dataloader. Affects number of returned times in one batch.

preprocess_order list[PreprocessType]

Defines the order in which preprocesses are applied. A type of AllSeriesCustomHandler or NoFitCustomHandler can also be added to the order.

partial_fit_initialized_transformers bool

If True, partial fitting on the train set is performed when using initialized transformers.

include_time bool

If True, time data is included in the returned values.

include_ts_id bool

If True, time series IDs are included in the returned values.

time_format TimeFormat

Format for the returned time data. When using TimeFormat.DATETIME, time will be returned as a separate list alongside the rest of the values.

train_workers int

Number of workers for loading training data. 0 means that the data will be loaded in the main process.

val_workers int

Number of workers for loading validation data. 0 means that the data will be loaded in the main process.

test_workers int

Number of workers for loading test data. 0 means that the data will be loaded in the main process.

init_workers int

Number of workers for initial dataset processing during configuration. 0 means that the data will be loaded in the main process.

nan_threshold float

Maximum allowable percentage of missing data. Time series exceeding this threshold are excluded. Applied to the train/val/test/all sets separately.

random_state Optional[int]

Fixes randomness for reproducibility during configuration and dataset initialization.