Machine learning general

load_model(path)

Load a Sklearn model from a .joblib file.

Parameters:

| Name | Type | Description | Default |
| --- | --- | --- | --- |
| `path` | `Path` | Path from where the model should be loaded. Include the .joblib file extension. | required |

Returns:

| Type | Description |
| --- | --- |
| `BaseEstimator` | Loaded model. |

Source code in eis_toolkit/prediction/machine_learning_general.py
@beartype
def load_model(path: Path) -> BaseEstimator:
    """
    Load a Sklearn model from a .joblib file.

    Args:
        path: Path from where the model should be loaded. Include the .joblib file extension.

    Returns:
        Loaded model.
    """
    return joblib.load(path)
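
A minimal usage sketch; the model file name below is hypothetical and must point to a model previously saved with `save_model`:

```python
from pathlib import Path

from eis_toolkit.prediction.machine_learning_general import load_model

# Hypothetical path to a model saved earlier with save_model
model = load_model(Path("trained_model.joblib"))
print(type(model).__name__)  # e.g. RandomForestClassifier
```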

prepare_data_for_ml(feature_raster_files, label_file=None)

Prepare data for machine learning model training.

Performs the following steps:

- Read all bands of all feature/evidence rasters into a stacked Numpy array
- Read label data (and rasterize if a vector file is given)
- Create a nodata mask using all feature rasters and labels, and mask nodata cells out

Parameters:

| Name | Type | Description | Default |
| --- | --- | --- | --- |
| `feature_raster_files` | `Sequence[Union[str, PathLike]]` | List of filepaths of feature/evidence rasters. Files should only include rasters that have the same grid properties and extent. | required |
| `label_file` | `Optional[Union[str, PathLike]]` | Filepath to label (deposits) data. The file can be either a vector file or a raster file. If a vector file is provided, it will be rasterized into the same grid as the feature rasters. If a raster file is provided, it needs to have the same grid properties and extent as the feature rasters. Optional and can be omitted when preparing data for predicting. Defaults to None. | `None` |

Returns:

| Type | Description |
| --- | --- |
| `ndarray` | Feature data (X) in prepared shape. |
| `Optional[ndarray]` | Target labels (y) in prepared shape (if `label_file` was given). |
| `Profile` | Reference raster metadata. |
| `Any` | Nodata mask applied to X and y. |

Raises:

| Type | Description |
| --- | --- |
| `InvalidDatasetException` | Input feature rasters contain only one path. |
| `NonMatchingRasterMetadataException` | Input feature rasters, and optionally the rasterized label file, don't have the same grid properties. |

Source code in eis_toolkit/prediction/machine_learning_general.py
@beartype
def prepare_data_for_ml(
    feature_raster_files: Sequence[Union[str, os.PathLike]],
    label_file: Optional[Union[str, os.PathLike]] = None,
) -> Tuple[np.ndarray, Optional[np.ndarray], rasterio.profiles.Profile, Any]:
    """
    Prepare data for machine learning model training.

    Performs the following steps:
    - Read all bands of all feature/evidence rasters into a stacked Numpy array
    - Read label data (and rasterize if a vector file is given)
    - Create a nodata mask using all feature rasters and labels, and mask nodata cells out

    Args:
        feature_raster_files: List of filepaths of feature/evidence rasters. Files should only include
            rasters that have the same grid properties and extent.
        label_file: Filepath to label (deposits) data. The file can be either a vector file or a raster
            file. If a vector file is provided, it will be rasterized into the same grid as the feature
            rasters. If a raster file is provided, it needs to have the same grid properties and extent
            as the feature rasters. Optional and can be omitted when preparing data for predicting.
            Defaults to None.

    Returns:
        Feature data (X) in prepared shape.
        Target labels (y) in prepared shape (if `label_file` was given).
        Reference raster metadata.
        Nodata mask applied to X and y.

    Raises:
        InvalidDatasetException: Input feature rasters contain only one path.
        NonMatchingRasterMetadataException: Input feature rasters, and optionally the rasterized label
            file, don't have the same grid properties.
    """

    def _read_and_stack_feature_raster(filepath: Union[str, os.PathLike]) -> Tuple[np.ndarray, dict]:
        """Read all bands of raster file with feature/evidence data in a stack."""
        with rasterio.open(filepath) as src:
            raster_data = np.stack([src.read(i) for i in range(1, src.count + 1)])
            profile = src.profile
        return raster_data, profile

    if len(feature_raster_files) < 2:
        raise InvalidDatasetException(f"Expected more than one feature raster file: {len(feature_raster_files)}.")

    # Read and stack feature rasters
    feature_data, profiles = zip(*[_read_and_stack_feature_raster(file) for file in feature_raster_files])
    if not check_raster_grids(profiles, same_extent=True):
        raise NonMatchingRasterMetadataException("Input feature rasters should have same grid properties.")

    reference_profile = profiles[0]
    nodata_values = [profile["nodata"] for profile in profiles]

    # Reshape feature rasters for ML and create mask
    reshaped_data = []
    nodata_mask = None

    for raster, nodata in zip(feature_data, nodata_values):
        raster_reshaped = raster.reshape(raster.shape[0], -1).T
        reshaped_data.append(raster_reshaped)

        nan_mask = np.isnan(raster_reshaped).any(axis=1)  # mask cells that are NaN in any band
        combined_mask = nan_mask if nodata_mask is None else nodata_mask | nan_mask

        if nodata is not None:
            raster_mask = (raster_reshaped == nodata).any(axis=1)
            combined_mask = combined_mask | raster_mask

        nodata_mask = combined_mask

    X = np.concatenate(reshaped_data, axis=1)

    if label_file is not None:
        # Check label file type and process accordingly
        file_extension = os.path.splitext(label_file)[1].lower()

        # Labels/deposits in vector format
        if file_extension in [".shp", ".geojson", ".json", ".gpkg"]:
            y = rasterize_vector(geodataframe=gpd.read_file(label_file), raster_profile=reference_profile)

        # Labels/deposits in raster format
        else:
            with rasterio.open(label_file) as label_raster:
                y = label_raster.read(1)  # Assuming labels are in the first band
                label_nodata = label_raster.nodata
                profiles = list(profiles)
                profiles.append(label_raster.profile)
                if not check_raster_grids(profiles, same_extent=True):
                    raise NonMatchingRasterMetadataException(
                        "Label raster should have the same grid properties as feature rasters."
                    )

            label_nodata_mask = y == label_nodata

            # Combine masks and apply to feature and label data
            nodata_mask = nodata_mask | label_nodata_mask.ravel()

        y = y.ravel()[~nodata_mask]

    else:
        y = None

    X = X[~nodata_mask]

    return X, y, reference_profile, nodata_mask
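
A usage sketch for a typical training setup; the raster and vector file names are hypothetical, and all files are assumed to share the same grid properties and extent:

```python
from eis_toolkit.prediction.machine_learning_general import prepare_data_for_ml

# Hypothetical evidence rasters and a vector file with deposit locations
feature_files = ["magnetics.tif", "gravity.tif", "geochemistry.tif"]
X, y, profile, nodata_mask = prepare_data_for_ml(feature_files, label_file="deposits.gpkg")
print(X.shape)  # (n_valid_cells, n_bands_total), nodata cells removed

# When preparing data for prediction, omit label_file; y is then None
X_pred, _, profile, nodata_mask = prepare_data_for_ml(feature_files)
```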

read_data_for_evaluation(rasters)

Prepare data for evaluating modeling outputs.

Reads the first band of each raster, reshapes (flattens) the data, and masks out all NaN and nodata pixels using a combined mask built from all input rasters.

Parameters:

| Name | Type | Description | Default |
| --- | --- | --- | --- |
| `rasters` | `Sequence[Union[str, PathLike]]` | List of filepaths of input rasters. Files should only include rasters that have the same grid properties and extent. | required |

Returns:

| Type | Description |
| --- | --- |
| `Sequence[ndarray]` | List of reshaped and masked raster data. |
| `Profile` | Reference raster profile. |
| `Any` | Nodata mask applied to raster data. |

Raises:

| Type | Description |
| --- | --- |
| `InvalidDatasetException` | Input rasters contain only one path. |
| `NonMatchingRasterMetadataException` | Input rasters don't have the same grid properties. |

Source code in eis_toolkit/prediction/machine_learning_general.py
@beartype
def read_data_for_evaluation(
    rasters: Sequence[Union[str, os.PathLike]]
) -> Tuple[Sequence[np.ndarray], rasterio.profiles.Profile, Any]:
    """
    Prepare data for evaluating modeling outputs.

    Reads the first band of each raster, reshapes (flattens) the data, and masks out all NaN
    and nodata pixels using a combined mask built from all input rasters.

    Args:
        rasters: List of filepaths of input rasters. Files should only include rasters that have
            the same grid properties and extent.

    Returns:
        List of reshaped and masked raster data.
        Reference raster profile.
        Nodata mask applied to raster data.

    Raises:
        InvalidDatasetException: Input rasters contain only one path.
        NonMatchingRasterMetadataException: Input rasters don't have the same grid properties.
    """
    if len(rasters) < 2:
        raise InvalidDatasetException(f"Expected more than one raster file: {len(rasters)}.")

    profiles = []
    raster_data = []
    nodata_values = []

    for raster in rasters:
        with rasterio.open(raster) as src:
            data = src.read(1)
            profile = src.profile
            profiles.append(profile)
            raster_data.append(data)
            nodata_values.append(profile.get("nodata"))

    if not check_raster_grids(profiles, same_extent=True):
        raise NonMatchingRasterMetadataException(f"Input rasters should have the same grid properties: {profiles}.")

    reference_profile = profiles[0]
    nodata_mask = None

    for data, nodata in zip(raster_data, nodata_values):
        nan_mask = np.isnan(data)
        combined_mask = nan_mask if nodata_mask is None else nodata_mask | nan_mask

        if nodata is not None:
            raster_mask = data == nodata
            combined_mask = combined_mask | raster_mask

        nodata_mask = combined_mask
    nodata_mask = nodata_mask.flatten()

    masked_data = []
    for data in raster_data:
        flattened_data = data.flatten()
        masked_data.append(flattened_data[~nodata_mask])

    return masked_data, reference_profile, nodata_mask
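
A sketch of evaluating a model output against ground truth; both file names are hypothetical and the rasters are assumed to share grid properties:

```python
from eis_toolkit.prediction.machine_learning_general import read_data_for_evaluation

# Hypothetical rasters: a model output and the corresponding ground truth
data, profile, nodata_mask = read_data_for_evaluation(["predictions.tif", "ground_truth.tif"])
y_pred, y_true = data  # flattened 1D arrays with NaN/nodata pixels removed
```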

reshape_predictions(predictions, height, width, nodata_mask=None)

Reshape 1D prediction outputs into a 2D Numpy array.

The output is ready to be visualized and saved as a raster.

Parameters:

| Name | Type | Description | Default |
| --- | --- | --- | --- |
| `predictions` | `ndarray` | A 1D Numpy array with raw prediction data from a `predict` function. | required |
| `height` | `int` | Height of the output array. | required |
| `width` | `int` | Width of the output array. | required |
| `nodata_mask` | `Optional[ndarray]` | Nodata mask used to reconstruct the original shape of the data. This is the same mask applied to the data before predicting to remove nodata. If any nodata was removed before predicting, this mask is required to reconstruct the original shape of the data. Defaults to None. | `None` |

Returns:

| Type | Description |
| --- | --- |
| `ndarray` | Predictions as a 2D Numpy array in the original array shape. |

Source code in eis_toolkit/prediction/machine_learning_general.py
@beartype
def reshape_predictions(
    predictions: np.ndarray, height: int, width: int, nodata_mask: Optional[np.ndarray] = None
) -> np.ndarray:
    """
    Reshape 1D prediction outputs into a 2D Numpy array.

    The output is ready to be visualized and saved as a raster.

    Args:
        predictions: A 1D Numpy array with raw prediction data from a `predict` function.
        height: Height of the output array.
        width: Width of the output array.
        nodata_mask: Nodata mask used to reconstruct the original shape of the data. This is the
            same mask applied to the data before predicting to remove nodata. If any nodata was
            removed before predicting, this mask is required to reconstruct the original shape of
            the data. Defaults to None.

    Returns:
        Predictions as a 2D Numpy array in the original array shape.
    """
    full_predictions_array = np.full(width * height, np.nan, dtype=predictions.dtype)
    if nodata_mask is not None:
        full_predictions_array[~nodata_mask.ravel()] = predictions
    else:
        # Without a mask, every cell received a prediction
        full_predictions_array[:] = predictions
    predictions_reshaped = full_predictions_array.reshape((height, width))
    return predictions_reshaped
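
A self-contained sketch showing the round trip between a nodata mask and the restored raster shape; the mask and predictions here are synthetic:

```python
import numpy as np

from eis_toolkit.prediction.machine_learning_general import reshape_predictions

height, width = 4, 5
nodata_mask = np.zeros(height * width, dtype=bool)
nodata_mask[[0, 7, 13]] = True  # pretend three cells were nodata

# Predictions exist only for the valid (unmasked) cells
predictions = np.arange(np.count_nonzero(~nodata_mask), dtype=float)

result = reshape_predictions(predictions, height, width, nodata_mask)
print(result.shape)  # (4, 5); the three masked cells are NaN
```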

save_model(model, path)

Save a trained Sklearn model to a .joblib file.

Parameters:

| Name | Type | Description | Default |
| --- | --- | --- | --- |
| `model` | `BaseEstimator` | Trained model. | required |
| `path` | `Path` | Path where the model should be saved. Include the .joblib file extension. | required |
Source code in eis_toolkit/prediction/machine_learning_general.py
@beartype
def save_model(model: BaseEstimator, path: Path) -> None:
    """
    Save a trained Sklearn model to a .joblib file.

    Args:
        model: Trained model.
        path: Path where the model should be saved. Include the .joblib file extension.
    """
    joblib.dump(model, path)
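
A self-contained sketch with dummy data; the estimator choice and output path are arbitrary:

```python
from pathlib import Path

import numpy as np
from sklearn.ensemble import RandomForestClassifier

from eis_toolkit.prediction.machine_learning_general import save_model

# Dummy training data for illustration
X_train = np.random.rand(100, 3)
y_train = np.random.randint(0, 2, 100)

model = RandomForestClassifier(random_state=42).fit(X_train, y_train)
save_model(model, Path("trained_model.joblib"))
```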

split_data(*data, split_size=0.2, random_state=None, shuffle=True)

Split data into two parts. Can be used for train-test or train-validation splits.

For more guidance, read the documentation of sklearn.model_selection.train_test_split: https://scikit-learn.org/stable/modules/generated/sklearn.model_selection.train_test_split.html

Parameters:

| Name | Type | Description | Default |
| --- | --- | --- | --- |
| `*data` | `Union[ndarray, DataFrame, csr_matrix, List[Number]]` | Data to be split. Multiple datasets can be given as input (for example X and y), but they need to have the same length. All datasets are split into two, and both parts are returned (for example X_train, X_test, y_train, y_test). | `()` |
| `split_size` | `float` | The proportion of the second part of the split. Typically this is the size of the test/validation part. The first part will be the complementary proportion. For example, if split_size = 0.2, the first part will have 80% of the data and the second part 20%. Defaults to 0.2. | `0.2` |
| `random_state` | `Optional[int]` | Seed for random number generation. Defaults to None. | `None` |
| `shuffle` | `bool` | Whether the data is shuffled before splitting. Defaults to True. | `True` |

Returns:

| Type | Description |
| --- | --- |
| `List[Union[ndarray, DataFrame, csr_matrix, List[Number]]]` | List containing splits of inputs (two outputs per input). |

Source code in eis_toolkit/prediction/machine_learning_general.py
@beartype
def split_data(
    *data: Union[np.ndarray, pd.DataFrame, sparse._csr.csr_matrix, List[Number]],
    split_size: float = 0.2,
    random_state: Optional[int] = None,
    shuffle: bool = True,
) -> List[Union[np.ndarray, pd.DataFrame, sparse._csr.csr_matrix, List[Number]]]:
    """
    Split data into two parts. Can be used for train-test or train-validation splits.

    For more guidance, read the documentation of sklearn.model_selection.train_test_split:
    https://scikit-learn.org/stable/modules/generated/sklearn.model_selection.train_test_split.html

    Args:
        *data: Data to be split. Multiple datasets can be given as input (for example X and y),
            but they need to have the same length. All datasets are split into two and the parts returned
            (for example X_train, X_test, y_train, y_test).
        split_size: The proportion of the second part of the split. Typically this is the size of the
            test/validation part. The first part will be the complementary proportion. For example, if
            split_size = 0.2, the first part will have 80% of the data and the second part 20%.
            Defaults to 0.2.
        random_state: Seed for random number generation. Defaults to None.
        shuffle: Whether the data is shuffled before splitting. Defaults to True.

    Returns:
        List containing splits of inputs (two outputs per input).
    """

    if not (0 < split_size < 1):
        raise InvalidParameterValueException("Split size must be more than 0 and less than 1.")

    split_data = train_test_split(*data, test_size=split_size, random_state=random_state, shuffle=shuffle)

    return split_data
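
A self-contained sketch with dummy data showing the two-outputs-per-input return order:

```python
import numpy as np

from eis_toolkit.prediction.machine_learning_general import split_data

X = np.random.rand(100, 5)
y = np.random.randint(0, 2, 100)

# Two outputs per input, in input order: X_train, X_test, y_train, y_test
X_train, X_test, y_train, y_test = split_data(X, y, split_size=0.2, random_state=42)
print(X_train.shape, X_test.shape)  # (80, 5) (20, 5)
```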