Machine learning general

evaluate_model(X_test, y_test, model, metrics=None)

Evaluate/score a trained model with test data.

Predicts with the given test data and evaluates the predictions.

Parameters:

- `X_test` (`Union[ndarray, DataFrame]`): Test data. Required.
- `y_test` (`Union[ndarray, Series]`): Target labels for test data. Required.
- `model` (`Union[BaseEstimator, Model]`): Trained Sklearn classifier or regressor. Required.
- `metrics` (`Optional[Sequence[Literal["mse", "rmse", "mae", "r2", "accuracy", "precision", "recall", "f1"]]]`): Metrics to use for scoring the model. Defaults to "accuracy" for a classifier and to "mse" for a regressor. Default: `None`.

Returns:

- `Tuple[ndarray, Dict[str, Number]]`: Predictions and metric scores as a dictionary.

Source code in eis_toolkit/prediction/machine_learning_general.py
@beartype
def evaluate_model(
    X_test: Union[np.ndarray, pd.DataFrame],
    y_test: Union[np.ndarray, pd.Series],
    model: Union[BaseEstimator, keras.Model],
    metrics: Optional[Sequence[Literal["mse", "rmse", "mae", "r2", "accuracy", "precision", "recall", "f1"]]] = None,
) -> Tuple[np.ndarray, Dict[str, Number]]:
    """
    Evaluate/score a trained model with test data.

    Predicts with the given test data and evaluates the predictions.

    Args:
        X_test: Test data.
        y_test: Target labels for test data.
        model: Trained Sklearn classifier or regressor.
        metrics: Metrics to use for scoring the model. Defaults to "accuracy" for a classifier
            and to "mse" for a regressor.

    Returns:
        Predictions and metric scores as a dictionary.
    """
    x_size = X_test.index.size if isinstance(X_test, pd.DataFrame) else X_test.shape[0]
    if x_size != y_test.size:
        raise NonMatchingParameterLengthsException(f"X and y must have the same length: {x_size} != {y_test.size}.")

    if metrics is None:
        metrics = ["accuracy"] if is_classifier(model) else ["mse"]

    y_pred = model.predict(X_test)

    out_metrics = {}
    for metric in metrics:
        score = _score_model(model, y_test, y_pred, metric)
        out_metrics[metric] = score

    return y_pred, out_metrics
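
The sketch below shows one way to call `evaluate_model` on held-out data. It is a minimal example: the synthetic arrays and the `RandomForestClassifier` are illustrative choices and not part of this function's API.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier

from eis_toolkit.prediction.machine_learning_general import evaluate_model

# Illustrative synthetic data and model (any fitted Sklearn classifier or regressor works).
rng = np.random.default_rng(42)
X = rng.normal(size=(200, 5))
y = (X[:, 0] + X[:, 1] > 0).astype(int)
model = RandomForestClassifier(random_state=42).fit(X[:150], y[:150])

# Score the model on the held-out part; omitting `metrics` would default to "accuracy" for a classifier.
y_pred, scores = evaluate_model(X[150:], y[150:], model, metrics=["accuracy", "f1"])
print(scores)  # e.g. {"accuracy": 0.94, "f1": 0.93}
```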

load_model(path)

Load a Sklearn model from a .joblib file.

Parameters:

- `path` (`Path`): Path from where the model should be loaded. Include the .joblib file extension. Required.

Returns:

- `BaseEstimator`: Loaded model.

Source code in eis_toolkit/prediction/machine_learning_general.py
@beartype
def load_model(path: Path) -> BaseEstimator:
    """
    Load a Sklearn model from a .joblib file.

    Args:
        path: Path from where the model should be loaded. Include the .joblib file extension.

    Returns:
        Loaded model.
    """
    return joblib.load(path)
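
A minimal usage sketch; the file path is hypothetical and the model must have been written earlier with `save_model` (or `joblib.dump`).

```python
from pathlib import Path

from eis_toolkit.prediction.machine_learning_general import load_model

# Hypothetical path to a previously saved model.
model = load_model(Path("trained_model.joblib"))
print(type(model))  # e.g. <class 'sklearn.linear_model._logistic.LogisticRegression'>
```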

predict(data, model)

Predict with a trained model.

Parameters:

- `data` (`Union[ndarray, DataFrame]`): Data used to make predictions. Required.
- `model` (`Union[BaseEstimator, Model]`): Trained classifier or regressor. Can be any machine learning model trained with EIS Toolkit (Sklearn and Keras models). Required.

Returns:

- `ndarray`: Predictions.

Source code in eis_toolkit/prediction/machine_learning_general.py
@beartype
def predict(data: Union[np.ndarray, pd.DataFrame], model: Union[BaseEstimator, keras.Model]) -> np.ndarray:
    """
    Predict with a trained model.

    Args:
        data: Data used to make predictions.
        model: Trained classifier or regressor. Can be any machine learning model trained with
            EIS Toolkit (Sklearn and Keras models).

    Returns:
        Predictions.
    """
    result = model.predict(data)
    return result
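
A minimal sketch of calling `predict` with a trained model; the `LogisticRegression` and synthetic data below are illustrative, not required by the function.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

from eis_toolkit.prediction.machine_learning_general import predict

# Illustrative training data and model.
rng = np.random.default_rng(0)
X_train = rng.normal(size=(100, 3))
y_train = (X_train.sum(axis=1) > 0).astype(int)
model = LogisticRegression().fit(X_train, y_train)

# Predict for new, unseen samples.
X_new = rng.normal(size=(10, 3))
labels = predict(X_new, model)
print(labels.shape)  # (10,)
```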

prepare_data_for_ml(feature_raster_files, label_file=None)

Prepare data ready for machine learning model training.

Performs the following steps:

- Read all bands of all feature/evidence rasters into a stacked Numpy array
- Read label data (and rasterize if a vector file is given)
- Create a nodata mask using all feature rasters and labels, and mask nodata cells out

Parameters:

- `feature_raster_files` (`Sequence[Union[str, PathLike]]`): List of filepaths of feature/evidence rasters. Files should only include rasters that have the same grid properties and extent. Required.
- `label_file` (`Optional[Union[str, PathLike]]`): Filepath to label (deposits) data. The file can be either a vector file or a raster file. If a vector file is provided, it will be rasterized onto the same grid as the feature rasters. If a raster file is provided, it needs to have the same grid properties and extent as the feature rasters. Optional parameter that can be omitted when preparing data for prediction. Default: `None`.

Returns:

- `ndarray`: Feature data (X) in prepared shape.
- `Optional[ndarray]`: Target labels (y) in prepared shape (if `label_file` was given).
- `Profile`: Reference raster metadata.
- `Any`: Nodata mask applied to X and y.

Raises:

- `NonMatchingRasterMetadataException`: Input feature rasters don't have the same grid properties.

Source code in eis_toolkit/prediction/machine_learning_general.py
@beartype
def prepare_data_for_ml(
    feature_raster_files: Sequence[Union[str, os.PathLike]],
    label_file: Optional[Union[str, os.PathLike]] = None,
) -> Tuple[np.ndarray, Optional[np.ndarray], rasterio.profiles.Profile, Any]:
    """
    Prepare data ready for machine learning model training.

    Performs the following steps:
    - Read all bands of all feature/evidence rasters into a stacked Numpy array
    - Read label data (and rasterize if a vector file is given)
    - Create a nodata mask using all feature rasters and labels, and mask nodata cells out

    Args:
        feature_raster_files: List of filepaths of feature/evidence rasters. Files should only include
            rasters that have the same grid properties and extent.
        label_file: Filepath to label (deposits) data. File can be either a vector file or a raster file.
            If a vector file is provided, it will be rasterized onto the same grid as the feature rasters. If
            a raster file is provided, it needs to have the same grid properties and extent as the feature rasters.
            Optional parameter and can be omitted if preparing data for predicting. Defaults to None.

    Returns:
        Feature data (X) in prepared shape.
        Target labels (y) in prepared shape (if `label_file` was given).
        Reference raster metadata.
        Nodata mask applied to X and y.

    Raises:
        NonMatchingRasterMetadataException: Input feature rasters don't have same grid properties.
    """

    def _read_and_stack_feature_raster(filepath: Union[str, os.PathLike]) -> Tuple[np.ndarray, dict]:
        """Read all bands of raster file with feature/evidence data in a stack."""
        with rasterio.open(filepath) as src:
            raster_data = np.stack([src.read(i) for i in range(1, src.count + 1)])
            profile = src.profile
        return raster_data, profile

    # Read and stack feature rasters
    feature_data, profiles = zip(*[_read_and_stack_feature_raster(file) for file in feature_raster_files])
    if not check_raster_grids(profiles, same_extent=True):
        raise NonMatchingRasterMetadataException("Input feature rasters should have same grid properties.")

    reference_profile = profiles[0]
    nodata_values = [profile["nodata"] for profile in profiles]

    # Reshape feature rasters for ML and create mask
    reshaped_data = []
    nodata_mask = None

    for raster, nodata in zip(feature_data, nodata_values):
        raster_reshaped = raster.reshape(raster.shape[0], -1).T
        reshaped_data.append(raster_reshaped)

        if nodata is not None:
            raster_mask = (raster_reshaped == nodata).any(axis=1)
            nodata_mask = raster_mask if nodata_mask is None else nodata_mask | raster_mask

    X = np.concatenate(reshaped_data, axis=1)

    if label_file is not None:
        # Check label file type and process accordingly
        file_extension = os.path.splitext(label_file)[1].lower()

        # Labels/deposits in vector format
        if file_extension in [".shp", ".geojson", ".json", ".gpkg"]:
            y, _ = rasterize_vector(geodataframe=gpd.read_file(label_file), base_raster_profile=reference_profile)

        # Labels/deposits in raster format
        else:
            with rasterio.open(label_file) as label_raster:
                y = label_raster.read(1)  # Assuming labels are in the first band
                label_nodata = label_raster.nodata

            label_nodata_mask = y == label_nodata

            # Combine masks and apply to feature and label data
            nodata_mask = nodata_mask | label_nodata_mask.ravel()

        y = y.ravel()[~nodata_mask]

    else:
        y = None

    X = X[~nodata_mask]

    return X, y, reference_profile, nodata_mask
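
A minimal sketch of preparing training data from rasters; the file paths are hypothetical placeholders for your own feature rasters and deposit labels.

```python
from eis_toolkit.prediction.machine_learning_general import prepare_data_for_ml

# Hypothetical input files: feature rasters sharing the same grid, plus vector labels.
feature_rasters = [
    "data/geochemistry.tif",
    "data/magnetics.tif",
    "data/gravity.tif",
]

X, y, reference_profile, nodata_mask = prepare_data_for_ml(
    feature_raster_files=feature_rasters,
    label_file="data/deposits.gpkg",  # vector labels are rasterized onto the reference grid
)
print(X.shape)  # (n_valid_cells, n_features)
```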

reshape_predictions(predictions, height, width, nodata_mask=None)

Reshape 1D prediction outputs into a 2D Numpy array.

The output is ready to be visualized and saved as a raster.

Parameters:

- `predictions` (`ndarray`): A 1D Numpy array with raw prediction data from the `predict` function. Required.
- `height` (`int`): Height of the output array. Required.
- `width` (`int`): Width of the output array. Required.
- `nodata_mask` (`Optional[ndarray]`): Nodata mask used to reconstruct the original shape of the data. This is the same mask applied to the data before predicting to remove nodata. If any nodata was removed before predicting, this mask is required to reconstruct the original shape of the data. Default: `None`.

Returns:

- `ndarray`: Predictions as a 2D Numpy array in the original array shape.

Source code in eis_toolkit/prediction/machine_learning_general.py
@beartype
def reshape_predictions(
    predictions: np.ndarray, height: int, width: int, nodata_mask: Optional[np.ndarray] = None
) -> np.ndarray:
    """
    Reshape 1D prediction outputs into a 2D Numpy array.

    The output is ready to be visualized and saved as a raster.

    Args:
        predictions: A 1D Numpy array with raw prediction data from `predict` function.
        height: Height of the output array.
        width: Width of the output array.
        nodata_mask: Nodata mask used to reconstruct original shape of data. This is the same mask
            applied to data before predicting to remove nodata. If any nodata was removed
            before predicting, this mask is required to reconstruct the original shape of data.
            Defaults to None.

    Returns:
        Predictions as a 2D Numpy array in the original array shape.
    """
    full_predictions_array = np.full(width * height, np.nan, dtype=predictions.dtype)
    if nodata_mask is not None:
        full_predictions_array[~nodata_mask.ravel()] = predictions
    predictions_reshaped = full_predictions_array.reshape((height, width))
    return predictions_reshaped
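
A small self-contained sketch of the reshaping behaviour; the mask and prediction values are synthetic stand-ins for the outputs of `prepare_data_for_ml` and `predict`.

```python
import numpy as np

from eis_toolkit.prediction.machine_learning_general import reshape_predictions

height, width = 4, 5

# Synthetic nodata mask: the flattened boolean mask that was applied before predicting
# (True marks cells that were dropped as nodata).
nodata_mask = np.zeros(height * width, dtype=bool)
nodata_mask[[0, 7, 13]] = True

# One prediction value per valid (unmasked) cell.
predictions = np.linspace(0.0, 1.0, height * width - int(nodata_mask.sum()))

raster = reshape_predictions(predictions, height, width, nodata_mask=nodata_mask)
print(raster.shape)            # (4, 5)
print(np.isnan(raster[0, 0]))  # True: masked cells are filled with NaN
```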

save_model(model, path)

Save a trained Sklearn model to a .joblib file.

Parameters:

- `model` (`BaseEstimator`): Trained model. Required.
- `path` (`Path`): Path where the model should be saved. Include the .joblib file extension. Required.

Source code in eis_toolkit/prediction/machine_learning_general.py
@beartype
def save_model(model: BaseEstimator, path: Path) -> None:
    """
    Save a trained Sklearn model to a .joblib file.

    Args:
        model: Trained model.
        path: Path where the model should be saved. Include the .joblib file extension.
    """
    joblib.dump(model, path)
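
A minimal sketch of saving a fitted model; the estimator and output filename are illustrative.

```python
from pathlib import Path

import numpy as np
from sklearn.linear_model import LogisticRegression

from eis_toolkit.prediction.machine_learning_general import save_model

# Illustrative fitted model; any trained Sklearn estimator works.
X = np.random.default_rng(1).normal(size=(50, 2))
y = (X[:, 0] > 0).astype(int)
model = LogisticRegression().fit(X, y)

# Hypothetical output filename; include the .joblib extension.
save_model(model, Path("trained_model.joblib"))
```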

split_data(*data, split_size=0.2, random_state=None, shuffle=True)

Split data into two parts. Can be used for train-test or train-validation splits.

For more guidance, read the documentation of sklearn.model_selection.train_test_split (https://scikit-learn.org/stable/modules/generated/sklearn.model_selection.train_test_split.html).

Parameters:

- `*data` (`Union[ndarray, DataFrame, csr_matrix, List[Number]]`): Data to be split. Multiple datasets can be given as input (for example X and y), but they need to have the same length. All datasets are split into two and the parts returned (for example X_train, X_test, y_train, y_test). Default: `()`.
- `split_size` (`float`): The proportion of the second part of the split. Typically this is the size of the test/validation part; the first part will be the complementary proportion. For example, if split_size = 0.2, the first part will have 80% of the data and the second part 20%. Defaults to `0.2`.
- `random_state` (`Optional[int]`): Seed for random number generation. Defaults to `None`.
- `shuffle` (`bool`): Whether data is shuffled before splitting. Defaults to `True`.

Returns:

- `List[Union[ndarray, DataFrame, csr_matrix, List[Number]]]`: List containing splits of inputs (two outputs per input).

Source code in eis_toolkit/prediction/machine_learning_general.py
@beartype
def split_data(
    *data: Union[np.ndarray, pd.DataFrame, sparse._csr.csr_matrix, List[Number]],
    split_size: float = 0.2,
    random_state: Optional[int] = None,
    shuffle: bool = True,
) -> List[Union[np.ndarray, pd.DataFrame, sparse._csr.csr_matrix, List[Number]]]:
    """
    Split data into two parts. Can be used for train-test or train-validation splits.

    For more guidance, read the documentation of sklearn.model_selection.train_test_split:
    (https://scikit-learn.org/stable/modules/generated/sklearn.model_selection.train_test_split.html).

    Args:
        *data: Data to be split. Multiple datasets can be given as input (for example X and y),
            but they need to have the same length. All datasets are split into two and the parts returned
            (for example X_train, X_test, y_train, y_test).
        split_size: The proportion of the second part of the split. Typically this is the size of the test/validation
            part. The first part will be the complementary proportion. For example, if split_size = 0.2, the first
            part will have 80% of the data and the second part 20% of the data. Defaults to 0.2.
        random_state: Seed for random number generation. Defaults to None.
        shuffle: Whether data is shuffled before splitting. Defaults to True.

    Returns:
        List containing splits of inputs (two outputs per input).
    """

    if not (0 < split_size < 1):
        raise InvalidParameterValueException("Split size must be more than 0 and less than 1.")

    split_data = train_test_split(*data, test_size=split_size, random_state=random_state, shuffle=shuffle)

    return split_data
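
A minimal usage sketch; the synthetic arrays are placeholders for your own feature matrix and labels.

```python
import numpy as np

from eis_toolkit.prediction.machine_learning_general import split_data

rng = np.random.default_rng(42)
X = rng.normal(size=(100, 4))
y = rng.integers(0, 2, size=100)

# Two inputs in, four outputs out: X_train, X_test, y_train, y_test.
X_train, X_test, y_train, y_test = split_data(X, y, split_size=0.25, random_state=42)
print(X_train.shape, X_test.shape)  # (75, 4) (25, 4)
```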