Adds SLF001 to ruff - private member access #435

Merged: 9 commits, Aug 7, 2024
12 changes: 8 additions & 4 deletions CONTRIBUTING.md
@@ -21,7 +21,7 @@ pip install -e .[all,dev]

Before you commit any code, please perform the following checks using [Nox](https://nox.thea.codes/en/stable/index.html):

-- [All tests pass](#testing): `$ nox -s unit`
+- [All tests pass](#testing): `$ nox -s quick`

### Installing and using pre-commit

@@ -83,16 +83,20 @@ PyBOP follows the [PEP8 recommendations](https://www.python.org/dev/peps/pep-000

### Ruff

-We use [ruff](https://github.com/charliermarsh/ruff) to check our PEP8 adherence. To try this on your system, navigate to the PyBOP directory in a console and type
+We use [ruff](https://github.com/charliermarsh/ruff) to lint and ensure adherence to Python PEP standards. To manually trigger `ruff`, navigate to the PyBOP directory in a console and type

```bash
python -m pip install pre-commit
pre-commit run ruff
```

-ruff is configured inside the file `pre-commit-config.yaml`, allowing us to ignore some errors. If you think this should be added or removed, please submit an [issue](https://guides.github.com/features/issues/).
+ruff is configured inside the file `pyproject.toml`, allowing us to ignore some errors. If you think a rule should be added or removed, please submit an [issue](https://guides.github.com/features/issues/).

-When you commit your changes they will be checked against ruff automatically (see [Pre-commit checks](#pre-commit-checks)).
+When you commit your changes they will be checked against ruff automatically (see [Pre-commit checks](#pre-commit-checks)). If you are having trouble getting your commit to pass the linting, it
+is possible to skip linting for single lines (this should only be done as a **last resort**) by adding a line comment of `# noqa: $ruff_rule`, where `$ruff_rule` is replaced with the rule in question, as in the sketch below.
+It is also possible to skip linting altogether by committing your changes with the
+`--no-verify` command-line flag.
+These rules can be found in the ruff configuration in `pyproject.toml` or in the failed pre-commit output. Please note any lint skipping in the pull request for reviewers.
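
A minimal, hypothetical illustration of a single-line suppression (the class and attribute are invented for the example; SLF001 is the private-member-access rule this PR enables):

```python
class Model:
    def __init__(self):
        self._state = 42  # private by convention


model = Model()
# Accessing a private member from outside the class normally triggers
# SLF001; the trailing comment suppresses the check for this line only.
value = model._state  # noqa: SLF001
print(value)
```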

### Naming

4 changes: 2 additions & 2 deletions benchmarks/benchmark_model.py
@@ -79,7 +79,7 @@ def time_model_simulate(self, model, parameter_set):
model (pybop.Model): The model class being benchmarked.
parameter_set (str): The name of the parameter set being used.
"""
-self.problem._model.simulate(inputs=self.inputs, t_eval=self.t_eval)
+self.problem.model.simulate(inputs=self.inputs, t_eval=self.t_eval)

def time_model_simulateS1(self, model, parameter_set):
"""
@@ -89,4 +89,4 @@ def time_model_simulateS1(self, model, parameter_set):
model (pybop.Model): The model class being benchmarked.
parameter_set (str): The name of the parameter set being used.
"""
-self.problem._model.simulateS1(inputs=self.inputs, t_eval=self.t_eval)
+self.problem.model.simulateS1(inputs=self.inputs, t_eval=self.t_eval)
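
The repeated `_model` → `model` renames in this PR rely on public read-only properties wrapping the private attributes. A minimal sketch of that pattern, assuming `BaseProblem` exposes its model the same way `BaseCost` gains `target` and `has_separable_problem` properties later in this diff:

```python
class BaseProblem:
    """Sketch of the accessor pattern; not the actual PyBOP implementation."""

    def __init__(self, model):
        self._model = model  # private attribute; external access trips SLF001

    @property
    def model(self):
        # Public read-only accessor: callers write `problem.model`
        # instead of reaching into `problem._model`.
        return self._model
```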
2 changes: 1 addition & 1 deletion examples/notebooks/optimiser_calibration.ipynb
@@ -482,7 +482,7 @@
"source": [
"for optim, sigma in zip(optims, sigmas):\n",
" print(\n",
" f\"| Sigma: {sigma} | Num Iterations: {optim._iterations} | Best Cost: {optim.pints_optimiser.f_best()} | Results: {optim.pints_optimiser.x_best()} |\"\n",
" f\"| Sigma: {sigma} | Num Iterations: {optim.result.n_iterations} | Best Cost: {optim.pints_optimiser.f_best()} | Results: {optim.pints_optimiser.x_best()} |\"\n",
" )"
]
},
2 changes: 1 addition & 1 deletion examples/notebooks/spm_electrode_design.ipynb
@@ -276,7 +276,7 @@
],
"source": [
"if cost.update_capacity:\n",
" problem._model.approximate_capacity(x)\n",
" problem.model.approximate_capacity(x)\n",
"pybop.quick_plot(problem, problem_inputs=x, title=\"Optimised Comparison\");"
]
},
2 changes: 1 addition & 1 deletion examples/scripts/exp_UKF.py
@@ -40,7 +40,7 @@
# Verification step: make another prediction using the Observer class
model.build(parameters=parameters)
simulator = pybop.Observer(parameters, model, signal=["2y"])
-simulator._time_data = t_eval
+simulator.time_data = t_eval
measurements = simulator.evaluate(true_inputs)

# Verification step: Compare by plotting
2 changes: 1 addition & 1 deletion examples/scripts/spme_max_energy.py
@@ -57,7 +57,7 @@

# Plot the timeseries output
if cost.update_capacity:
-problem._model.approximate_capacity(x)
+problem.model.approximate_capacity(x)
pybop.quick_plot(problem, problem_inputs=x, title="Optimised Comparison")

# Plot the cost landscape with optimisation path
4 changes: 2 additions & 2 deletions examples/standalone/cost.py
@@ -43,9 +43,9 @@ def __init__(self, problem=None):
)
self.x0 = self.parameters.initial_value()

-def _evaluate(self, inputs):
+def compute(self, inputs):
"""
-Calculate the cost for a given parameter value.
+Compute the cost for a given parameter value.

The cost function is defined as cost(x) = x^2 + 42, where x is the
parameter value.
50 changes: 33 additions & 17 deletions pybop/costs/_likelihoods.py
@@ -39,10 +39,13 @@ def __init__(self, problem: BaseProblem, sigma0: Union[list[float], float]):
self._offset = -0.5 * self.n_time_data * np.log(2 * np.pi * self.sigma2)
self._multip = -1 / (2.0 * self.sigma2)

-def _evaluate(self, inputs: Inputs) -> float:
+def compute(self, inputs: Inputs) -> float:
"""
-Evaluates the Gaussian log-likelihood for the given parameters with known sigma.
+Compute the Gaussian log-likelihood for the given parameters with known sigma.
+
+This method only computes the likelihood, without calling `problem.evaluateS1()`.
"""

if not self.verify_prediction(self.y):
return -np.inf

@@ -59,14 +62,16 @@ def _evaluateS1(self, inputs: Inputs) -> tuple[float, np.ndarray]:

return e.item() if self.n_outputs == 1 else np.sum(e)

-def _evaluateS1(self, inputs: Inputs) -> tuple[float, np.ndarray]:
+def computeS1(self, inputs: Inputs) -> tuple[float, np.ndarray]:
"""
-Calculates the log-likelihood and gradient.
+Compute the Gaussian log-likelihood and its gradient for the given parameters with known sigma.
+
+This method only computes the likelihood, without calling `problem.evaluateS1()`.
"""
if not self.verify_prediction(self.y):
return -np.inf, -self._de * np.ones(self.n_parameters)

-likelihood = self._evaluate(inputs)
+likelihood = self.compute(inputs)

r = np.asarray(
[self._target[signal] - self.y[signal] for signal in self.signal]
@@ -168,9 +173,11 @@ def dsigma_scale(self, new_value):
raise ValueError("dsigma_scale must be non-negative")
self._dsigma_scale = new_value

-def _evaluate(self, inputs: Inputs) -> float:
+def compute(self, inputs: Inputs) -> float:
"""
-Evaluates the Gaussian log-likelihood for the given parameters.
+Compute the Gaussian log-likelihood for the given parameters.
+
+This method only computes the likelihood, without calling `problem.evaluateS1()`.

Parameters
----------
@@ -207,9 +214,11 @@ def _evaluate(self, inputs: Inputs) -> float:

return e.item() if self.n_outputs == 1 else np.sum(e)

-def _evaluateS1(self, inputs: Inputs) -> tuple[float, np.ndarray]:
+def computeS1(self, inputs: Inputs) -> tuple[float, np.ndarray]:
"""
-Calculates the log-likelihood and sensitivities.
+Compute the Gaussian log-likelihood and its gradient for the given parameters.
+
+This method only computes the likelihood, without calling `problem.evaluateS1()`.

Parameters
----------
@@ -231,7 +240,7 @@ def _evaluateS1(self, inputs: Inputs) -> tuple[float, np.ndarray]:
if not self.verify_prediction(self.y):
return -np.inf, -self._de * np.ones(self.n_parameters)

-likelihood = self._evaluate(inputs)
+likelihood = self.compute(inputs)

r = np.asarray(
[self._target[signal] - self.y[signal] for signal in self.signal]
@@ -279,16 +288,20 @@ def __init__(self, problem, likelihood, sigma0=None, gradient_step=1e-3):
raise ValueError(f"{self.likelihood} must be a subclass of BaseLikelihood")

self.parameters = self.likelihood.parameters
-self._has_separable_problem = self.likelihood._has_separable_problem
+self._has_separable_problem = self.likelihood.has_separable_problem

-def _evaluate(self, inputs: Inputs) -> float:
+def compute(self, inputs: Inputs) -> float:
"""
-Calculate the maximum a posteriori cost for a given set of parameters.
+Compute the maximum a posteriori cost for the given parameters.
+
+If self._has_separable_problem is True, this method only computes the
+likelihood, without calling problem.evaluate(). Otherwise, problem.evaluate()
+is called before computing the likelihood.

Parameters
----------
inputs : Inputs
-The parameters for which to evaluate the cost.
+The parameters for which to compute the cost.

Returns
-------
@@ -309,10 +322,13 @@
posterior = log_likelihood + log_prior
return posterior

-def _evaluateS1(self, inputs: Inputs) -> tuple[float, np.ndarray]:
+def computeS1(self, inputs: Inputs) -> tuple[float, np.ndarray]:
"""
-Compute the maximum a posteriori with respect to the parameters.
-The method passes the likelihood gradient to the optimiser without modification.
+Compute the maximum a posteriori cost and its gradient for the given parameters.
+
+If self._has_separable_problem is True, this method only computes the
+likelihood, without calling problem.evaluateS1(). Otherwise, problem.evaluateS1()
+is called before computing the likelihood.

Parameters
----------
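For reference, the known-sigma likelihood that `compute` evaluates matches the `_offset` and `_multip` terms set up in `__init__` above; in standard form, with $n$ time points, residuals $r_i(\theta)$, and noise variance $\sigma^2$:

```latex
\log \mathcal{L}(\theta) = -\tfrac{n}{2}\log\left(2\pi\sigma^2\right)
  - \frac{1}{2\sigma^2}\sum_{i=1}^{n} r_i(\theta)^2
```

The `MAP` cost then adds the log-prior, as in `posterior = log_likelihood + log_prior` above: $\log p(\theta \mid \mathcal{D}) = \log \mathcal{L}(\theta) + \log p(\theta) + \text{const}$.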
18 changes: 9 additions & 9 deletions pybop/costs/_weighted_cost.py
@@ -22,7 +22,7 @@ class WeightedCost(BaseCost):
A list of values with which to weight the cost values.
_has_identical_problems : bool
If True, the shared problem will be evaluated once and saved before the
-self._evaluate() method of each cost is called (default: False).
+self.compute() method of each cost is called (default: False).
_has_separable_problem: bool
If True, the shared problem is separable from the cost function and
will be evaluated for each problem before the cost evaluation is
@@ -54,7 +54,7 @@ def __init__(self, *costs, weights: Optional[list[float]] = None):

# Check if all costs depend on the same problem
self._has_identical_problems = all(
-cost._has_separable_problem and cost.problem is self.costs[0].problem
+cost.has_separable_problem and cost.problem is self.costs[0].problem
for cost in self.costs
)

@@ -80,9 +80,9 @@ def __init__(self, *costs, weights: Optional[list[float]] = None):
# Weighted costs do not use this functionality
self._has_separable_problem = False

-def _evaluate(self, inputs: Inputs):
+def compute(self, inputs: Inputs):
"""
-Calculate the weighted cost for a given set of parameters.
+Compute the weighted cost for a given set of parameters.

Parameters
----------
@@ -105,15 +105,15 @@ def _evaluate(self, inputs: Inputs):
inputs = cost.parameters.as_dict()
if self._has_identical_problems:
cost.y = self.y
-elif cost._has_separable_problem:
+elif cost.has_separable_problem:
cost.y = cost.problem.evaluate(
inputs, update_capacity=self.update_capacity
)
-e[i] = cost._evaluate(inputs)
+e[i] = cost.compute(inputs)

return np.dot(e, self.weights)

-def _evaluateS1(self, inputs: Inputs):
+def computeS1(self, inputs: Inputs):
"""
Compute the weighted cost and its gradient with respect to the parameters.

@@ -140,9 +140,9 @@ def _evaluateS1(self, inputs: Inputs):
inputs = cost.parameters.as_dict()
if self._has_identical_problems:
cost.y, cost.dy = (self.y, self.dy)
-elif cost._has_separable_problem:
+elif cost.has_separable_problem:
cost.y, cost.dy = cost.problem.evaluateS1(inputs)
-e[i], de[:, i] = cost._evaluateS1(inputs)
+e[i], de[:, i] = cost.computeS1(inputs)

e = np.dot(e, self.weights)
de = np.dot(de, self.weights)
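Since `WeightedCost.compute` reduces to a dot product of sub-cost values with the weights (`np.dot(e, self.weights)` above), here is a self-contained sketch of that combination step; the lambda costs and weights are illustrative stand-ins, not PyBOP API:

```python
import numpy as np

# Illustrative stand-ins for two sub-costs; WeightedCost would call each
# sub-cost's compute() instead.
costs = [lambda x: float(np.sum(x**2)), lambda x: float(np.sum(np.abs(x)))]
weights = np.array([1.0, 0.5])

def weighted_cost(x):
    # Collect each sub-cost's value, then apply the weights, mirroring
    # `e[i] = cost.compute(inputs)` and `np.dot(e, self.weights)`.
    e = np.array([cost(x) for cost in costs])
    return float(np.dot(e, weights))

print(weighted_cost(np.array([1.0, -2.0])))  # 1.0 * 5.0 + 0.5 * 3.0 = 6.5
```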
36 changes: 25 additions & 11 deletions pybop/costs/base_cost.py
@@ -24,7 +24,7 @@ class BaseCost:
The number of outputs in the model.
_has_separable_problem : bool
If True, the problem is separable from the cost function and will be
-evaluated in advance of the call to self._evaluate() (default: False).
+evaluated in advance of the call to self.compute() (default: False).
"""

def __init__(self, problem: Optional[BaseProblem] = None):
Expand All @@ -38,7 +38,7 @@ def __init__(self, problem: Optional[BaseProblem] = None):
self.dy = None
self.set_fail_gradient()
if isinstance(self.problem, BaseProblem):
-self._target = self.problem._target
+self._target = self.problem.target
self.parameters.join(self.problem.parameters)
self.n_outputs = self.problem.n_outputs
self.signal = self.problem.signal
@@ -53,6 +53,10 @@
def has_separable_problem(self):
return self._has_separable_problem

+@property
+def target(self):
+return self._target

def __call__(self, inputs: Union[Inputs, list]):
"""
Call the evaluate function for a given set of parameters.
@@ -61,7 +65,8 @@ def __call__(self, inputs: Union[Inputs, list]):

def evaluate(self, inputs: Union[Inputs, list]):
"""
-Call the evaluate function for a given set of parameters.
+This method calls the forward model via problem.evaluate(inputs),
+and computes the cost for the given output by calling self.compute(inputs).

Parameters
----------
@@ -88,24 +93,25 @@ def evaluate(self, inputs: Union[Inputs, list]):
inputs, update_capacity=self.update_capacity
)

-return self._evaluate(inputs)
+return self.compute(inputs)

except NotImplementedError as e:
raise e

except Exception as e:
raise ValueError(f"Error in cost calculation: {e}") from e

def _evaluate(self, inputs: Inputs):
def compute(self, inputs: Inputs):
"""
Calculate the cost function value for a given set of parameters.
Calculates the cost function value for a given set of parameters.

This method only computes the cost, without calling the problem.evaluate.
This method must be implemented by subclasses.

Parameters
----------
inputs : Inputs
-The parameters for which to evaluate the cost.
+The parameters for which to compute the cost.

Returns
-------
@@ -121,12 +127,17 @@

def evaluateS1(self, inputs: Union[Inputs, list]):
"""
-Call _evaluateS1 for a given set of parameters.
+This method calls the forward model via problem.evaluateS1(inputs),
+and computes the cost for the given output by calling self.computeS1(inputs).
+
+The returned value includes the gradient of the cost function, computed by
+self.computeS1(inputs) from the sensitivity information provided by the
+forward model via problem.evaluateS1(inputs).

Parameters
----------
inputs : Inputs or array-like
-The parameters for which to compute the cost and gradient.
+The parameters for which to evaluate the cost and gradient.

Returns
-------
@@ -147,18 +158,21 @@ def evaluateS1(self, inputs: Union[Inputs, list]):
if self._has_separable_problem:
self.y, self.dy = self.problem.evaluateS1(inputs)

-return self._evaluateS1(inputs)
+return self.computeS1(inputs)

except NotImplementedError as e:
raise e

except Exception as e:
raise ValueError(f"Error in cost calculation: {e}") from e

-def _evaluateS1(self, inputs: Inputs):
+def computeS1(self, inputs: Inputs):
"""
Compute the cost and its gradient with respect to the parameters.

+This method only computes the cost and its gradient, without calling `problem.evaluateS1()`.
This method must be implemented by subclasses.

Parameters
----------
inputs : Inputs
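Tying the renames together, here is a minimal, hypothetical subclass in the spirit of `examples/standalone/cost.py` above: subclasses now override `compute`/`computeS1` rather than `_evaluate`/`_evaluateS1`, while the public `evaluate`/`evaluateS1` wrappers handle any problem evaluation. The class name, the quadratic form, and the assumption that `inputs` maps parameter names to values are illustrative, not part of this PR.

```python
import numpy as np

import pybop  # assumes a PyBOP install that includes this PR


class StandaloneQuadraticCost(pybop.BaseCost):
    """Hypothetical cost: cost(x) = x^2 + 42, with gradient 2x."""

    def compute(self, inputs):
        # Computes only the cost; BaseCost.evaluate() performs any
        # problem evaluation beforehand for problem-based costs.
        x = np.asarray(list(inputs.values()))
        return float(np.sum(x**2) + 42.0)

    def computeS1(self, inputs):
        # Returns (cost, gradient). For problem-based costs,
        # BaseCost.evaluateS1() would supply sensitivities; this
        # standalone example differentiates analytically instead.
        x = np.asarray(list(inputs.values()))
        return float(np.sum(x**2) + 42.0), 2.0 * x
```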