Introduce SPORES in v0.7.0 as a generalisable mode #716

Open
wants to merge 22 commits into base: main from feature-spores-generalised
Changes from 21 commits
Commits
22 commits
325ab50
Add `broadcast_param_data` config option and default it to False.
brynpickering Nov 19, 2024
6180cf1
Add spores scenarios to example and add notebook
FLomb Nov 20, 2024
1d22193
Update spores_run.py
FLomb Nov 20, 2024
c45b082
Update model.yaml
FLomb Nov 20, 2024
872ca93
Update spores_run.py
FLomb Nov 20, 2024
a2b4965
Merge branch 'main' into feature-spores-generalised
brynpickering Dec 23, 2024
379dac1
Merge branch 'main' into feature-spores-generalised
brynpickering Dec 23, 2024
8e534e7
Working SPORES model
brynpickering Dec 24, 2024
42925b3
Update method to rely less on user input
brynpickering Dec 24, 2024
e6c9b56
Add tests; update example notebook
brynpickering Dec 24, 2024
d61e65f
Remove additional math
brynpickering Dec 24, 2024
3e97ec8
Update latex math test
brynpickering Dec 24, 2024
9c3d467
Add tests; minor fixes & renaming
brynpickering Feb 10, 2025
ba5638d
H -> h
brynpickering Feb 17, 2025
45d2b7b
Merge branch 'main' into feature-spores-generalised
brynpickering Feb 17, 2025
f0feabc
Post merge fixes
brynpickering Feb 17, 2025
77808df
Changes in response to review
brynpickering Feb 21, 2025
cbfaedc
Rename spores score threshold factor
brynpickering Feb 25, 2025
86af532
Merge branch 'main' into feature-spores-generalised
brynpickering Feb 25, 2025
3611d75
Update SPORES objective; update math docs
brynpickering Feb 25, 2025
2ef3380
Fix `math math`
brynpickering Feb 25, 2025
9818b70
re-introduce unmet demand in spores objective; remove hardcoded spore…
brynpickering Feb 25, 2025
19 changes: 4 additions & 15 deletions docs/advanced/mode.md
@@ -65,9 +65,6 @@ For this reason, `horizon` must always be equal to or larger than `window`.

## SPORES mode

!!! warning
SPORES mode has not yet been re-implemented in Calliope v0.7.

`SPORES` refers to Spatially-explicit Practically Optimal REsultS.
This run mode allows a user to generate any number of alternative results which are within a certain range of the optimal cost.
It follows on from previous work in the field of `modelling to generate alternatives` (MGA), with a particular emphasis on alternatives that vary maximally in the spatial dimension.
@@ -77,18 +74,10 @@ As an example, if you wanted to generate 10 SPORES, all of which are within 10%

```yaml
config.build.mode: spores
config.solve:
  # The number of SPORES to generate:
  spores_number: 10
  # The cost class to optimise against when generating SPORES:
  spores_score_cost_class: spores_score
  # The initial system cost to limit the SPORES to fit within:
  spores_cost_max: .inf
  # The cost class to constrain to be less than or equal to `spores_cost_max`:
  spores_slack_cost_group: monetary
parameters:
  # The fraction above the cost-optimal cost to set the maximum cost during SPORES:
  slack: 0.1
# The number of SPORES to generate:
config.solve.spores.number: 10
# The fraction above the cost-optimal cost to set the maximum cost during SPORES:
parameters.spores_slack: 0.1
```

You will now also need a `spores_score` cost class in your model.
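For illustration, a minimal Python sketch of a SPORES run with the configuration above; the `spores` scenario name and the `spores` results dimension are taken from the example notebook added elsewhere in this PR, not from this documentation page.

```python
import calliope

# Load the national-scale example with its SPORES scenario, which sets
# `config.build.mode`, `config.solve.spores.number` and `parameters.spores_slack`
# as described above.
model = calliope.examples.national_scale(scenario="spores")
model.build()
model.solve()

# SPORES-mode results gain a `spores` dimension: the baseline plus one entry per SPORE.
print(model.results.spores.values)
```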
1 change: 0 additions & 1 deletion docs/examples/loading_tabular_data.py
@@ -1,7 +1,6 @@
# ---
# jupyter:
# jupytext:
# custom_cell_magics: kql
# text_representation:
# extension: .py
# format_name: percent
190 changes: 190 additions & 0 deletions docs/examples/modes.py
@@ -0,0 +1,190 @@
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: percent
# format_version: '1.3'
# jupytext_version: 1.16.4
# kernelspec:
# display_name: calliope_docs_build
# language: python
# name: calliope_docs_build
# ---

# %% [markdown]
# # Running models in different modes
#
# Models can be built and solved in different modes:

# - `plan` mode.
# In `plan` mode, the user defines upper and lower boundaries for technology capacities and the model decides on an optimal system configuration.
# In this configuration, the total cost of investing in technologies and then using them to meet demand in every _timestep_ (e.g., every hour) is as low as possible.
# - `operate` mode.
# In `operate` mode, all capacity constraints are fixed and the system is operated with a receding horizon control algorithm.
# This is sometimes known as a `dispatch` model - we're only concerned with the _dispatch_ of technologies whose capacities are already fixed.
# Optimisation is limited to a time horizon which is rolled forward in steps of a `window` length, with foresight extending over a longer (or equal) `horizon`, so the model never sees the full time series at once.
# - `spores` mode.
# `SPORES` refers to Spatially-explicit Practically Optimal REsultS.
# This run mode allows a user to generate any number of alternative results which are within a certain range of the optimal cost.

# In this notebook we will run the Calliope national scale example model in these three modes.

# More detail on these modes is given in the [_advanced_ section of the Calliope documentation](https://calliope.readthedocs.io/en/latest/advanced/mode/).

# %%

import plotly.express as px
import plotly.graph_objects as go
import xarray as xr

import calliope

# We update logging to show a bit more information but to hide the solver output, which can be long.
calliope.set_log_verbosity("INFO", include_solver_output=False)

# %% [markdown]
# ## Running in `plan` mode.

# %%
# We subset to the same time range as operate mode
model_plan = calliope.examples.national_scale(time_subset=["2005-01-01", "2005-01-10"])
model_plan.build()
model_plan.solve()

# %% [markdown]
# ## Running in `operate` mode.

# %%
model_operate = calliope.examples.national_scale(scenario="operate")
model_operate.build()
model_operate.solve()

# %% [markdown]
# Note how the capacity variables now appear as parameters in the inputs and only dispatch variables appear in the results.

# %%
model_operate.inputs[["flow_cap", "storage_cap", "area_use"]]

# %%
model_operate.results

# %% [markdown]
# ## Running in `spores` mode.

# %%
# We subset to the same time range as operate/plan mode
model_spores = calliope.examples.national_scale(
scenario="spores", time_subset=["2005-01-01", "2005-01-10"]
)
model_spores.build()
model_spores.solve()

# %% [markdown]
# Note how we have a new `spores` dimension in our results.

# %%
model_spores.results

# %% [markdown]
# We can track the SPORES scores used between iterations using the `spores_score_cumulative` result.
# This scoring mechanism is based on increasing the score of any technology-node combination where the technology has non-zero capacity in the latest solution, nudging subsequent SPORES towards deploying different technologies in different places.

# %%
# We do some prettification of the outputs
model_spores.results.spores_score_cumulative.to_series().where(
lambda x: x > 0
).dropna().unstack("spores")
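As an aside, the per-SPORE capacities can also be compared directly as a table before any plotting. This is a sketch that assumes, consistent with the plotting functions below, that `flow_cap` in SPORES-mode results carries `carriers` and `spores` dimensions.

```python
# Tabulate non-zero installed power capacity per technology and node,
# with one column per SPORE.
cap_per_spore = (
    model_spores.results.flow_cap.sel(carriers="power")
    .to_series()
    .where(lambda x: x > 0)
    .dropna()
    .unstack("spores")
)
cap_per_spore
```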

# %% [markdown]
# ## Visualising results
#
# We can use [plotly](https://plotly.com/) to quickly examine our results.
# These are just some examples of how to visualise Calliope data.

# %%
# We set the color mapping to use in all our plots by extracting the colors defined in the technology definitions of our model.
# We also create some reusable plotting functions.
colors = model_plan.inputs.color.to_series().to_dict()


def plot_flows(results: xr.Dataset) -> go.Figure:
df_electricity = (
(results.flow_out.fillna(0) - results.flow_in.fillna(0))
.sel(carriers="power")
.sum("nodes")
.to_series()
.where(lambda x: x != 0)
.dropna()
.to_frame("Flow in/out (kWh)")
.reset_index()
)
df_electricity_demand = df_electricity[df_electricity.techs == "demand_power"]
df_electricity_other = df_electricity[df_electricity.techs != "demand_power"]

fig = px.bar(
df_electricity_other,
x="timesteps",
y="Flow in/out (kWh)",
color="techs",
color_discrete_map=colors,
)
fig.add_scatter(
x=df_electricity_demand.timesteps,
y=-1 * df_electricity_demand["Flow in/out (kWh)"],
marker_color="black",
name="demand",
)
return fig


def plot_capacity(results: xr.Dataset, **plotly_kwargs) -> go.Figure:
df_capacity = (
results.flow_cap.where(results.techs != "demand_power")
.sel(carriers="power")
.to_series()
.where(lambda x: x != 0)
.dropna()
.to_frame("Flow capacity (kW)")
.reset_index()
)

fig = px.bar(
df_capacity,
x="nodes",
y="Flow capacity (kW)",
color="techs",
color_discrete_map=colors,
**plotly_kwargs,
)
return fig


# %% [markdown]
# ## `plan` vs `operate`
# Here, we compare flows over the 10 days.
# Note how flows do not match as the rolling horizon makes it difficult to make the correct storage charge/discharge decisions.

# %%
fig_flows_plan = plot_flows(
model_plan.results.sel(timesteps=model_operate.results.timesteps)
)
fig_flows_plan.update_layout(title="Plan mode flows")


# %%
fig_flows_operate = plot_flows(model_operate.results)
fig_flows_operate.update_layout(title="Operate mode flows")

# %% [markdown]
# ## `plan` vs `spores`
# Here, we compare installed capacities between the baseline run (== `plan` mode) and the SPORES.
# Note how the baseline SPORE is the same as `plan` mode and then results deviate considerably.

# %%
fig_flows_plan = plot_capacity(model_plan.results)
fig_flows_plan.update_layout(title="Plan mode capacities")

# %%
fig_flows_spores = plot_capacity(model_spores.results, facet_col="spores")
fig_flows_spores.update_layout(title="SPORES mode capacities")
2 changes: 1 addition & 1 deletion docs/examples/national_scale/notebook.py
@@ -109,7 +109,7 @@

# %% [markdown]
# #### Plotting flows
# We do this by combinging in- and out-flows and separating demand from other technologies.
# We do this by combining in- and out-flows and separating demand from other technologies.
# First, we look at the aggregated result across all nodes, then we look at each node separately.

# %%
11 changes: 9 additions & 2 deletions docs/hooks/dummy_model/model.yaml
@@ -5,12 +5,19 @@ overrides:
time_cluster: cluster_days.csv
config.build:
add_math: ["storage_inter_cluster"]
spores:
  config:
    init.name: SPORES solve mode
    build.mode: spores
    solve.spores.number: 2
  parameters:
    spores_slack: 0.1

config.init.name: base

nodes:
A.techs: {demand_tech, conversion_tech, supply_tech, storage_tech}
B.techs: {demand_tech, conversion_tech, supply_tech, storage_tech}
A.techs: { demand_tech, conversion_tech, supply_tech, storage_tech }
B.techs: { demand_tech, conversion_tech, supply_tech, storage_tech }

techs:
tech_transmission:
2 changes: 1 addition & 1 deletion docs/hooks/generate_math_docs.py
@@ -67,7 +67,7 @@ def on_files(files: list, config: dict, **kwargs):
f"{override}.yaml",
textwrap.dedent(
f"""
Pre-defined additional math to apply {custom_documentation.name} math on top of the [base mathematical formulation][base-math].
Pre-defined additional math to apply {custom_documentation.name} __on top of__ the [base mathematical formulation][base-math].
This math is _only_ applied if referenced in the `config.init.add_math` list as `{override}`.
"""
),
1 change: 1 addition & 0 deletions mkdocs.yml
@@ -132,6 +132,7 @@ nav:
- examples/milp/index.md
- examples/milp/notebook.py
- examples/loading_tabular_data.py
- examples/modes.py
- examples/piecewise_constraints.py
- examples/calliope_model_object.py
- examples/calliope_logging.py
30 changes: 14 additions & 16 deletions src/calliope/backend/backend_model.py
@@ -57,6 +57,7 @@ class BackendModelGenerator(ABC):
"default",
"type",
"title",
"sense",
"math_repr",
"original_dtype",
]
@@ -65,6 +66,8 @@ class BackendModelGenerator(ABC):
_PARAM_DESCRIPTIONS = extract_from_schema(MODEL_SCHEMA, "description")
_PARAM_UNITS = extract_from_schema(MODEL_SCHEMA, "x-unit")
_PARAM_TYPE = extract_from_schema(MODEL_SCHEMA, "x-type")
objective: str
"""Optimisation problem objective name."""

def __init__(
self, inputs: xr.Dataset, math: CalliopeMath, build_config: config_schema.Build
@@ -170,6 +173,14 @@ def add_objective(
objective_dict (parsing.UnparsedObjective): Unparsed objective configuration dictionary.
"""

@abstractmethod
def set_objective(self, name: str) -> None:
"""Set a built objective to be the optimisation objective.

Args:
name (str): name of the objective.
"""

def log(
self,
component_type: ALL_COMPONENTS_T,
@@ -928,13 +939,7 @@ def has_integer_or_binary_variables(self) -> bool:

@abstractmethod
def _solve(
self,
solver: str,
solver_io: str | None = None,
solver_options: dict | None = None,
save_logs: str | None = None,
warmstart: bool = False,
**solve_config,
self, solve_config: config_schema.Solve, warmstart: bool = False
) -> xr.Dataset:
"""Optimise built model.

@@ -943,17 +948,10 @@ def _solve(
values at optimality.

Args:
solver (str): Name of solver to optimise with.
solver_io (str | None, optional): If chosen solver has a python interface, set to "python" for potential
performance gains, otherwise should be left as None. Defaults to None.
solver_options (dict | None, optional): Solver options/parameters to pass directly to solver.
See solver documentation for available parameters that can be influenced. Defaults to None.
save_logs (str | None, optional): If given, solver logs and built LP file will be saved to this filepath.
Defaults to None.
solve_config (config_schema.Solve): Calliope Solve configuration object.
warmstart (bool, optional): If True, and the chosen solver is capable of implementing it, an existing
optimal solution will be used to warmstart the next solve run.
Defaults to False.
**solve_config: solve configuration overrides.

Returns:
xr.Dataset: Dataset of decision variable values if the solution was optimal/feasible,
@@ -1144,7 +1142,7 @@ def track_constraints(self, constraints_to_track: list):
valid_constraints = shadow_prices.intersection(self.available_constraints)
if invalid_constraints:
model_warn(
f"Invalid constraints {invalid_constraints} in `config.solve.shadow_prices`. "
f"Invalid constraints {invalid_constraints} in `config_schema.solve.shadow_prices`. "
"Their shadow prices will not be tracked."
)
# Only actually activate shadow price tracking if at least one valid