Commit

Feature/demos (#150)
* added history matching demo code

* tweaked history matching demo to be more consistent

* added demo for dimension reduction

* updated docs to include new demos

* incremented version number for merge
edaub authored Jan 12, 2021
1 parent 88d8bab commit 2e0200e
Showing 6 changed files with 209 additions and 1 deletion.
12 changes: 12 additions & 0 deletions docs/demos/historymatch_demos.rst
@@ -0,0 +1,12 @@
.. _historymatch_demos:

History Matching Demos
========================================================

This demo shows how to carry out History Matching using a GP emulator.
The two examples show how a fitted GP can be passed directly to the
``HistoryMatching`` class, and how a predictions object can be passed
instead. The demo also shows how other options, such as the implausibility
threshold and a model discrepancy, can be set.
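
A minimal sketch of the two patterns (assuming a fitted GP ``gp``, sample
coordinates ``coords``, and that the constructor accepts the same arguments as
the setter methods used in the demo; the full, runnable version is included
below):

.. code-block:: python

   # Pattern 1: let HistoryMatching query the fitted GP directly
   hm = mogp_emulator.HistoryMatching(gp=gp, obs=[1., 0.08], coords=coords)
   implausibility = hm.get_implausibility()

   # Pattern 2: make predictions externally and pass them in
   expectations = gp.predict(coords)
   hm = mogp_emulator.HistoryMatching(obs=[1., 0.08], expectations=expectations)
   implausibility = hm.get_implausibility()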

.. literalinclude::
../../mogp_emulator/demos/historymatch_demos.py
13 changes: 13 additions & 0 deletions docs/demos/kdr_demos.rst
@@ -0,0 +1,13 @@
.. _kdr_demos:

Kernel Dimension Reduction (KDR) Demos
========================================================

This demo shows how to use the ``gKDR`` class to perform dimension reduction
on the inputs to an emulator. The examples show how dimension reduction
with a known number of dimensions can be fitted, as well as how the class
can use cross validation to infer the best number of dimensions from the
data itself.
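
A minimal sketch of the two patterns (assuming existing ``inputs`` and
``targets`` arrays; the full, runnable version is included below):

.. code-block:: python

   # dimension reduction with a fixed number of dimensions
   dr = mogp_emulator.gKDR(inputs, targets, K=1)
   gp = mogp_emulator.fit_GP_MAP(dr(inputs), targets)

   # infer the number of dimensions by cross validation
   dr_tuned, loss = mogp_emulator.gKDR.tune_parameters(inputs, targets,
                                                        mogp_emulator.fit_GP_MAP)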

.. literalinclude::
../../mogp_emulator/demos/kdr_demos.py
2 changes: 2 additions & 0 deletions docs/index.rst
@@ -33,6 +33,8 @@ details, and some included benchmarks.

demos/gp_demos
demos/mice_demos
demos/historymatch_demos
demos/kdr_demos
demos/gp_demoR
demos/excalibur_workshop_demo

100 changes: 100 additions & 0 deletions mogp_emulator/demos/historymatch_demos.py
@@ -0,0 +1,100 @@
import mogp_emulator
import numpy as np

# simple History Matching example

# simulator function -- takes a single input point and returns a single number

def f(x):
    return np.exp(-np.sum((x - 2.)**2, axis=-1)/2.)

# Experimental design -- requires a list of parameter bounds if you would like to use
# uniform distributions. If you want to use different distributions, you
# can use any of the standard distributions available in scipy to create
# the appropriate ppf function (the inverse of the cumulative distribution).
# Internally, the code creates the design on the unit hypercube and then uses
# the distribution to map from [0,1] to the real parameter space.

ed = mogp_emulator.LatinHypercubeDesign([(0., 5.), (0., 5.)])
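
# An illustrative sketch (an assumption, not part of the original demo): if the
# design classes accept a list of scipy ppf callables as described above, a
# design with non-uniform marginals might look like
#
#     from scipy.stats import norm, uniform
#     ed_mixed = mogp_emulator.LatinHypercubeDesign(
#         [norm(loc=2.5, scale=1.).ppf, uniform(loc=0., scale=5.).ppf])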

# sample space, use many samples to ensure we get a good emulator

inputs = ed.sample(50)

# run simulation

targets = np.array([f(p) for p in inputs])

# Example observational data is a single value plus an uncertainty.
# Here we use a value of 1, which corresponds to inputs close to (2, 2), so
# those are the points that history matching should not rule out.

###################################################################################

# First step -- fit GP via MAP estimation using a Squared Exponential kernel

gp = mogp_emulator.GaussianProcess(inputs, targets)

gp = mogp_emulator.fit_GP_MAP(gp)

###################################################################################

# First Example: Use HistoryMatching class to make the predictions

print("Example 1: Make predictions with HistoryMatching object")

# create HistoryMatching object, set threshold to be low to make printed output
# easier to read

threshold = 0.01
hm = mogp_emulator.HistoryMatching(threshold=threshold)
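
# (Note: points whose implausibility exceeds the threshold are ruled out; a
# threshold of 3 is the conventional choice, and the small value used here is
# purely to keep the printed NROY list short.)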

# For this example, we set the observations, GP, and coordinates explicitly.
# The observations are either a single float (the value) or two floats (the
# value and its uncertainty, given as a variance)

obs = [1., 0.08]
hm.set_obs(obs)
hm.set_gp(gp)

# set the coordinates where we will test whether the points can plausibly
# explain the data. Here we use our existing experimental design, but sample
# 10000 points

coords = ed.sample(10000)
hm.set_coords(coords)

# calculate implausibility metric

implaus = hm.get_implausibility()
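
# (For reference, the implausibility of a point x is roughly
#     I(x) = |obs_value - emulator_mean(x)| / sqrt(obs_variance + emulator_variance(x)),
# so a point is ruled out when the emulator confidently predicts a value far
# from the observation.)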

# print points that we have not ruled out yet:

for p, im in zip(coords[hm.get_NROY()], implaus[hm.get_NROY()]):
    print("Sample point: {} Implausibility: {}".format(p, im))

###################################################################################

# Second Example: Pass external GP predictions and add model discrepancy

print("Example 2: External Predictions and Model Discrepancy")

# use gp to make predictions on 10000 new points externally

coords = ed.sample(10000)

expectations = gp.predict(coords)
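
# (The predictions object bundles the predictive means and variances, both of
# which HistoryMatching needs to form the implausibility, so the full object
# is passed rather than just the means.)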

# now create HistoryMatching object with these new parameters

hm_extern = mogp_emulator.HistoryMatching(obs=obs, expectations=expectations,
                                          threshold=threshold)

# calculate implausibility, adding a model discrepancy (as a variance)

implaus_extern = hm_extern.get_implausibility(0.1)
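
# (The model discrepancy is an additional variance added to the observational
# and emulator variances in the implausibility denominator, inflating the
# uncertainty to allow for structural error in the simulator, so it typically
# rules out fewer points than Example 1.)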

# print points that we have not ruled out yet:

for p, im in zip(coords[hm_extern.get_NROY()], implaus_extern[hm_extern.get_NROY()]):
    print("Sample point: {} Implausibility: {}".format(p, im))
81 changes: 81 additions & 0 deletions mogp_emulator/demos/kdr_demos.py
@@ -0,0 +1,81 @@
import mogp_emulator
import numpy as np

# simple Dimension Reduction examples

# simulator function -- the output depends on only a single linear combination
# of the inputs, which must have at least 4 components

def f(x):
    return (x[0] - x[1] + 2.*x[3])/3.

# Experimental design -- create a design with 5 input parameters
# all uniformly distributed over [0,1].

ed = mogp_emulator.LatinHypercubeDesign(5)

# sample space

inputs = ed.sample(100)

# run simulation

targets = np.array([f(p) for p in inputs])

###################################################################################

# First example -- dimension reduction given a specified number of dimensions
# (note that in practice, we would not know that the underlying simulation
# depends on only a single dimension)

print("Example 1: Basic Dimension Reduction")

# create DR object with a single reduced dimension (K = 1)

dr = mogp_emulator.gKDR(inputs, targets, K=1)
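
# (gKDR estimates a projection matrix from the input-output data; calling
# dr(inputs) projects the 5-dimensional inputs onto the K=1 reduced dimension,
# so the GP below is fit to a (100, 1) array rather than the original (100, 5).)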

# use it to create GP

gp = mogp_emulator.fit_GP_MAP(dr(inputs), targets)

# create 5 target points to predict

predict_points = ed.sample(5)
predict_actual = np.array([f(p) for p in predict_points])

means = gp(dr(predict_points))

for pp, m, a in zip(predict_points, means, predict_actual):
    print("Target point: {} Predicted mean: {} Actual value: {}".format(pp, m, a))

###################################################################################

# Second Example: Estimate dimensions from data

print("Example 2: Estimate the number of dimensions from the data")

# Use the tune_parameters method, which uses cross validation, to create the DR
# object. Note this is more realistic than the first example, as the number of
# dimensions is not known in advance

dr_tuned, loss = mogp_emulator.gKDR.tune_parameters(inputs, targets,
                                                     mogp_emulator.fit_GP_MAP,
                                                     cXs=[3.], cYs=[3.])
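
# (cXs and cYs are lists of candidate scale factors for the kernel bandwidths
# that gKDR uses internally; restricting each to a single value keeps the
# cross-validation search small so the demo runs quickly. The returned loss is
# the cross-validation error of the best combination found.)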

# Get number of inferred dimensions (usually gives 2)

print("Number of inferred dimensions is {}".format(dr_tuned.K))

# use object to create GP

gp_tuned = mogp_emulator.fit_GP_MAP(dr_tuned(inputs), targets)

# create 5 target points to predict

predict_points = ed.sample(5)
predict_actual = np.array([f(p) for p in predict_points])

means = gp_tuned(dr_tuned(predict_points))

for pp, m, a in zip(predict_points, means, predict_actual):
    print("Target point: {} Predicted mean: {} Actual value: {}".format(pp, m, a))
2 changes: 1 addition & 1 deletion setup.py
@@ -4,7 +4,7 @@
MAJOR = 0
MINOR = 5
MICRO = 0
-PRERELEASE = 0
+PRERELEASE = 1
ISRELEASED = False
version = "{}.{}.{}".format(MAJOR, MINOR, MICRO)

