Add utility to configure PDHG for use with explicit TGV or TV regularisation #1766

Draft: wants to merge 15 commits into base: master
10 changes: 7 additions & 3 deletions Wrappers/Python/cil/framework/block.py
@@ -176,7 +176,6 @@ def __init__(self, *args, **kwargs):

def __iter__(self):
'''BlockDataContainer is Iterable'''
self.index=0
return self
def next(self):
'''python2 backwards compatibility'''
@@ -703,14 +702,19 @@ def __neg__(self):
return -1 * self

def dot(self, other):
#
tmp = [ self.containers[i].dot(other.containers[i]) for i in range(self.shape[0])]
if not isinstance(other, BlockDataContainer):
tmp = [ self.containers[i].dot(other) for i in range(self.shape[0])]
else:
tmp = [ self.containers[i].dot(other.containers[i]) for i in range(self.shape[0])]
return sum(tmp)

def __len__(self):

return self.shape[0]

def max(self):
return max([el.max() for el in self.containers])

@property
def geometry(self):
try:
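A minimal sketch of what the updated dot and the new max allow (an illustration, not part of the diff; it assumes VectorGeometry.allocate accepts a numeric fill value). dot now accepts a plain DataContainer as well as a BlockDataContainer:

from cil.framework import VectorGeometry, BlockDataContainer

vg = VectorGeometry(3)
x = vg.allocate(2.0)                 # [2, 2, 2]
y = vg.allocate(3.0)                 # [3, 3, 3]
bdc = BlockDataContainer(x, y)

bdc.dot(BlockDataContainer(x, y))    # block-wise dot products, summed: 12 + 27 = 39
bdc.dot(x)                           # the same DataContainer dotted with every block: 12 + 18 = 30
bdc.max()                            # maximum over all blocks: 3.0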
93 changes: 93 additions & 0 deletions Wrappers/Python/cil/optimisation/utilities/PDHG.py
@@ -0,0 +1,93 @@
# set up TGV
from cil.optimisation.functions import MixedL21Norm, BlockFunction, L2NormSquared, ScaledFunction
from cil.optimisation.operators import BlockOperator, IdentityOperator, GradientOperator, \
SymmetrisedGradientOperator, ZeroOperator

Review comment: Can we call this file something other than PDHG.py? e.g. set_up_PDHG.py or something similar?

def setup_explicit_TGV(A, data, alpha, delta=1.0, omega=1):
'''Function to set up the least-squares (LS) + TGV problem for use with explicit PDHG

Review comment: Need a TGV equation in here, defining alpha and beta and omega.

Parameters
----------
A : ProjectionOperator
Forward operator.
data : AcquisitionData
The acquired data to be fitted (enters the least-squares term).
alpha : float
Regularisation parameter on the first-order (gradient) term.
Review comment: Regularisation parameter for which part?

delta : float, default 1.0
The regularisation parameter for the symmetrised-gradient term, beta, is controlled by delta
via beta = delta * alpha.
omega : float, default 1.0
Review comment: Least squares uses c instead of omega, perhaps we could follow suit?

The constant in front of the data-fitting term. Mathematicians like it to be 1/2, but it defaults to 1,
in which case no scaling is applied.

Returns
Review comment: Could we add a code snippet to explain how to use the returned K and F in PDHG? Also need to explain briefly what we mean by "explicit".

-------
K : BlockOperator
The (3, 2) BlockOperator combining A, the scaled gradient and the scaled symmetrised gradient.
F : BlockFunction
The BlockFunction combining the (optionally scaled) data-fitting term and two MixedL21Norm terms.
'''

# delta = beta / alpha
# beta = alpha * delta
beta = alpha * delta
f1 = L2NormSquared(b=data)
if omega != 1:
f1 = omega * f1
f2 = MixedL21Norm()
f3 = MixedL21Norm()
F = BlockFunction(f1, f2, f3)

# Define BlockOperator K

# Set up the three operators: A (projection), Grad (gradient) and Epsilon (the symmetrised gradient)
Review comment: What is Epsilon?

# A, the projection operator is passed by the user
K11 = A
grad = GradientOperator(K11.domain)
K21 = alpha * grad
# https://tomographicimaging.github.io/CIL/nightly/optimisation.html#cil.optimisation.operators.SymmetrisedGradientOperator
K32 = beta * SymmetrisedGradientOperator(K21.range)
# these define the domain and range of the other operators
K12 = ZeroOperator(K32.domain, K11.range)
K22 = -alpha * IdentityOperator(domain_geometry=K21.range, range_geometry=K32.range)
K31 = ZeroOperator(K11.domain, K32.range)

K = BlockOperator(K11, K12, K21, K22, K31, K32, shape=(3,2) )

return K, F
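Reading off K and F above, the assembled objective is

min over (u, w) of omega * ||A u - data||_2^2 + alpha * ||grad(u) - w||_{2,1} + beta * ||E(w)||_{2,1}, with beta = delta * alpha,

where E is the symmetrised gradient and K is the (3, 2) block operator [[A, 0], [alpha*grad, -alpha*I], [0, beta*E]] acting on the pair (u, w). "Explicit" here means every term is handled through K and the proximal maps of F, rather than through a nested proximal solver for the regulariser. A minimal sketch of how the returned K and F might be passed to PDHG; the choice g = ZeroFunction() is an assumption (an IndicatorBox(lower=0) on the first block would add non-negativity):

from cil.optimisation.algorithms import PDHG
from cil.optimisation.functions import ZeroFunction

K, F = setup_explicit_TGV(A, data, alpha=0.1, delta=1.0)
pdhg = PDHG(f=F, g=ZeroFunction(), operator=K, update_objective_interval=10)
pdhg.run(500)
reconstruction = pdhg.solution[0]    # u; pdhg.solution[1] is the auxiliary field w

If sigma and tau are not given, PDHG derives defaults from the norm of K, so none are set here.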


def setup_explicit_TV(A, data, alpha, omega=1):
'''Function to set up the least-squares (LS) + TV problem for use with explicit PDHG

Review comment: Need the objective function written out here, to explain alpha, omega and what LS + TV means.

Parameters
----------
A : ProjectionOperator
Forward operator.
data : AcquisitionData
The acquired data to be fitted (enters the least-squares term).
alpha : float
Review comment: Can we set a default for this? e.g. using Edo's rule of thumb.

Regularisation parameter on the gradient (TV) term.
omega : float, default 1.0
Review comment: Again, should this be c?

The constant in front of the data-fitting term. Mathematicians like it to be 1/2, but it defaults to 1,
in which case no scaling is applied.

Returns
Review comment: Could we add a code snippet to explain how to use the returned K and F in PDHG? Also need to explain briefly what we mean by "explicit".

-------
K : BlockOperator
The (2, 1) BlockOperator combining A and the scaled gradient.
F : BlockFunction
The BlockFunction combining the (optionally scaled) data-fitting term and a MixedL21Norm.

'''

f1 = L2NormSquared(b=data)
if omega != 1:
f1 = omega * f1
f2 = MixedL21Norm()
F = BlockFunction(f1, f2)

# Define BlockOperator K

K11 = A
grad = GradientOperator(K11.domain)
K21 = alpha * grad

K = BlockOperator(K11, K21)

return K, F
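Here the assembled objective is min over u of omega * ||A u - data||_2^2 + alpha * ||grad(u)||_{2,1}, i.e. least squares plus (isotropic) total variation. A sketch mirroring the TGV example above; g = IndicatorBox(lower=0) is an assumed choice that adds a non-negativity constraint:

from cil.optimisation.algorithms import PDHG
from cil.optimisation.functions import IndicatorBox

K, F = setup_explicit_TV(A, data, alpha=0.1)
pdhg = PDHG(f=F, g=IndicatorBox(lower=0), operator=K, update_objective_interval=10)
pdhg.run(500)
reconstruction = pdhg.solution       # a single image this time, not a BlockDataContainer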
144 changes: 144 additions & 0 deletions Wrappers/Python/test/test_PDHG_utilities.py
@@ -0,0 +1,144 @@

from cil.optimisation.algorithms import SIRT, GD, ISTA, FISTA
from cil.optimisation.functions import LeastSquares, IndicatorBox
from cil.framework import ImageGeometry, VectorGeometry
from cil.optimisation.operators import IdentityOperator, MatrixOperator

from cil.optimisation.utilities import Sensitivity, AdaptiveSensitivity, Preconditioner
import numpy as np

from testclass import CCPiTestClass
from unittest.mock import MagicMock

from cil.framework import AcquisitionGeometry

from cil.plugins.astra.operators import ProjectionOperator
from cil.optimisation.operators import ScaledOperator
import random

# set up TGV
from cil.optimisation.functions import MixedL21Norm, BlockFunction, L2NormSquared, ScaledFunction
from cil.optimisation.operators import BlockOperator, IdentityOperator, GradientOperator, \
SymmetrisedGradientOperator, ZeroOperator


from cil.optimisation.utilities.PDHG import setup_explicit_TGV, setup_explicit_TV

class TestPDHGUtilities(CCPiTestClass):


def setUp(self):

voxel_num_xy = 255
voxel_num_z = 15

mag = 2
src_to_obj = 50
src_to_det = src_to_obj * mag

pix_size = 0.2
det_pix_x = voxel_num_xy
det_pix_y = voxel_num_z

num_projections = 1000
angles = np.linspace(0, 360, num=num_projections, endpoint=False)

ag2D = AcquisitionGeometry.create_Cone2D([0,-src_to_obj],[0,src_to_det-src_to_obj])\
.set_angles(angles)\
.set_panel(det_pix_x, pix_size)\
.set_labels(['angle','horizontal'])

self.ad2D = ag2D.allocate('random')
ig2D = ag2D.get_ImageGeometry()

ag3D = AcquisitionGeometry.create_Cone3D([0,-src_to_obj,0],[0,src_to_det-src_to_obj,0])\
.set_angles(angles)\
.set_panel((det_pix_x,det_pix_y), (pix_size,pix_size))\
.set_labels(['angle','vertical','horizontal'])

ig3D = ag3D.get_ImageGeometry()

self.ad3D = ag3D.allocate('random')
self.ad3D.reorder('astra')
ig3D = ag3D.get_ImageGeometry()


self.A_2D = ProjectionOperator(ig2D, ag2D, device = "gpu")
self.A_3D = ProjectionOperator(ig3D, self.ad3D.geometry, device = "gpu")



def test_setup_explicit_TV(self):

alphas=[1, random.randint(2,10)]
alpha = alphas[1]

omegas = [1, random.randint(2,10)]

for omega in omegas:
for alpha in alphas:

K_2D, F_2D = setup_explicit_TV(self.A_2D, self.ad2D, alpha, omega)
K_3D, F_3D = setup_explicit_TV(self.A_3D, self.ad3D, alpha, omega)

case_2D = {'A': self.A_2D, 'K': K_2D, 'F': F_2D, 'ad': self.ad2D}
case_3D = {'A': self.A_3D, 'K': K_3D, 'F': F_3D, 'ad': self.ad3D}

cases = [case_2D, case_3D]

case_names = ['2D', '3D']

for i, case in enumerate(cases):

with self.subTest(case_names[i]):
A = case['A']
K = case['K']
F = case['F']
ad = case['ad']

# Testing K --------------------------
np.testing.assert_equal(type(K), BlockOperator)
np.testing.assert_equal(K.shape, (2,1))

# K[0]
np.testing.assert_equal(K[0], A)

# K [1]

# We expect the second part of the K operator to be Grad multiplied by alpha
np.testing.assert_equal(type(K[1]), ScaledOperator)
np.testing.assert_equal(K[1].scalar, alpha)
np.testing.assert_equal(type(K[1].operator), GradientOperator)
ig = ad.geometry.get_ImageGeometry()
expected_grad = alpha*GradientOperator(ig)
np.testing.assert_allclose(expected_grad.direct(ad)[1].as_array(), expected_grad.direct(ad)[1].as_array())
Review comment: Is that comparing the same thing?

np.testing.assert_allclose(expected_grad.direct(ad)[1].as_array(), K[1].direct(ad)[1].as_array(), 10**(-4))
np.testing.assert_allclose(expected_grad.direct(ad)[0].as_array(), K[1].direct(ad)[0].as_array(), 10**(-4))
Review comment (on lines +115 to +116): I think the expected_grad should be an operator on an image geometry, not on acquisition data? Thus you should not be using ad.



# Testing F --------------------------------------
np.testing.assert_equal(type(F), BlockFunction)
np.testing.assert_equal(F.length, 2)

# F[0]

if omega == 1:
np.testing.assert_equal(L2NormSquared, type(F[0]))
function = F[0]
else:
np.testing.assert_equal(ScaledFunction, type(F[0]))
np.testing.assert_equal(F[0].scalar, omega)
function = F[0].function

np.testing.assert_equal(type(function), L2NormSquared)
np.testing.assert_array_equal(function.b.as_array(), ad.as_array())
Review comment: I would also be tempted to evaluate your functions on a small image, so reduce the size of the geometry to save computational cost, create a random test_data and then do
self.assertAlmostEqual(function(test_data), omega*LeastSquares(b=ad)(test_data))



# F[1]

np.testing.assert_equal(MixedL21Norm, type(F[1]))
Review comment: Similarly, can you test this on an object?





