In #1586, I described the new PGA class, which implements the following iteration $$x_{k+1} = \mathrm{prox}_{\gamma_{k}g}\left(x_{k} - \gamma_{k}D(x_{k})\nabla f(x_{k})\right)$$
where $D(x_{k})$ is a (diagonal) preconditioner. For this, I implemented the base class called `Preconditioner`.
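The iteration above can be sketched in plain NumPy. This is a minimal illustration of the update rule, not the CIL API; `pga_step`, `prox_g`, and `D` are hypothetical names, and the toy problem (least squares plus a nonnegativity constraint) is chosen only so the fixed point is obvious.

```python
import numpy as np

def pga_step(x, grad_f, prox_g, D, step_size):
    """One preconditioned proximal gradient iteration:
    x_{k+1} = prox_{gamma g}(x_k - gamma * D(x_k) * grad f(x_k))."""
    return prox_g(x - step_size * D(x) * grad_f(x), step_size)

# Toy problem: f(x) = 0.5 * ||x - b||^2, g = indicator of the nonnegative orthant.
b = np.array([1.0, -2.0, 3.0])
grad_f = lambda x: x - b
prox_g = lambda z, gamma: np.maximum(z, 0.0)  # projection onto x >= 0
D = lambda x: np.ones_like(x)                  # identity preconditioner

x = np.zeros(3)
for _ in range(50):
    x = pga_step(x, grad_f, prox_g, D, step_size=1.0)
# x converges to the projection of b onto the nonnegative orthant: [1, 0, 3]
```

With the identity preconditioner and `step_size=1.0`, this reduces to ordinary projected gradient descent; a diagonal `D` simply rescales the gradient componentwise before the prox is applied.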
It is an ABC class with an `abstractmethod` called `update`. Two classes inherit from it:

1) `Sensitivity`: computes the standard $D(x_{k}) = \frac{1}{A^{T}1}$. This is constant for all $x_{k}$ and is computed when the class is instantiated. The `update` method then modifies the output of `f.gradient`. It works for both CIL and SIRF.
2) `AdaptiveSensitivity`: computes $D(x_{k}) = \frac{x_{k} + \delta}{A^{T}1}$, where $\delta \geq 0$. If $\delta = 0$, we recover the standard preconditioner used in MLEM. The case $\delta > 0$ was recently used here for stochastic optimisation.
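The two diagonal preconditioners can be sketched with a small matrix standing in for the forward operator $A$ (the names below are illustrative, not the CIL classes):

```python
import numpy as np

# Toy stand-in for the forward operator A.
A = np.array([[1.0, 2.0],
              [3.0, 4.0]])

# Sensitivity: D = 1 / (A^T 1), constant in x, computed once at construction.
col_sums = A.T @ np.ones(A.shape[0])   # A^T 1 = [4, 6]
D_sensitivity = 1.0 / col_sums

# AdaptiveSensitivity: D(x_k) = (x_k + delta) / (A^T 1), with delta >= 0.
def D_adaptive(x, delta=0.0):
    return (x + delta) / col_sums

x = np.array([2.0, 6.0])
print(D_sensitivity)      # [0.25, 0.1666...]
print(D_adaptive(x))      # delta = 0: [0.5, 1.0]
```

Note the key difference: `D_sensitivity` can be precomputed once, while `D_adaptive` depends on the current iterate and must be re-evaluated inside `update` at every iteration.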
The `AdaptiveSensitivity` class also has `freezing_point` and `iterations` arguments, which are used for specific variants of this preconditioner:
> "We investigated the impact of different preconditioner inputs on the convergence of SAGA and SVRG. These included $D(x_k)$, freezing the preconditioner at $D(x_{\mathrm{OSEM}})$, and freezing the preconditioner after 5 and 10 epochs, $D(x_{5M})$ and $D(x_{10M})$ respectively."
Since the `Preconditioner` acts on `self`, i.e., the running algorithm itself, users can define their own preconditioners.
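A user-defined preconditioner would subclass the ABC and implement `update`. The sketch below assumes the hook receives the algorithm object and rescales the gradient it just computed; the attribute name `gradient_update` and the exact signature are hypothetical, not the actual CIL interface.

```python
from abc import ABC, abstractmethod
import numpy as np

class Preconditioner(ABC):
    """Minimal sketch of the ABC described above."""
    @abstractmethod
    def update(self, algorithm):
        ...

class ConstantDiagonal(Preconditioner):
    """Hypothetical user-defined preconditioner: a fixed diagonal D."""
    def __init__(self, diag):
        self.diag = np.asarray(diag)

    def update(self, algorithm):
        # Rescale the gradient the algorithm has just computed (in place).
        algorithm.gradient_update *= self.diag

class DummyAlgorithm:
    """Stand-in for the running algorithm (`self` in the text above)."""
    def __init__(self, grad):
        self.gradient_update = np.asarray(grad, dtype=float)

alg = DummyAlgorithm([1.0, 2.0])
ConstantDiagonal([0.5, 0.25]).update(alg)
print(alg.gradient_update)   # [0.5, 0.5]
```

The point of the design is that `update` sees the whole algorithm state, so a preconditioner can depend on the iterate, the iteration count, or anything else the algorithm exposes.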
Example: SIRT is equivalent to Preconditioned Projected Gradient Descent on a weighted least-squares objective.
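To make the equivalence concrete: SIRT is preconditioned projected gradient descent on $f(x) = \tfrac12\|Ax - b\|_M^2$ with $M = \mathrm{diag}(1/(A1))$, $D = \mathrm{diag}(1/(A^T 1))$, and step size 1. A toy NumPy version (a small dense matrix standing in for the CIL operators):

```python
import numpy as np

# Consistent toy system so the iteration has a unique nonnegative solution.
A = np.array([[1.0, 1.0],
              [1.0, 2.0],
              [2.0, 1.0]])
x_true = np.array([1.0, 2.0])
b = A @ x_true

M = 1.0 / (A @ np.ones(A.shape[1]))    # diag(1 / row sums) = 1 / (A 1)
D = 1.0 / (A.T @ np.ones(A.shape[0]))  # diag(1 / column sums) = 1 / (A^T 1)

x = np.zeros(2)
for _ in range(500):
    grad = A.T @ (M * (A @ x - b))     # gradient of the weighted LS objective
    x = np.maximum(x - D * grad, 0.0)  # preconditioned step + projection x >= 0
# x converges to x_true = [1, 2]
```

Written out, the update is $x_{k+1} = \Pi_{\geq 0}\big(x_k + D A^T M (b - A x_k)\big)$, which is exactly the classic SIRT iteration with a nonnegativity constraint.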
Example: MLEM for PET using a SIRF function and a CIL algorithm
Note: In MLEM for PET, the SIRF objective class `make_Poisson_log_likelihood(d)` maximizes the negative log-likelihood, so we need to take `step_size = -1` to simulate gradient ascent. For CT (CIL), using the `KullbackLeibler` function, we can keep `step_size = 1` and obtain MLEM for low-count CT applications.
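The MLEM connection can be checked on a toy problem: preconditioned gradient ascent on the Poisson log-likelihood $L(x) = \sum_i \big(b_i \log((Ax)_i) - (Ax)_i\big)$ with $D(x) = x / (A^T 1)$ and unit step recovers the classic multiplicative MLEM update. The sketch below uses plain NumPy, not the SIRF/CIL objects:

```python
import numpy as np

A = np.array([[1.0, 2.0],
              [2.0, 1.0],
              [1.0, 1.0]])
x_true = np.array([3.0, 1.0])
b = A @ x_true                        # noiseless "counts"

sens = A.T @ np.ones(A.shape[0])      # sensitivity image A^T 1
x = np.ones(2)                        # strictly positive initialization
for _ in range(2000):
    grad = A.T @ (b / (A @ x)) - sens  # gradient of the Poisson log-likelihood
    x = x + (x / sens) * grad          # preconditioned ascent, step size +1
    # Algebraically identical to the multiplicative MLEM form:
    # x = x / sens * (A.T @ (b / (A @ x)))
# x converges to the ML solution x_true = [3, 1]
```

Substituting $D(x) = x/(A^T 1)$ into $x + D(x)\,\nabla L(x)$ makes the $-A^T 1$ term cancel, leaving $x \cdot A^T(b/(Ax)) / (A^T 1)$; with a SIRF objective that returns the negated quantity, flipping `step_size` to `-1` gives the same iteration.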