
Fix minor stuff #920

Merged 7 commits on Nov 11, 2021
6 changes: 3 additions & 3 deletions gtsam/inference/EliminateableFactorGraph-inst.h
@@ -36,17 +36,17 @@ namespace gtsam {
       // no Ordering is provided. When removing optional from VariableIndex, create VariableIndex
       // before creating ordering.
       VariableIndex computedVariableIndex(asDerived());
-      return eliminateSequential(function, computedVariableIndex, orderingType);
+      return eliminateSequential(orderingType, function, computedVariableIndex);
Contributor:
Thanks for the PR, Varun.

Why change the order of the arguments here? Doesn't this break backwards compatibility?

Collaborator (Author):

It doesn't, because this is a function call within another function; the public API is unchanged.

Fan and I discovered that the original function was just forwarding to this one, so instead of the indirect call we call it directly and save a frame on the call stack.

Collaborator (Author):

Another way to think of this: we are just calling another version of an overloaded function, with the added benefit of a shorter call stack. That gives a minor efficiency gain and makes stack traces easier to read while debugging.
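The forwarding pattern being described can be sketched in a few lines of Python (hypothetical names for illustration, not the actual gtsam C++ API): a deprecated wrapper that only reorders its arguments and forwards them adds a stack frame on every call, whereas calling the target overload directly does not.

```python
def eliminate_sequential(ordering_type, function, variable_index):
    # Target function: does the actual elimination work (stubbed here).
    return ("eliminated", ordering_type, function, variable_index)

def eliminate_sequential_deprecated(function, variable_index, ordering_type):
    # Deprecated wrapper: merely forwards with reordered arguments.
    # Calling the target directly skips this frame, so stack traces are
    # one level shorter and there is one fewer indirect call.
    return eliminate_sequential(ordering_type, function, variable_index)

# Both spellings give the same result; the direct call is one frame shallower.
assert (eliminate_sequential("COLAMD", "qr", {}) ==
        eliminate_sequential_deprecated("qr", {}, "COLAMD"))
```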

Contributor:

Cool, makes sense. Could you add a link to the old wrapper function and the new function we're calling directly? Which file are they in?

Collaborator (Author):

This is the original one, which is deprecated (though not properly):

boost::shared_ptr<BayesNetType> eliminateSequential(

It is replaced by this one:

boost::shared_ptr<BayesNetType> eliminateSequential(

Contributor:

I see. So now we are using overloaded versions with boost::none defaults for the missing args. I wonder if we should add a TODO or note to fully remove the deprecated one as well.

Collaborator (Author):

Yup, that's issue #889.

Collaborator (Author):

We don't do TODOs here, we make issues haha

       }
       else {
         // Compute an ordering and call this function again. We are guaranteed to have a
         // VariableIndex already here because we computed one if needed in the previous 'if' block.
         if (orderingType == Ordering::METIS) {
           Ordering computedOrdering = Ordering::Metis(asDerived());
-          return eliminateSequential(computedOrdering, function, variableIndex, orderingType);
+          return eliminateSequential(computedOrdering, function, variableIndex);
         } else {
           Ordering computedOrdering = Ordering::Colamd(*variableIndex);
-          return eliminateSequential(computedOrdering, function, variableIndex, orderingType);
+          return eliminateSequential(computedOrdering, function, variableIndex);
         }
       }
     }
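The control flow in this hunk computes an ordering (METIS or COLAMD) and then recurses into the overload that takes an explicit ordering. A loose Python analogue of that dispatch, with invented helper names rather than the real gtsam API:

```python
def build_variable_index():
    # Stand-in for constructing a VariableIndex from the factor graph.
    return {"var": ["factor0", "factor1"]}

def metis_ordering():
    return ["metis-order"]

def colamd_ordering(variable_index):
    return ["colamd-order"]

def eliminate_sequential(ordering=None, ordering_type="COLAMD", variable_index=None):
    if ordering is None:
        if variable_index is None:
            variable_index = build_variable_index()
        # Compute an ordering and call this function again; a VariableIndex
        # is guaranteed to exist here, mirroring the comment in the C++ code.
        if ordering_type == "METIS":
            ordering = metis_ordering()
        else:
            ordering = colamd_ordering(variable_index)
        return eliminate_sequential(ordering, ordering_type, variable_index)
    return ("eliminated with", ordering)

assert eliminate_sequential(ordering_type="METIS") == ("eliminated with", ["metis-order"])
assert eliminate_sequential() == ("eliminated with", ["colamd-order"])
```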
5 changes: 3 additions & 2 deletions gtsam/nonlinear/GncOptimizer.h
@@ -142,8 +142,9 @@ class GTSAM_EXPORT GncOptimizer {
    * provides an extra interface for the user to initialize the weightst
    * */
   void setWeights(const Vector w) {
-    if(w.size() != nfg_.size()){
-      throw std::runtime_error("GncOptimizer::setWeights: the number of specified weights"
+    if (size_t(w.size()) != nfg_.size()) {
+      throw std::runtime_error(
+          "GncOptimizer::setWeights: the number of specified weights"
           " does not match the size of the factor graph.");
     }
     weights_ = w;
11 changes: 5 additions & 6 deletions gtsam/nonlinear/factorTesting.h
@@ -35,8 +35,7 @@ namespace gtsam {
  * zero errors anyway. However, it means that below will only be exact for the correct measurement.
  */
 JacobianFactor linearizeNumerically(const NoiseModelFactor& factor,
-    const Values& values, double delta = 1e-5) {
-
+                                    const Values& values, double delta = 1e-5) {
   // We will fill a vector of key/Jacobians pairs (a map would sort)
   std::vector<std::pair<Key, Matrix> > jacobians;
 
@@ -46,24 +45,24 @@ JacobianFactor linearizeNumerically(const NoiseModelFactor& factor,
 
   // Loop over all variables
   const double one_over_2delta = 1.0 / (2.0 * delta);
-  for(Key key: factor) {
+  for (Key key : factor) {
     // Compute central differences using the values struct.
     VectorValues dX = values.zeroVectors();
     const size_t cols = dX.dim(key);
     Matrix J = Matrix::Zero(rows, cols);
     for (size_t col = 0; col < cols; ++col) {
       Eigen::VectorXd dx = Eigen::VectorXd::Zero(cols);
-      dx[col] = delta;
+      dx(col) = delta;
       dX[key] = dx;
       Values eval_values = values.retract(dX);
       const Eigen::VectorXd left = factor.whitenedError(eval_values);
-      dx[col] = -delta;
+      dx(col) = -delta;
       dX[key] = dx;
       eval_values = values.retract(dX);
       const Eigen::VectorXd right = factor.whitenedError(eval_values);
       J.col(col) = (left - right) * one_over_2delta;
     }
-    jacobians.push_back(std::make_pair(key,J));
+    jacobians.push_back(std::make_pair(key, J));
   }

// Next step...return JacobianFactor
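linearizeNumerically fills each Jacobian column with a central difference, J.col(col) = (f(x + delta) - f(x - delta)) / (2 * delta). The same idea in a minimal NumPy sketch, assuming a plain vector-valued function rather than gtsam's factor/Values machinery:

```python
import numpy as np

def numerical_jacobian(f, x, delta=1e-5):
    """Central-difference Jacobian of f at x, one column per variable."""
    fx = np.asarray(f(x))
    J = np.zeros((fx.size, x.size))
    one_over_2delta = 1.0 / (2.0 * delta)
    for col in range(x.size):
        dx = np.zeros_like(x)
        dx[col] = delta
        left = np.asarray(f(x + dx))    # perturb forward
        right = np.asarray(f(x - dx))   # perturb backward
        J[:, col] = (left - right) * one_over_2delta
    return J

# For f(x) = [x0^2, x0*x1] at x = (1, 2), the exact Jacobian is [[2, 0], [2, 1]];
# central differences are exact for quadratics up to rounding.
f = lambda x: np.array([x[0] ** 2, x[0] * x[1]])
J = numerical_jacobian(f, np.array([1.0, 2.0]))
assert np.allclose(J, [[2.0, 0.0], [2.0, 1.0]], atol=1e-6)
```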
4 changes: 4 additions & 0 deletions gtsam/nonlinear/nonlinear.i
@@ -115,6 +115,10 @@ class Ordering {
   Ordering();
   Ordering(const gtsam::Ordering& other);
 
+  template <FACTOR_GRAPH = {gtsam::NonlinearFactorGraph,
+                            gtsam::GaussianFactorGraph}>
+  static gtsam::Ordering Colamd(const FACTOR_GRAPH& graph);
+
   // Testable
   void print(string s = "", const gtsam::KeyFormatter& keyFormatter =
                  gtsam::DefaultKeyFormatter) const;
110 changes: 60 additions & 50 deletions python/gtsam/examples/SFMExample_bal.py
@@ -7,112 +7,122 @@
 See LICENSE for the license information
 
 Solve a structure-from-motion problem from a "Bundle Adjustment in the Large" file
-Author: Frank Dellaert (Python: Akshay Krishnan, John Lambert)
+Author: Frank Dellaert (Python: Akshay Krishnan, John Lambert, Varun Agrawal)
 """
 
 import argparse
 import logging
 import sys
 
-import matplotlib.pyplot as plt
-import numpy as np
-
 import gtsam
-from gtsam import (
-    GeneralSFMFactorCal3Bundler,
-    PinholeCameraCal3Bundler,
-    PriorFactorPinholeCameraCal3Bundler,
-    readBal,
-    symbol_shorthand
-)
+from gtsam import (GeneralSFMFactorCal3Bundler,
+                   PriorFactorPinholeCameraCal3Bundler, PriorFactorPoint3)
+from gtsam.symbol_shorthand import C, P  # type: ignore
+from gtsam.utils import plot  # type: ignore
+from matplotlib import pyplot as plt
 
+logging.basicConfig(stream=sys.stdout, level=logging.INFO)
+
+DEFAULT_BAL_DATASET = "dubrovnik-3-7-pre"
+
+def plot_scene(scene_data: gtsam.SfmData, result: gtsam.Values) -> None:
+    """Plot the SFM results."""
+    plot_vals = gtsam.Values()
+    for cam_idx in range(scene_data.number_cameras()):
+        plot_vals.insert(C(cam_idx),
+                         result.atPinholeCameraCal3Bundler(C(cam_idx)).pose())
+    for j in range(scene_data.number_tracks()):
+        plot_vals.insert(P(j), result.atPoint3(P(j)))
 
-C = symbol_shorthand.C
-P = symbol_shorthand.P
+    plot.plot_3d_points(0, plot_vals, linespec="g.")
+    plot.plot_trajectory(0, plot_vals, title="SFM results")
+
+    plt.show()
 
-logging.basicConfig(stream=sys.stdout, level=logging.DEBUG)
 
-def run(args):
+def run(args: argparse.Namespace) -> None:
""" Run LM optimization with BAL input data and report resulting error """
input_file = gtsam.findExampleDataFile(args.input_file)
input_file = args.input_file

# Load the SfM data from file
scene_data = readBal(input_file)
logging.info(f"read {scene_data.number_tracks()} tracks on {scene_data.number_cameras()} cameras\n")
scene_data = gtsam.readBal(input_file)
logging.info("read %d tracks on %d cameras\n", scene_data.number_tracks(),
scene_data.number_cameras())

# Create a factor graph
graph = gtsam.NonlinearFactorGraph()

# We share *one* noiseModel between all projection factors
noise = gtsam.noiseModel.Isotropic.Sigma(2, 1.0) # one pixel in u and v
noise = gtsam.noiseModel.Isotropic.Sigma(2, 1.0) # one pixel in u and v

# Add measurements to the factor graph
j = 0
for t_idx in range(scene_data.number_tracks()):
track = scene_data.track(t_idx) # SfmTrack
for j in range(scene_data.number_tracks()):
track = scene_data.track(j) # SfmTrack
# retrieve the SfmMeasurement objects
for m_idx in range(track.number_measurements()):
# i represents the camera index, and uv is the 2d measurement
i, uv = track.measurement(m_idx)
# note use of shorthand symbols C and P
graph.add(GeneralSFMFactorCal3Bundler(uv, noise, C(i), P(j)))
j += 1

     # Add a prior on pose x1. This indirectly specifies where the origin is.
     graph.push_back(
-        gtsam.PriorFactorPinholeCameraCal3Bundler(
-            C(0), scene_data.camera(0), gtsam.noiseModel.Isotropic.Sigma(9, 0.1)
-        )
-    )
+        PriorFactorPinholeCameraCal3Bundler(
+            C(0), scene_data.camera(0),
+            gtsam.noiseModel.Isotropic.Sigma(9, 0.1)))
     # Also add a prior on the position of the first landmark to fix the scale
     graph.push_back(
-        gtsam.PriorFactorPoint3(
-            P(0), scene_data.track(0).point3(), gtsam.noiseModel.Isotropic.Sigma(3, 0.1)
-        )
-    )
+        PriorFactorPoint3(P(0),
+                          scene_data.track(0).point3(),
+                          gtsam.noiseModel.Isotropic.Sigma(3, 0.1)))

     # Create initial estimate
     initial = gtsam.Values()
 
-    i = 0
     # add each PinholeCameraCal3Bundler
     for cam_idx in range(scene_data.number_cameras()):
         camera = scene_data.camera(cam_idx)
-        initial.insert(C(i), camera)
-        i += 1
+        initial.insert(C(cam_idx), camera)
 
-    j = 0
     # add each SfmTrack
-    for t_idx in range(scene_data.number_tracks()):
-        track = scene_data.track(t_idx)
+    for j in range(scene_data.number_tracks()):
+        track = scene_data.track(j)
         initial.insert(P(j), track.point3())
-        j += 1

     # Optimize the graph and print results
     try:
         params = gtsam.LevenbergMarquardtParams()
         params.setVerbosityLM("ERROR")
         lm = gtsam.LevenbergMarquardtOptimizer(graph, initial, params)
         result = lm.optimize()
-    except Exception as e:
+    except RuntimeError:
         logging.exception("LM Optimization failed")
         return
 
     # Error drops from ~2764.22 to ~0.046
-    logging.info(f"final error: {graph.error(result)}")
+    logging.info("initial error: %f", graph.error(initial))
+    logging.info("final error: %f", graph.error(result))
 
+    plot_scene(scene_data, result)
 
-if __name__ == "__main__":
 
+def main() -> None:
+    """Main runner."""
     parser = argparse.ArgumentParser()
-    parser.add_argument(
-        '-i',
-        '--input_file',
-        type=str,
-        default="dubrovnik-3-7-pre",
-        help='Read SFM data from the specified BAL file'
-        'The data format is described here: https://grail.cs.washington.edu/projects/bal/.'
-        'BAL files contain (nrPoses, nrPoints, nrObservations), followed by (i,j,u,v) tuples, '
-        'then (wx,wy,wz,tx,ty,tz,f,k1,k1) as Bundler camera calibrations w/ Rodrigues vector'
-        'and (x,y,z) 3d point initializations.'
-    )
+    parser.add_argument('-i',
+                        '--input_file',
+                        type=str,
+                        default=gtsam.findExampleDataFile(DEFAULT_BAL_DATASET),
+                        help="""Read SFM data from the specified BAL file.
+The data format is described here: https://grail.cs.washington.edu/projects/bal/.
+BAL files contain (nrPoses, nrPoints, nrObservations), followed by (i,j,u,v) tuples,
+then (wx,wy,wz,tx,ty,tz,f,k1,k1) as Bundler camera calibrations w/ Rodrigues vector
+and (x,y,z) 3d point initializations.""")
     run(parser.parse_args())
 
 
+if __name__ == "__main__":
+    main()
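The example keys cameras and points with the shorthand symbols C(i) and P(j) so both kinds of variables can live in one Values container without colliding. A gtsam Symbol packs a character tag and an index into a single 64-bit key; a rough Python sketch of that encoding (the exact bit layout here is an assumption for illustration):

```python
def symbol(c: str, j: int) -> int:
    """Pack a one-character tag and an index into a single integer key."""
    return (ord(c) << 56) | j

def symbol_chr(key: int) -> str:
    # Recover the tag character from the high byte.
    return chr(key >> 56)

def symbol_index(key: int) -> int:
    # Recover the index from the low 56 bits.
    return key & ((1 << 56) - 1)

C = lambda i: symbol("C", i)  # camera keys, as in the example
P = lambda j: symbol("P", j)  # point (track) keys

# Distinct tags keep camera 0 and point 0 from colliding in one container.
assert C(0) != P(0)
assert symbol_chr(C(7)) == "C" and symbol_index(C(7)) == 7
```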