updated notes and ideas on ambiguity after adding noise to vision sim, gonna fix unit test tmrw
PGgit08 committed Feb 17, 2025
1 parent 1664802 commit b08e175
Showing 3 changed files with 25 additions and 10 deletions.
2 changes: 1 addition & 1 deletion notes.md
@@ -21,4 +21,4 @@ To actually find intrinsics, camera calibration is necessary. A checkerboard pat
Noise is inaccurate measurement that occurs intermittently in a sensor or signal. When detecting corners, there may be some noise in the camera sensor, and the detected image coordinates of a corner can differ from what is expected given the camera's current extrinsics and calibrated intrinsics.
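This kind of corner noise can be sketched as a zero-mean Gaussian perturbation of a detected corner's image coordinates (a hypothetical helper, not part of this project; the pixel standard deviation is an assumption):

```java
import java.util.Random;

// Hypothetical sketch: perturb a detected corner's image coordinates with
// zero-mean Gaussian pixel noise, the way a noisy sensor would.
public class CornerNoiseSketch {
  /** Returns {u, v} shifted by Gaussian noise with the given std dev (pixels). */
  public static double[] addNoise(double u, double v, double stdDevPx, Random rng) {
    return new double[] {
      u + rng.nextGaussian() * stdDevPx,
      v + rng.nextGaussian() * stdDevPx
    };
  }
}
```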

### Ambiguity
When detecting pose using an AprilTag (4 corners), solvePnP often returns two pose solutions, each with its own re-projection error. If the re-projection errors are very close or equal, it is hard to tell which pose solution is correct, making the problem ambiguous. In this case, the correct pose can be chosen by comparing the two against a different robot sensor, for example the gyro, and determining which one agrees. In terms of filtering, measurements with high ambiguity are typically correlated with high noise (a tag seen at a large distance causes both ambiguity and corner noise), so those get rejected. At lower ambiguities, the pose with the better re-projection error is not necessarily the correct one: corner noise may produce one inaccurate low-re-projection-error pose alongside a more accurate higher-re-projection-error pose, so for lower ambiguities the correct pose should also be determined using the gyro.
Often when detecting a single tag, solvePnP may come up with two candidate pose solutions for that single tag frame, each with its own reprojection error. Ambiguity occurs when both solutions have very similar reprojection errors: both poses fit the image corners equally well, and it is unclear which of the two candidates is the "correct" one. When ambiguity is low, the two reprojection errors are very different and the two poses differ strongly in both translation and heading, so comparing both poses to the gyro heading (which is always accurate) can find the correct pose (a lower reprojection error does not necessarily mean the correct pose; it only means the projected tag corners are closer to the detected image corners). When ambiguity is high, the two poses are very similar, so their heading distances from the gyro may also be very similar, and the wrong pose (with an incorrect translation) may still be chosen after gyro disambiguation; therefore it's good to ignore higher-ambiguity measurements. Higher noise can also increase ambiguity, since the reprojection error of the lower-error estimate grows and moves closer to that of the higher-error estimate.
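The gyro disambiguation described above can be sketched as follows (illustrative names, not this project's actual `VisionPoseEstimator` API; it compares only headings, under the assumption that the gyro heading is trusted):

```java
// Illustrative sketch of gyro-based disambiguation between the two
// solvePnP candidate poses. Names are hypothetical, not this project's API.
public class TagDisambiguationSketch {
  /** Wraps an angle in radians into (-pi, pi]. */
  static double wrap(double angle) {
    double a = angle % (2.0 * Math.PI);
    if (a > Math.PI) a -= 2.0 * Math.PI;
    if (a <= -Math.PI) a += 2.0 * Math.PI;
    return a;
  }

  /**
   * Picks the candidate heading closer to the gyro heading (all radians),
   * rather than the candidate with the lower reprojection error.
   */
  public static double pickHeading(double headingA, double headingB, double gyroHeading) {
    double errA = Math.abs(wrap(headingA - gyroHeading));
    double errB = Math.abs(wrap(headingB - gyroHeading));
    return errA <= errB ? headingA : headingB;
  }
}
```

Note that at high ambiguity the two candidate headings are nearly identical, so this comparison stops being informative, which is exactly the argument for rejecting high-ambiguity measurements.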
2 changes: 2 additions & 0 deletions src/main/java/frc/robot/utils/VisionPoseEstimator.java
@@ -223,6 +223,8 @@ public VisionPoseEstimator(
if (Robot.isSimulation()) {
var cameraProps = new SimCameraProperties();

cameraProps.setCalibError(0.01, 0.001);

_cameraSim = new PhotonCameraSim(_camera, cameraProps, fieldLayout);
} else {
_cameraSim = null;
31 changes: 22 additions & 9 deletions src/test/java/frc/robot/VisionPoseEstimatorTest.java
@@ -10,6 +10,7 @@
import edu.wpi.first.math.geometry.Rotation2d;
import edu.wpi.first.math.geometry.Rotation3d;
import edu.wpi.first.math.geometry.Transform3d;
import edu.wpi.first.math.geometry.Translation2d;
import edu.wpi.first.math.geometry.Translation3d;
import edu.wpi.first.math.util.Units;
import frc.robot.utils.VisionPoseEstimator;
@@ -74,7 +75,9 @@ public void setup() {
3,
7);

_testCam = VisionPoseEstimator.buildFromConstants(testCam, getNtInst(), _fieldLayout);
_testCam =
VisionPoseEstimator.buildFromConstants(
testCam, getNtInst(), _fieldLayout, this::dummyGyroHeading);

_visionSystemSim = new VisionSystemSim("");
_visionSystemSim.addCamera(_testCam.getCameraSim(), _testCam.robotToCam);
@@ -89,7 +92,7 @@ public void close() {
public void noResults() {
_visionSystemSim.update(Pose2d.kZero);

_testCam.update(this::dummyGyroHeading);
_testCam.update();

// no targets, no new estimates
assertEquals(0, _testCam.getNewEstimates().size());
@@ -102,7 +105,7 @@ public void singleTagResult() {

_visionSystemSim.update(Pose2d.kZero); // should see the single target

_testCam.update(this::dummyGyroHeading);
_testCam.update();

assertEquals(1, _testCam.getNewEstimates().size());
}
@@ -115,7 +118,7 @@ public void singleTagResults() {
_visionSystemSim.update(Pose2d.kZero);
_visionSystemSim.update(Pose2d.kZero); // should see two new results of the single target

_testCam.update(this::dummyGyroHeading);
_testCam.update();

assertEquals(2, _testCam.getNewEstimates().size());
}
@@ -127,7 +130,7 @@ public void singleTagEstimate() {

_visionSystemSim.update(Pose2d.kZero);

_testCam.update(this::dummyGyroHeading);
_testCam.update();

var estimates = _testCam.getNewEstimates(); // 1 estimate
assertEquals(1, estimates.size());
@@ -167,7 +170,7 @@ public void multiTagEstimate() {
_visionSystemSim.addAprilTags(_fieldLayout);
_visionSystemSim.update(Pose2d.kZero);

_testCam.update(this::dummyGyroHeading);
_testCam.update();

var estimates = _testCam.getNewEstimates(); // 1 estimate
assertEquals(1, estimates.size());
@@ -220,7 +223,7 @@ public void boundsFilter() {
// float the robot above the ground by 0.3 meters
_visionSystemSim.update(new Pose3d(0, 0, 0.3, Rotation3d.kZero));

_testCam.update(this::dummyGyroHeading);
_testCam.update();

var estimates = _testCam.getNewEstimates(); // 1 estimate
assertEquals(1, estimates.size());
@@ -266,7 +269,7 @@ public void singleTagDistanceFilter() {

_visionSystemSim.update(Pose2d.kZero);

_testCam.update(this::dummyGyroHeading);
_testCam.update();

var estimates = _testCam.getNewEstimates(); // 1 estimate
assertEquals(1, estimates.size());
@@ -312,14 +315,24 @@ public void estimateSort() {
for (int i = 3; i > 0; i--) {
newEstimates.add(
new VisionPoseEstimate(
Pose3d.kZero, i, 0.03, new int[] {1}, 1.2, new double[] {0.3, 0.1, 0.2}, true));
Pose3d.kZero,
i,
0.03,
Pose3d.kZero,
new Translation2d[] {Translation2d.kZero},
new int[] {1},
1.2,
new double[] {0.3, 0.1, 0.2},
true));
}

newEstimates.add(
new VisionPoseEstimate(
Pose3d.kZero,
2, // same timestamp case
0.03,
Pose3d.kZero,
new Translation2d[] {Translation2d.kZero},
new int[] {1},
1.2,
new double[] {
