Making Naoki's suggested changes and fixing configs #16

Merged · 10 commits · Oct 16, 2023
config/experiments/reality.yaml (3 changes: 2 additions & 1 deletion)
@@ -1,6 +1,7 @@
-# @package _global_
+# Copyright (c) 2023 Boston Dynamics AI Institute LLC. All rights reserved.
 
+# @package _global_
 defaults:
   - /policy: zsos_config_base
   - _self_
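Note on the reality.yaml change: Hydra reads the `# @package _global_` directive from the file's header comment block, and other comments such as a copyright notice may sit above it, so this reordering should not change how the config composes. A minimal smoke test (a sketch, not part of this PR; it assumes Hydra is installed and the script runs from the repo root next to `config/`):

```python
# Sketch: confirm the experiment config still composes after the header edit.
# `config_path` and `config_name` are assumptions based on this repo's layout.
from hydra import compose, initialize

with initialize(version_base=None, config_path="config"):
    cfg = compose(config_name="experiments/reality")
    # `# @package _global_` lifts these keys to the top level of the config.
    print(list(cfg.keys()))
```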
config/experiments/ver_pointnav.yaml (4 changes: 2 additions & 2 deletions)
@@ -1,7 +1,7 @@
+# Copyright (c) 2023 Boston Dynamics AI Institute LLC. All rights reserved.
+
 # @package _global_
 
-# Copyright (c) 2023 Boston Dynamics AI Institute LLC. All rights reserved.
-
 defaults:
   - /tasks: pointnav_depth_hm3d
   - /habitat_baselines: habitat_baselines_rl_config_base
config/experiments/vlfm_objectnav_hm3d.yaml (4 changes: 2 additions & 2 deletions)
@@ -1,7 +1,7 @@
+# Copyright (c) 2023 Boston Dynamics AI Institute LLC. All rights reserved.
+
 # @package _global_
 
-# Copyright (c) 2023 Boston Dynamics AI Institute LLC. All rights reserved.
-
 defaults:
   - /habitat_baselines: habitat_baselines_rl_config_base
   - /benchmark/nav/objectnav: objectnav_hm3d
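The ver_pointnav.yaml and vlfm_objectnav_hm3d.yaml edits are the same reordering: the copyright notice moves above `# @package _global_`, so every experiment config now opens with the same header. A hypothetical lint-style check for that convention (file paths and header text assumed from the diffs above, not from the PR itself):

```python
# Sketch: verify each experiment config starts with the copyright header.
from pathlib import Path

HEADER = "# Copyright (c) 2023 Boston Dynamics AI Institute LLC."

for yaml_path in sorted(Path("config/experiments").glob("*.yaml")):
    first_line = yaml_path.read_text().splitlines()[0]
    assert first_line.startswith(HEADER), f"{yaml_path}: header not on line 1"
```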
vlfm/policy/base_objectnav_policy.py (3 changes: 2 additions & 1 deletion)
@@ -350,7 +350,8 @@ def _update_object_map(
         # If we are using vqa, then use the BLIP2 model to visually confirm whether
         # the contours are actually correct.
-        if (self._use_vqa is not None) and self._use_vqa:
+
+        if self._use_vqa:
             contours, _ = cv2.findContours(
                 object_mask, cv2.RETR_TREE, cv2.CHAIN_APPROX_SIMPLE
             )
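This simplification drops a redundant guard: assuming `self._use_vqa` is a boolean flag (as its use as an `if` condition suggests), `if self._use_vqa:` branches exactly as the longer form did, because `None` is already falsy. A small illustrative check (not from the repo):

```python
# Sketch: the two conditions branch identically for every flag value.
for flag in (True, False, None):
    verbose = (flag is not None) and flag  # old condition
    simple = bool(flag)                    # truthiness used by the new condition
    assert bool(verbose) == simple
```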
vlfm/vlm/grounding_dino.py (5 changes: 3 additions & 2 deletions)
@@ -39,7 +39,8 @@ def __init__(
         self.box_threshold = box_threshold
         self.text_threshold = text_threshold
 
-    def predict(self, image: np.ndarray, caption: str = "") -> ObjectDetections:
+    def predict(self, image: np.ndarray, caption: Optional[str] = None) -> ObjectDetections:
+
         """
         This function makes predictions on an input image tensor or numpy array using a
         pretrained model.
@@ -58,7 +59,7 @@ def predict(self, image: np.ndarray, caption: str = "") -> ObjectDetections:
         image_transformed = F.normalize(
             image_tensor, mean=[0.485, 0.456, 0.406], std=[0.229, 0.224, 0.225]
         )
-        if caption == "":
+        if caption is None:
             caption_to_use = self.caption
         else:
             caption_to_use = caption
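Switching the default from `""` to `Optional[str] = None` makes the fallback explicit: an empty string is itself a plausible caption value, while `None` unambiguously means "no caption was passed", which the `is None` check then routes to `self.caption`. A simplified sketch of the pattern (hypothetical class, not the repo's actual detector code):

```python
# Sketch of the None-as-sentinel pattern adopted in predict().
from typing import Optional

class Detector:
    def __init__(self, default_caption: str) -> None:
        self.caption = default_caption

    def predict(self, caption: Optional[str] = None) -> str:
        # Fall back to the instance default only when no caption was given.
        return self.caption if caption is None else caption

d = Detector("chair . table .")
assert d.predict() == "chair . table ."  # omitted argument -> default caption
assert d.predict("sofa .") == "sofa ."   # explicit caption wins
```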