
torch.size question in NavPedPreNet() #1

Open
SpikeJishuo opened this issue Jul 2, 2023 · 1 comment
Comments

SpikeJishuo commented Jul 2, 2023

encoded_image = self._encode_image(torch.cat([state[0], state[2]], axis=1))

Could you please tell me what state[2] stands for in the scenario where pedestrians are present?
I got torch.Size([1, 96, 48]) for torch.cat([state[0], state[2]], axis=1) instead of [64, 4, 3, 3].

SnallQiu (Member) commented:


The index of state here is defined in https://github.com/DRL-Navigation/img_env/blob/master/envs/wrapper/filter_states.py

  • state[0]: sensor_maps, shape [batch, 1, 48, 48], the egocentric sensor map.
  • state[2]: ped_maps, shape [batch, 3, 48, 48], an egocentric 3-channel pedestrian map encoding the x/y velocity and position of nearby pedestrians.

For more details on the state, see the README of https://github.com/DRL-Navigation/img_env.
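As a minimal sketch of the shape arithmetic (using placeholder zero tensors, not the real environment output): concatenating the 1-channel sensor map and the 3-channel pedestrian map along the channel dimension yields a [batch, 4, 48, 48] tensor, which is what the image encoder expects as input.

```python
import torch

batch = 1
sensor_map = torch.zeros(batch, 1, 48, 48)  # state[0]: egocentric sensor map
ped_map = torch.zeros(batch, 3, 48, 48)     # state[2]: pedestrian velocity/position channels

# Concatenate along the channel axis (dim=1), as in NavPedPreNet
stacked = torch.cat([sensor_map, ped_map], dim=1)
print(stacked.shape)  # torch.Size([1, 4, 48, 48])
```

If you see a shape like [1, 96, 48] instead, the inputs were likely 3-D (missing the batch or channel dimension), so the concatenation happened along the wrong axis.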
