After I execute the code and CARLA starts, it gives me an error. This is the full log:

```
Traceback (most recent call last):
  File "run_RL.py", line 89, in <module>
    args.host, args.port)
  File "/media/bignrz/World/carla simulator/CARLA_0.8.2/PythonClient/carla/driving_benchmark/driving_benchmark.py", line 294, in run_driving_benchmark
    benchmark_summary = benchmark.benchmark_agent(experiment_suite, agent, client)
  File "/media/bignrz/World/carla simulator/CARLA_0.8.2/PythonClient/carla/driving_benchmark/driving_benchmark.py", line 129, in benchmark_agent
    + '.' + str(end_index))
  File "/media/bignrz/World/carla simulator/CARLA_0.8.2/PythonClient/carla/driving_benchmark/driving_benchmark.py", line 227, in _run_navigation_episode
    control = agent.run_step(measurements, sensor_data, directions, target)
  File "/media/bignrz/World/projects/carla RL envs/reinforcement-learning/agent/runnable_model.py", line 35, in run_step
    action_idx = self.actor.act(obs_preprocessed=obs_preprocessed)
  File "/media/bignrz/World/projects/carla RL envs/reinforcement-learning/agent/asyncrl/a3c.py", line 49, in act
    action = pout.action_indices[0]
  File "/home/bignrz/.local/lib/python3.6/site-packages/cached_property.py", line 35, in __get__
    value = obj.__dict__[self.func.__name__] = self.func(obj)
  File "/media/bignrz/World/projects/carla RL envs/reinforcement-learning/agent/asyncrl/policy_output.py", line 51, in action_indices
    return _sample_discrete_actions(self.probs.data)
  File "/media/bignrz/World/projects/carla RL envs/reinforcement-learning/agent/asyncrl/policy_output.py", line 27, in _sample_discrete_actions
    histogram = np.random.multinomial(1, batch_probs[i])
  File "mtrand.pyx", line 4199, in numpy.random.mtrand.RandomState.multinomial
  File "_common.pyx", line 324, in numpy.random._common.check_array_constraint
ValueError: pvals < 0, pvals > 1 or pvals contains NaNs
```
Any help would be appreciated.
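From the last frames of the traceback, the exception comes from NumPy's constraint check on the `pvals` argument of `np.random.multinomial` inside `_sample_discrete_actions` (policy_output.py, line 27), which suggests the action probabilities produced by the policy contain NaNs. As a minimal diagnostic sketch, a check like the following could be dropped in front of the failing call to see what the probabilities actually look like (`check_pvals` is my own hypothetical helper, not part of the repository):

```python
import numpy as np

def check_pvals(probs):
    """Hypothetical helper: validate probabilities before np.random.multinomial."""
    probs = np.asarray(probs, dtype=np.float64)
    if not np.all(np.isfinite(probs)):
        # This is what triggers "pvals contains NaNs" -- typically NaN logits
        # coming out of the policy network (e.g. after a diverging update).
        raise ValueError("action probabilities contain NaN/inf: %r" % (probs,))
    if np.any(probs < 0) or np.any(probs > 1):
        raise ValueError("action probabilities outside [0, 1]: %r" % (probs,))
    # multinomial is also strict about the probabilities summing to slightly
    # more than 1 due to floating-point error, so renormalise before sampling.
    return probs / probs.sum()

# Usage at the failing call site:
#   histogram = np.random.multinomial(1, check_pvals(batch_probs[i]))
```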