
[Bug]: IsADirectoryError: [Errno 21] Is a directory #2411

Open
Hanifahreza opened this issue Nov 6, 2024 · 1 comment

Comments

@Hanifahreza

Describe the bug

I used the code below to train an EfficientAd model for a binary anomaly detection task.

# Imports assumed by this snippet (anomalib 1.x / Lightning)
import torch
from lightning.pytorch.callbacks import ModelCheckpoint

from anomalib.data import Folder
from anomalib.engine import Engine
from anomalib.models import EfficientAd

# Initialize the datamodule, model and engine
datamodule = Folder(
    name="mf",
    root="/mnt/c/Users/hanif/Documents/renshuu/datasets",
    normal_dir="normals",
    abnormal_dir="anoms",
    task="classification",
    train_batch_size=1,
    eval_batch_size=1,
    seed=42
)

# Setup the datamodule
datamodule.setup()

torch.set_float32_matmul_precision('medium')

model = EfficientAd()
ckpt = ModelCheckpoint(dirpath='models/effad', verbose=True, monitor='image_AUROC', save_top_k=3)
engine = Engine(callbacks=[ckpt], max_epochs=20)

# Train the model
engine.fit(datamodule=datamodule, model=model)
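
(As a small aside, not part of the original report: a quick sanity check of the splits after setup(), assuming the Folder datamodule exposes them as train_data / test_data as in recent anomalib 1.x releases.)

# Optional sanity check (illustrative; assumes anomalib 1.x exposes the splits
# as train_data / test_data after datamodule.setup()):
print(f"train samples: {len(datamodule.train_data)}")
print(f"test samples:  {len(datamodule.test_data)}")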

The training completed without any error, but when I tested it with

engine.test(datamodule=datamodule, model=model)

I got the error below when the testing progress was at 9%.

IsADirectoryError                         Traceback (most recent call last)
Cell In[24], line 22
      1 # engine2 = Engine(
      2 #         image_metrics={
      3 #             "Accuracy": {
   (...)
     20 #         task='classification'
     21 #     )
---> 22 engine.test(datamodule=datamodule, model=model)

File ~/miniconda3/envs/glass_env/lib/python3.11/site-packages/anomalib/engine/engine.py:688, in Engine.test(self, model, dataloaders, ckpt_path, verbose, datamodule)
    686     logger.info("Running validation before testing to collect normalization metrics and/or thresholds.")
    687     self.trainer.validate(model, dataloaders, None, verbose=False, datamodule=datamodule)
--> 688 return self.trainer.test(model, dataloaders, ckpt_path, verbose, datamodule)

File ~/miniconda3/envs/glass_env/lib/python3.11/site-packages/lightning/pytorch/trainer/trainer.py:742, in Trainer.test(self, model, dataloaders, ckpt_path, verbose, datamodule)
    740     self.strategy._lightning_module = model
    741 _verify_strategy_supports_compile(self.lightning_module, self.strategy)
--> 742 return call._call_and_handle_interrupt(
    743     self, self._test_impl, model, dataloaders, ckpt_path, verbose, datamodule
    744 )

File ~/miniconda3/envs/glass_env/lib/python3.11/site-packages/lightning/pytorch/trainer/call.py:43, in _call_and_handle_interrupt(trainer, trainer_fn, *args, **kwargs)
     41     if trainer.strategy.launcher is not None:
...
  File "/home/hanif/miniconda3/envs/glass_env/lib/python3.11/site-packages/PIL/Image.py", line 3431, in open
    fp = builtins.open(filename, "rb")
         ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
IsADirectoryError: [Errno 21] Is a directory: '/mnt/c/Users/hanif/Documents/renshuu'

/mnt/c/Users/hanif/Documents/renshuu is the root directory of the ipynb file I'm working in. From what I understand, this error happened because I didn't specify any folder for the segmentation ground-truth masks. But I don't understand why this is an issue, since I set the task to classification, so masks shouldn't be needed at all.
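
(One hedged way to confirm this diagnosis, not from the original report: inspect the sample table that the Folder dataset builds. In anomalib 1.x the test split exposes a pandas DataFrame with a mask_path column, so you can check what it was filled with for the abnormal images.)

# Hypothetical diagnostic sketch: look at what mask_path was populated with.
# Assumes datamodule.test_data.samples is a pandas DataFrame (anomalib 1.x).
samples = datamodule.test_data.samples
print(samples.columns.tolist())
if "mask_path" in samples.columns:
    # label_index == 1 marks abnormal samples in anomalib's sample tables
    print(samples.loc[samples["label_index"] == 1, "mask_path"].head())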

Dataset

Other (please specify in the text field below)

Model

Other (please specify in the field below)

Steps to reproduce the behavior

As explained in the description.

OS information

OS information:

  • OS: Windows 11 with WSL Ubuntu 24.04.1 LTS
  • Python version: 3.11.10
  • Anomalib version: 1.1.1
  • PyTorch version: 2.5.1
  • CUDA/cuDNN version: 12.6
  • GPU models and configuration: GeForce RTX 3080
  • Dataset: Custom
  • Model: EfficientAD

Expected behavior

The test to work.

Screenshots

No response

Pip/GitHub

pip

What version/branch did you use?

No response

Configuration YAML

None

Logs

---------------------------------------------------------------------------
IsADirectoryError                         Traceback (most recent call last)
Cell In[24], line 22
      1 # engine2 = Engine(
      2 #         image_metrics={
      3 #             "Accuracy": {
   (...)
     20 #         task='classification'
     21 #     )
---> 22 engine.test(datamodule=datamodule, model=model)

File ~/miniconda3/envs/glass_env/lib/python3.11/site-packages/anomalib/engine/engine.py:688, in Engine.test(self, model, dataloaders, ckpt_path, verbose, datamodule)
    686     logger.info("Running validation before testing to collect normalization metrics and/or thresholds.")
    687     self.trainer.validate(model, dataloaders, None, verbose=False, datamodule=datamodule)
--> 688 return self.trainer.test(model, dataloaders, ckpt_path, verbose, datamodule)

File ~/miniconda3/envs/glass_env/lib/python3.11/site-packages/lightning/pytorch/trainer/trainer.py:742, in Trainer.test(self, model, dataloaders, ckpt_path, verbose, datamodule)
    740     self.strategy._lightning_module = model
    741 _verify_strategy_supports_compile(self.lightning_module, self.strategy)
--> 742 return call._call_and_handle_interrupt(
    743     self, self._test_impl, model, dataloaders, ckpt_path, verbose, datamodule
    744 )

File ~/miniconda3/envs/glass_env/lib/python3.11/site-packages/lightning/pytorch/trainer/call.py:43, in _call_and_handle_interrupt(trainer, trainer_fn, *args, **kwargs)
     41     if trainer.strategy.launcher is not None:
     42         return trainer.strategy.launcher.launch(trainer_fn, *args, trainer=trainer, **kwargs)
---> 43     return trainer_fn(*args, **kwargs)
     45 except _TunerExitException:
     46     _call_teardown_hook(trainer)

File ~/miniconda3/envs/glass_env/lib/python3.11/site-packages/lightning/pytorch/trainer/trainer.py:785, in Trainer._test_impl(self, model, dataloaders, ckpt_path, verbose, datamodule)
    780 self._data_connector.attach_data(model, test_dataloaders=dataloaders, datamodule=datamodule)
    782 ckpt_path = self._checkpoint_connector._select_ckpt_path(
    783     self.state.fn, ckpt_path, model_provided=model_provided, model_connected=self.lightning_module is not None
    784 )
--> 785 results = self._run(model, ckpt_path=ckpt_path)
    786 # remove the tensors from the test results
    787 results = convert_tensors_to_scalars(results)

File ~/miniconda3/envs/glass_env/lib/python3.11/site-packages/lightning/pytorch/trainer/trainer.py:980, in Trainer._run(self, model, ckpt_path)
    975 self._signal_connector.register_signal_handlers()
    977 # ----------------------------
    978 # RUN THE TRAINER
    979 # ----------------------------
--> 980 results = self._run_stage()
    982 # ----------------------------
    983 # POST-Training CLEAN UP
    984 # ----------------------------
    985 log.debug(f"{self.__class__.__name__}: trainer tearing down")

File ~/miniconda3/envs/glass_env/lib/python3.11/site-packages/lightning/pytorch/trainer/trainer.py:1016, in Trainer._run_stage(self)
   1013 self.strategy.barrier("run-stage")
   1015 if self.evaluating:
-> 1016     return self._evaluation_loop.run()
   1017 if self.predicting:
   1018     return self.predict_loop.run()

File ~/miniconda3/envs/glass_env/lib/python3.11/site-packages/lightning/pytorch/loops/utilities.py:181, in _no_grad_context.<locals>._decorator(self, *args, **kwargs)
    179     context_manager = torch.no_grad
    180 with context_manager():
--> 181     return loop_run(self, *args, **kwargs)

File ~/miniconda3/envs/glass_env/lib/python3.11/site-packages/lightning/pytorch/loops/evaluation_loop.py:108, in _EvaluationLoop.run(self)
    106 while True:
    107     try:
--> 108         batch, batch_idx, dataloader_idx = next(data_fetcher)
    109         self.batch_progress.is_last_batch = data_fetcher.done
    110         if previous_dataloader_idx != dataloader_idx:
    111             # the dataloader has changed, notify the logger connector

File ~/miniconda3/envs/glass_env/lib/python3.11/site-packages/lightning/pytorch/loops/fetchers.py:126, in _PrefetchDataFetcher.__next__(self)
    123         self.done = not self.batches
    124 elif not self.done:
    125     # this will run only when no pre-fetching was done.
--> 126     batch = super().__next__()
    127 else:
    128     # the iterator is empty
    129     raise StopIteration

File ~/miniconda3/envs/glass_env/lib/python3.11/site-packages/lightning/pytorch/loops/fetchers.py:58, in _DataFetcher.__next__(self)
     56 self._start_profiler()
     57 try:
---> 58     batch = next(self.iterator)
     59 except StopIteration:
     60     self.done = True

File ~/miniconda3/envs/glass_env/lib/python3.11/site-packages/lightning/pytorch/utilities/combined_loader.py:285, in CombinedLoader.__next__(self)
    283 def __next__(self) -> Any:
    284     assert self._iterator is not None
--> 285     out = next(self._iterator)
    286     if isinstance(self._iterator, _Sequential):
    287         return out

File ~/miniconda3/envs/glass_env/lib/python3.11/site-packages/lightning/pytorch/utilities/combined_loader.py:123, in _Sequential.__next__(self)
    120             raise StopIteration
    122 try:
--> 123     out = next(self.iterators[0])
    124     index = self._idx
    125     self._idx += 1

File ~/miniconda3/envs/glass_env/lib/python3.11/site-packages/torch/utils/data/dataloader.py:701, in _BaseDataLoaderIter.__next__(self)
    698 if self._sampler_iter is None:
    699     # TODO(https://github.com/pytorch/pytorch/issues/76750)
    700     self._reset()  # type: ignore[call-arg]
--> 701 data = self._next_data()
    702 self._num_yielded += 1
    703 if (
    704     self._dataset_kind == _DatasetKind.Iterable
    705     and self._IterableDataset_len_called is not None
    706     and self._num_yielded > self._IterableDataset_len_called
    707 ):

File ~/miniconda3/envs/glass_env/lib/python3.11/site-packages/torch/utils/data/dataloader.py:1465, in _MultiProcessingDataLoaderIter._next_data(self)
   1463 else:
   1464     del self._task_info[idx]
-> 1465     return self._process_data(data)

File ~/miniconda3/envs/glass_env/lib/python3.11/site-packages/torch/utils/data/dataloader.py:1491, in _MultiProcessingDataLoaderIter._process_data(self, data)
   1489 self._try_put_index()
   1490 if isinstance(data, ExceptionWrapper):
-> 1491     data.reraise()
   1492 return data

File ~/miniconda3/envs/glass_env/lib/python3.11/site-packages/torch/_utils.py:715, in ExceptionWrapper.reraise(self)
    711 except TypeError:
    712     # If the exception takes multiple arguments, don't try to
    713     # instantiate since we don't know how to
    714     raise RuntimeError(msg) from None
--> 715 raise exception

IsADirectoryError: Caught IsADirectoryError in DataLoader worker process 0.
Original Traceback (most recent call last):
  File "/home/hanif/miniconda3/envs/glass_env/lib/python3.11/site-packages/torch/utils/data/_utils/worker.py", line 351, in _worker_loop
    data = fetcher.fetch(index)  # type: ignore[possibly-undefined]
           ^^^^^^^^^^^^^^^^^^^^
  File "/home/hanif/miniconda3/envs/glass_env/lib/python3.11/site-packages/torch/utils/data/_utils/fetch.py", line 52, in fetch
    data = [self.dataset[idx] for idx in possibly_batched_index]
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/home/hanif/miniconda3/envs/glass_env/lib/python3.11/site-packages/torch/utils/data/_utils/fetch.py", line 52, in <listcomp>
    data = [self.dataset[idx] for idx in possibly_batched_index]
            ~~~~~~~~~~~~^^^^^
  File "/home/hanif/miniconda3/envs/glass_env/lib/python3.11/site-packages/anomalib/data/base/dataset.py", line 180, in __getitem__
    else read_mask(mask_path, as_tensor=True)
         ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/home/hanif/miniconda3/envs/glass_env/lib/python3.11/site-packages/anomalib/data/utils/image.py", line 373, in read_mask
    image = Image.open(path).convert("L")
            ^^^^^^^^^^^^^^^^
  File "/home/hanif/miniconda3/envs/glass_env/lib/python3.11/site-packages/PIL/Image.py", line 3431, in open
    fp = builtins.open(filename, "rb")
         ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
IsADirectoryError: [Errno 21] Is a directory: '/mnt/c/Users/hanif/Documents/renshuu'
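
(The last frame can be reproduced in isolation: PIL fails when open() is handed a directory instead of an image file, which is what read_mask() ends up doing here. Illustrative sketch, not part of the original logs.)

from PIL import Image

try:
    # Passing a directory (here the reporter's notebook root) instead of a mask file
    Image.open("/mnt/c/Users/hanif/Documents/renshuu")
except IsADirectoryError as err:
    print(err)  # [Errno 21] Is a directory: '/mnt/c/Users/hanif/Documents/renshuu'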

Code of Conduct

  • I agree to follow this project's Code of Conduct
@samet-akcay
Contributor

When you call engine.test(...), it saves the output images to the file system. Can you check whether you have write access to that path? If not, it might explain the issue.
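
(A quick, hypothetical way to verify the write-access suggestion; the actual results directory depends on the Engine configuration, and "results" below is only a placeholder.)

import os
from pathlib import Path

results_dir = Path("results")  # placeholder; use the path your Engine writes to
results_dir.mkdir(parents=True, exist_ok=True)
print(results_dir.resolve(), "writable:", os.access(results_dir, os.W_OK))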
