docs: Clarify meaning of named requirements
#2964
Comments
Can you help with it? These are named requirements:
While these are not:
Named dependencies are defined in contrast to those specified via direct URLs. Feel free to pick any proper name.
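To make the distinction concrete, here is a minimal sketch (illustrative only, not the thread's original examples; the package name and wheel URL below are made up): a named requirement is a plain PEP 508 specifier resolved by name against a package index, while a direct reference embeds a URL.

```toml
[project]
name = "example"
dependencies = [
    # Named requirements: resolved by package name against an index
    "torch",
    "requests>=2.31",
    # A direct URL reference -- NOT a named requirement
    # (hypothetical package and wheel URL, for illustration only)
    "mypkg @ https://example.com/wheels/mypkg-1.0-py3-none-any.whl",
]
```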
Help in what way? I am just trying to get familiar with PDM and competitors (I have minimal Python experience). I think the example you gave is more than enough; just add that within the docs admonition. If you feel that it's a bit too verbose, collapse it by default with

Your response did not address the query regarding PyPI being mentioned in the admonition. From my reproduction below, it doesn't seem to be restricted to PyPI?

The final query regarding implicit packages is still vague (e.g., if I remove the
Reproduction

NOTE: If you'd like to reproduce this in the same environment I was using and you're familiar with Docker.
[project]
name = "example"
dependencies = [
"torch", # Implicitly resolves to `2.3.1+cu121` via configured PyTorch source below
"torchvision",
"torchaudio",
# Implicit packages that should be cached instead of downloaded?
# The PyTorch source needs to include these via `nvidia-*`, otherwise different versions from PyPi are resolved
"nvidia-cublas-cu12",
"nvidia-cuda-cupti-cu12",
"nvidia-cuda-nvrtc-cu12",
"nvidia-cuda-runtime-cu12",
"nvidia-cudnn-cu12",
"nvidia-cufft-cu12",
"nvidia-curand-cu12",
"nvidia-cusolver-cu12",
"nvidia-cusparse-cu12",
"nvidia-nccl-cu12",
"nvidia-nvjitlink-cu12",
"nvidia-nvtx-cu12"
]
requires-python = ">=3.10"
[tool.pdm.resolution]
respect-source-order = true
[tool.pdm]
distribution = false
[[tool.pdm.source]]
name = "pytorch"
url = "https://download.pytorch.org/whl/cu121"
include_packages = ["torch", "torchvision", "torchaudio", "nvidia-*"]

# Install PDM if necessary:
curl -sSL https://pdm-project.org/install-pdm.py | python3 -
# PDM cache setup:
pdm cache clear
pdm config install.cache true
pdm config install.cache_method hardlink
# Prepare for minimized/optimized lockfile for `Dockerfile` build:
# NOTE: As a reference to my other timings for install, this takes approx 2 minutes to complete
pdm lock -S no_cross_platform,static_urls
# Install and cache
time pdm install --frozen-lockfile
# Clear the `.venv` to install again, this time with cache:
rm -rf .venv
pdm install --frozen-lockfile

$ pdm install --frozen-lockfile
# 1st vs 2nd (cached) install times:
real 1m41.960s
real 1m15.564s
# Cache size:
$ pdm cache info
Cache Root: /root/.cache/pdm, Total size: 4730.6 MB
File Hash Cache: /root/.cache/pdm/hashes
Files: 36, Size: 2.6 kB
HTTP Cache: /root/.cache/pdm/http
Files: 32, Size: 32.8 MB
Wheels Cache: /root/.cache/pdm/wheels
Files: 0, Size: 0 bytes
Metadata Cache: /root/.cache/pdm/metadata
Files: 1, Size: 3.6 kB
Package Cache: /root/.cache/pdm/packages
Packages: 25, Size: 4697.8 MB

If not using the

$ pdm install --frozen-lockfile
# 1st vs 2nd (cached) install times:
real 1m14.046s
real 0m47.405s
# Cache size:
$ pdm cache info
Cache Root: /root/.cache/pdm, Total size: 6662.2 MB
File Hash Cache: /root/.cache/pdm/hashes
Files: 48, Size: 3.4 kB
HTTP Cache: /root/.cache/pdm/http
Files: 68, Size: 1909.7 MB
Wheels Cache: /root/.cache/pdm/wheels
Files: 0, Size: 0 bytes
Metadata Cache: /root/.cache/pdm/metadata
Files: 1, Size: 4.7 kB
Package Cache: /root/.cache/pdm/packages
Packages: 26, Size: 4752.4 MB

This is in contrast to

# Clean slate, empty directory:
$ uv cache clean
# 1st time install:
$ uv venv
$ time uv pip install torch==2.3.1+cu121 torchvision torchaudio --index-url https://download.pytorch.org/whl/cu121
real 1m14.939s
# Similar cache size:
$ du -sx --bytes --si "$(uv cache dir)"
4.7G /root/.cache/uv
# 2nd install (cached):
$ rm -rf .venv
$ uv venv
$ time uv pip install torch==2.3.1+cu121 torchvision torchaudio --index-url https://download.pytorch.org/whl/cu121
real 0m3.518s

Future integration with
Help with the docs
PyPI doesn't mean exactly pypi.org; it is a synonym for all package sources. Again, it is opposed to dependencies with direct URLs.
There is no network transfer; the cache layer is built inside the HTTP session, so we still display

In contrast to

while
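As a minimal sketch of the behavior described above (this is not PDM's actual implementation; every name here is made up for illustration): when the cache sits inside the HTTP session, a cache hit and a real download flow through the same reporting path, so the UI prints "Downloading" either way even though no bytes cross the network on the second request.

```python
# Sketch of a cache layer built INSIDE the HTTP session (hypothetical, not
# PDM's code): the progress display sits above the cache, so cache hits and
# real transfers share the same "Downloading" code path.
class CachingSession:
    def __init__(self):
        self._cache = {}        # url -> bytes; stands in for the on-disk HTTP cache
        self.network_calls = 0  # counts real network transfers only

    def _network_get(self, url):
        # Pretend network fetch; only this path counts as a transfer.
        self.network_calls += 1
        return b"wheel-bytes-for-" + url.encode()

    def get(self, url, report):
        if url in self._cache:
            body = self._cache[url]        # served locally, no transfer
        else:
            body = self._network_get(url)  # real transfer on first request
            self._cache[url] = body
        # Both paths report identically: the UI cannot tell a hit from a miss.
        report(f"Downloading {url} 100%")
        return body

messages = []
session = CachingSession()
url = "https://example.invalid/torch.whl"
session.get(url, messages.append)  # first request: real transfer
session.get(url, messages.append)  # second request: cache hit, same message
```

So a second run printing `Downloading xx%` does not by itself prove bytes crossed the network; it only shows which display path the client uses.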
Sure, I can contribute your feedback here to the docs admonition 👍
I'm assuming PDM provides no way to get that equivalent? Is the lockfile or cache lacking sufficient information to know that it should be able to use the cache?
I am familiar with these as package indexes/sources, and PDM has an

If it's not incorrect to refer to them more agnostically with "resolved from a package index (e.g. PyPI)" instead of "resolved from PyPI", that would communicate the intent more clearly? (Along with the examples you provided for added context.)

Still, the expectation that the cache with PDM does not provide the performance benefit should probably also be documented for awareness, to quell any potential confusion (a few seem to have reported this concern previously). Unless the
Description
The docs for centralized caching have a note that it's only applicable to named requirements, but this term is not well defined:
"named requirements" has very few results in this repo when I searched issues for it, while a search engine query for

python "named requirements"

didn't seem to help either, with many results containing "file named requirements.txt".

Is the intention to refer to explicitly declared dependencies, rather than those that are implicitly installed as a result?
Just to confirm: when using another source like the PyTorch index for those packages, while they appear to add to the cache (`pdm cache info`), is this documentation note saying they don't qualify as cache-friendly because they're packages not sourced from the PyPI index?

When I run `pdm install --frozen-lockfile`, despite what `pdm cache info` implies with the 5 GB cache, I still see these dependencies install with `Downloading xx%`, even when I explicitly add them to the `pyproject.toml` dependencies list. This still happens when I prefer those dependencies to be pulled from PyPI, so I'm not sure if it's actually referring to a network transfer here, or whether something else is actually happening during this step?