feat: DRY todo done, add .gitignore(python) for this repo #101

Merged · 1 commit · Apr 9, 2021
.gitignore (new file): 130 additions, 0 deletions

@@ -0,0 +1,130 @@

# Byte-compiled / optimized / DLL files
__pycache__/
*.py[cod]
*$py.class

# C extensions
*.so

# Distribution / packaging
.Python
build/
develop-eggs/
dist/
downloads/
eggs/
.eggs/
lib/
lib64/
parts/
sdist/
var/
wheels/
pip-wheel-metadata/
share/python-wheels/
*.egg-info/
.installed.cfg
*.egg
MANIFEST

# PyInstaller
# Usually these files are written by a python script from a template
# before PyInstaller builds the exe, so as to inject date/other infos into it.
*.manifest
*.spec

# Installer logs
pip-log.txt
pip-delete-this-directory.txt

# Unit test / coverage reports
htmlcov/
.tox/
.nox/
.coverage
.coverage.*
.cache
nosetests.xml
coverage.xml
*.cover
*.py,cover
.hypothesis/
.pytest_cache/

# Translations
*.mo
*.pot

# Django stuff:
*.log
local_settings.py
db.sqlite3
db.sqlite3-journal

# Flask stuff:
instance/
.webassets-cache

# Scrapy stuff:
.scrapy

# Sphinx documentation
docs/_build/

# PyBuilder
target/

# Jupyter Notebook
.ipynb_checkpoints

# IPython
profile_default/
ipython_config.py

# pyenv
.python-version

# pipenv
# According to pypa/pipenv#598, it is recommended to include Pipfile.lock in version control.
# However, in case of collaboration, if having platform-specific dependencies or dependencies
# having no cross-platform support, pipenv may install dependencies that don't work, or not
# install all needed dependencies.
#Pipfile.lock

# PEP 582; used by e.g. github.com/David-OConnor/pyflow
__pypackages__/

# Celery stuff
celerybeat-schedule
celerybeat.pid

# SageMath parsed files
*.sage.py

# Environments
.env
.venv
env/
venv/
ENV/
env.bak/
venv.bak/

# Spyder project settings
.spyderproject
.spyproject

# Rope project settings
.ropeproject

# mkdocs documentation
/site

# mypy
.mypy_cache/
.dmypy.json
dmypy.json

# Pyre type checker
.pyre/
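
The file above is the stock GitHub .gitignore template for Python projects. As a rough illustration (not part of the PR), Python's fnmatch can approximate how a few of these glob patterns match file names; git's real matching additionally handles directory-only patterns (trailing /), nested paths, and ! negation, which fnmatch does not:

from fnmatch import fnmatch

# Approximate check of a few patterns from the file above. fnmatch supports
# the same character-class syntax ("[cod]") used by *.py[cod].
patterns = ["*.py[cod]", "*$py.class", "*.so", "*.egg-info/*"]

for name in ["app.pyc", "app.pyo", "app.py", "mod$py.class", "pkg.egg-info/PKG-INFO"]:
    hits = [p for p in patterns if fnmatch(name, p)]
    print(f"{name}: {hits if hits else 'no match'}")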
hub-mirror/hub.py: 7 additions, 19 deletions
@@ -86,29 +86,17 @@ def dynamic_list(self):
         return self._get_all_repo_names(url)

     @functools.lru_cache
-    def _get_all_repo_names(self, url):
-        page, per_page = 1, 60
-        api = url + "?page=0&per_page=" + str(per_page)
+    def _get_all_repo_names(self, url, page=1):
+        per_page = 60
+        api = url + f"?page={page}&per_page=" + str(per_page)
         # TODO: src_token support
         response = self.session.get(api)
-        # TODO: DRY
         all_items = []
         if response.status_code != 200:
             print("Repo getting failed: " + response.text)
-            return []
+            return all_items
         items = response.json()
-        all_items = []
-        while items:
+        if items:
             names = [i['name'] for i in items]
-            all_items += names
-            items = None
-            if 'next' in response.links:
-                url_next = response.links['next']['url']
-                response = self.session.get(url_next)
-                # TODO: DRY
-                if response.status_code != 200:
-                    print("Repo getting failed: " + response.text)
-                    return []
-                page += 1
-                items = response.json()
-
+            return names + self._get_all_repo_names(url, page=page+1)
         return all_items

Review conversation:

Yikun (Owner), Apr 9, 2021, on the new api line:
A quick question: the page start is changed from 0 to 1. Do a start of 0 and a start of 1 give the same result (on both GitHub and Gitee)?

yihong0618 (Contributor, Author), Apr 9, 2021:
I think it should start at 1; a start of 0 is kind of wrong. I ran into the same problem in another repo of mine, and I think the GitHub API has the same logic. [screenshot]

Yikun (Owner), Apr 9, 2021, on the recursive return:
nice recursion
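
For reference (not part of the diff), a minimal standalone sketch of the same page-number recursion, assuming a GitHub-style endpoint that returns a JSON list per page and an empty list past the last page; the URL in the usage comment is illustrative. Note that the bare @functools.lru_cache decorator form used in the diff requires Python 3.8+.

import functools

import requests

session = requests.Session()

@functools.lru_cache(maxsize=None)
def get_all_repo_names(url, page=1):
    # Fetch one page of repositories; GitHub-style APIs number pages from 1.
    response = session.get(f"{url}?page={page}&per_page=60")
    if response.status_code != 200:
        print("Repo getting failed: " + response.text)
        return []
    items = response.json()
    if not items:
        return []  # an empty page means we are past the end, so stop recursing
    names = [item["name"] for item in items]
    # Recurse for the next page and concatenate the results.
    return names + get_all_repo_names(url, page=page + 1)

# Illustrative call (any endpoint returning a JSON list of {"name": ...} objects):
# print(get_all_repo_names("https://api.github.com/users/yihong0618/repos"))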