Release/1.5.0 #4516

Merged: 71 commits merged into aiidateam:master on Nov 13, 2020

Conversation

ramirezfranciscof
Member

Setting up the branch and PR for the next release. There are still some more changes to incorporate, I think, but at least we can start checking things out and organizing the CHANGELOG (for example, how are we going to report all the changes in the docs? I don't think it is worth putting in the details of all commits as it is now).

I couldn't manage to get the script API_clients/github_pr_for_release.py to work (is this Python 2?), so I just ran `git log --pretty=oneline v1.4.2..` and made the changes to the issue numbers manually.

sphuber and others added 30 commits September 24, 2020 14:51
Python 3.5 is EOL as of September 13 2020. CI testing will now only be
done against Python 3.6 and 3.8.
…ink (aiidateam#4388)

The link manager was always referring to an 'input link' while it should instead refer to an 'input link label' or 'output link label', depending on the value of the link direction, determined by the `self._incoming` attribute.
…iidateam#4399)

The return value of `Node.open` was wrapped in `WarnWhenNotEntered` in
`aiida-core==1.4.0` in order to warn users that use the method without a
context manager, which will start to raise in v2.0. Unfortunately, the
raising came a little early as the wrapper does not implement the
`__iter__` and `__next__` methods, which can be called by clients.

An example is `numpy.genfromtxt`, which will notice that the return value of `Node.open` is file-like and so will wrap it in `iter`. Without the current fix, this raises a `TypeError`. The proper fix would be to forward all magic methods to the wrapped file-like object, but it is not clear how to do this.
…iidateam#4398)

The patch release of `plumpy` comes with a simple fix that will prevent
the printing of many warnings when running processes. So although not
critical, it does improve user experience.
…#4405)

The options for the message broker configuration do define defaults; however, the interactive clones for `verdi setup`, which are defined in `aiida.cmdline.params.options.commands.setup`, override the default with the `contextual_default`, which sets an empty default unless it is taken from an existing profile. The result is that for new profiles, the broker options do not specify a default, even though for most use cases the defaults will be required. After the changes of this commit, the prompt of `verdi setup` will provide a default for all broker parameters, so most users will simply have to press enter each time.
…ost (aiidateam#4408)

The help string of the `--broker-virtual-host` option of `verdi setup` incorrectly said that forward slashes have to be escaped, but this is not true. The code will escape any characters necessary when constructing the URL to connect to RabbitMQ. On top of that, slashes would fail the validation outright, even though these are common in virtual hosts. For example, the virtual host always starts with a leading forward slash, but our validation would reject it. Also, the leading slash will be added by the code and so does not have to be given in the setup phase. The help string and the documentation now reflect this.

The exact naming rules for virtual hosts, imposed by RabbitMQ or other implementations of the AMQP protocol, are not fully clear. But instead of putting an explicit validation on AiiDA's side and running the risk that we incorrectly reject valid virtual host names, we simply accept all strings. In any case, any non-default virtual host will have to be created through RabbitMQ's control interface, which will perform the validation itself.
This is supported by `pylint` as of v2.5.
…idateam#4407)

The command was showing the called subprocesses in a random order and
used the node type, which is often uninformative. For example, all
workchains are always shown as `WorkChainNode`. By using the process
label instead, which is more specific, and ordering the called nodes by
creation time, the list gives a more natural overview of the order in
which the subprocesses were called.
…l` (aiidateam#4410)

Starting from v6.0, `pytest` supports using the `pyproject.toml` instead
of a `pytest.ini` to define its configuration. Given that this is
quickly becoming the Python packaging standard and allows us to reduce
the number of configuration files in the top level of the repository, we
increase the version requirement of `pytest`.

Note that we also require `pytest-rerunfailures>=9.1.1` because lower
versions are broken in combination with `pytest==6.1`. See the following:

   pytest-dev/pytest-rerunfailures#128

for details.
…m#4413)

The project diff percentage is the change in coverage w.r.t. all lines
in the project, whereas the patch diff percentage is the change in
coverage w.r.t. only lines touched by the PR. The patch threshold is
currently defaulting to 0%, hence it is very easy to fail. By raising it
to 0.1%, it should now only fail when there is a significant reduction
in coverage. The number may need to be further tweaked.
Since Python 3.5 is no longer supported, format string interpolations can now be replaced by f-strings, introduced in Python 3.6, which are more readable, require fewer characters and are more efficient.

Note that `pylint` issues a warning when using f-strings for log messages, just as it does for format-interpolated strings. The reasoning is that this is slightly inefficient, because the strings are always interpolated even if the log record is discarded, and that by not passing the formatting parameters as arguments, the available metadata is reduced. I feel these inefficiencies are premature optimizations, as they are really minimal and do not outweigh the improved readability and maintainability of using f-strings. That is why the `pylint` config is updated to ignore the warning `logging-fstring-interpolation`, which replaces `logging-format-interpolation` that was ignored before.

The majority of the conversions were done automatically with the linting tool `flynt`, which is also added as a pre-commit hook. It is added before the `yapf` step so that, since `flynt` touches formatting, `yapf` still gets a chance to check the result.
…MQ (aiidateam#4375)

* Using `pip install -e .` for tox runs improves startup time for tests
   by preventing an unnecessary copy of files.

* The docker-compose yml file allows setting up an isolated RabbitMQ
   instance for local CI testing.
aiidateam#4419)

The builder object was already able to delete set inputs through the `__delitem__` method, but `__delattr__` was not implemented, causing `del builder.input_name` to raise. This was not consistent with how these inputs can be set or accessed, as both `__getattr__` and `__setattr__` are implemented. Implementing `__delattr__` brings the implementation up to par for all attribute methods.
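A generic sketch of the attribute-access pattern described above; the class and its internal dict are made up for illustration and do not mirror the actual `ProcessBuilder` internals:

```python
class BuilderSketch:
    """Store inputs in an internal dict and expose them as attributes."""

    def __init__(self):
        # Bypass our own __setattr__ for the internal storage.
        object.__setattr__(self, '_data', {})

    def __getattr__(self, name):
        try:
            return self._data[name]
        except KeyError:
            raise AttributeError(name)

    def __setattr__(self, name, value):
        self._data[name] = value

    def __delattr__(self, name):
        # The missing piece: without this, `del builder.input_name` raises
        # even though `del builder['input_name']` already worked.
        try:
            del self._data[name]
        except KeyError:
            raise AttributeError(name)

    def __delitem__(self, name):
        del self._data[name]


builder = BuilderSketch()
builder.x = 1
del builder.x  # now mirrors `del builder['x']`
```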
…nk_list` (aiidateam#4416)

The `upload_calculation` transport task would fail if either the
`remote_copy_list` or `remote_symlink_list` contained a target filepath
that had a nested directory that did not exist yet in the remote working
directory. Instead of inspecting the file system or creating the folders remotely each time a nested target path is encountered, which would incur a potentially expensive operation over the transport, the directory hierarchy is first created in the local sandbox folder before it is copied recursively to the remote in a single shot.
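A minimal sketch of the idea, with hypothetical names for the sandbox folder and copy list (not the actual `upload_calculation` code):

```python
import pathlib


def prepare_sandbox(sandbox: pathlib.Path, remote_copy_list):
    """Create the nested directory hierarchy locally before a single recursive upload.

    `remote_copy_list` is assumed to be an iterable of (source, target) path pairs.
    """
    for _source, target in remote_copy_list:
        target_path = sandbox / target
        # Creating the parents locally is cheap; doing it over the transport
        # for every nested target would cost a round trip each time.
        target_path.parent.mkdir(parents=True, exist_ok=True)

# The sandbox is then copied to the remote working directory in one shot,
# e.g. with a recursive `transport.put(sandbox, remote_workdir)`-style call.
```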
The how-to on supporting external codes in AiiDA did not link to the packaging guide. While the how-to is great for learning how AiiDA works, developers who want to get started as quickly as possible are better off using the AiiDA plugin cutter.
Add docs live build using sphinx-autobuild. This dramatically speeds up the process of checking the rendered documentation while editing.

Co-authored-by: Leopold Talirz <leopold.talirz@gmail.com>
Uses the `sphinxext-rediraffe` Sphinx extension to automatically create
redirects when documentation pages are moved and therefore their URLs
change. New redirect rules should be added to `docs/source/redirects.txt`.
There was no centralized location that explained how a plugin package is installed, even though this is one of the most critical first steps for new AiiDA users. A new how-to section, "How to install plugins", is created that details this information. Since a file `plugins.rst` already existed, which contains information on how to develop a plugin package, it is renamed to `plugins_develop.rst`. Content is mostly moved from older existing documentation.
This commit primarily upgrades the sphinx dependency from sphinx v2 to v3, allowing for other upgrades of sphinx version pinning.

It also moves the `aiida/sphinxext` testing to the official sphinx testing infrastructure, and fixes an issue with the automodule writer. However, the automodule functionality cannot yet be reinstated, due to issues with referencing of these objects.
The base URL of the REST API was returning a 404 invalid URL response
without providing any guidance to new users as to how to use the API.
We change this to return the list of endpoints formerly available only
under /server/endpoints.

Documentation of where to find the list of endpoints -- which seems
to have been entirely deleted -- is added.

Co-authored-by: Giovanni Pizzi <gio.piz@gmail.com>
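For illustration, a hedged example of querying the base URL; the host, port and `/api/v4` prefix are assumed defaults of a locally running `verdi restapi` and may differ in a given setup:

```python
import requests

# Assumed default address of a locally running `verdi restapi` instance.
BASE_URL = 'http://127.0.0.1:5000/api/v4'

# After this change, the base URL itself returns the list of available
# endpoints instead of a bare 404.
response = requests.get(BASE_URL)
response.raise_for_status()
print(response.json())
```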
Instead of copying the template file inline, links are provided to the
same template on the MARVEL NCCR repository that is used for the Quantum
Mobile.
The scripts were updated to work with the AiiDA v1.0 interface. The
example for `AuthInfo` was also simplified as the `Computer` class now
provides the `get_authinfo` method.
This information is felt to be too detailed for the how-to section and
it breaks the flow too much.
The "how to share data" section includes instructions both for dealing
with AiiDA archives (e.g. for publishing AiiDA graphs alongside your
publication) and for using the AiiDA REST API.

Co-authored-by: Casper Welzel Andersen <43357585+CasperWA@users.noreply.github.com>
We have agreed on the terms "AiiDA archive (file)" and "AiiDA archive
format".

Co-authored-by: Casper Welzel Andersen <43357585+CasperWA@users.noreply.github.com>
This section gives a high-level overview of the various methods with which one can interact with AiiDA. For details it refers to the relevant sections with more information.
xorJane and others added 2 commits November 7, 2020 18:05
The limitation that process functions were not submittable, meaning they
could not be sent to a daemon worker but could only be run by the current
interpreter, was a historical one. Before the introduction of the system
of processes in v1.0, a `calcfunction` was nothing more than the
execution of a normal function. However, now, a process function creates
a `Process` instance in the background, just as any other process. This
means it can also be serialized and deserialized by a daemon worker.

Here we remove the limitation of process functions not being submittable simply by removing the check. Note that there is no need to change the implementation other than adding two attributes on the decorated function that specify the corresponding process class and the method that allows the instance to be recreated from its serialized form.

Co-authored-by: Sebastiaan Huber <mail@sphuber.net>
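A short usage sketch of what this change enables; the toy function and inputs are made up, while `calcfunction` and `submit` are the standard `aiida.engine` entry points (running this requires a configured profile and a running daemon):

```python
from aiida.engine import calcfunction, submit
from aiida.orm import Int


@calcfunction
def add(x, y):
    """Toy process function used only for illustration."""
    return Int(x.value + y.value)


# Before this change, process functions could only be run in the current
# interpreter; now they can also be sent to a daemon worker.
node = submit(add, x=Int(1), y=Int(2))
```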
@ramirezfranciscof ramirezfranciscof force-pushed the release/1.5.0 branch 2 times, most recently from e5639c0 to 4f651b8 Compare November 11, 2020 08:32
chrisjsewell and others added 3 commits November 11, 2020 16:35
)

The `SlurmJobResource` resource class used by the `SlurmScheduler` plugin contained a bug in the `validate_resources` method that would cause a float value to be set for the `num_cores_per_mpiproc` field in certain cases. This would cause the submit script to fail because SLURM only accepts integers for the corresponding `--cpus-per-task` flag.

The reason is that the code was incorrectly using `isinstance(_, int)` to check that the division of `num_cores_per_machine` by `num_mpiprocs_per_machine` is an integer. In addition to the negation missing in the conditional, this is not the correct way of checking whether a division is an integer. Instead it should check that the value is identical after it is cast to `int`.
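A minimal sketch of the check described above; variable names follow the commit message and this is not the exact `validate_resources` code:

```python
num_cores_per_machine = 12
num_mpiprocs_per_machine = 4

value = num_cores_per_machine / num_mpiprocs_per_machine  # true division: always a float

# Buggy pattern: `value` is a float, so `isinstance(value, int)` is always
# False, regardless of whether the division is exact.
is_integer_wrong = isinstance(value, int)

# Correct pattern: compare against the value cast to `int`.
if value != int(value):
    raise ValueError('num_cores_per_machine is not divisible by num_mpiprocs_per_machine')

num_cores_per_mpiproc = int(value)  # store an integer, as SLURM requires
```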
…ateam#4357)

Note that the option is still there but no longer makes a difference. It now merely prints a deprecation warning, but is otherwise ignored. The reason is that otherwise users would be forced to continue to use it despite it raising a deprecation warning. The only danger is for users that have come to depend on the slightly weird behavior that, in order to delete non-empty groups, one would have to pass the `--clear` option, otherwise the command would fail. After this change, the group would be deleted without complaint, which may break this use case. This use case was estimated to be unlikely, and so it was accepted to simply ignore the option.
This PR builds on aiidateam#4448,
with the goal of improving both the export writer API
(allowing for "streamed" data writing)
and performance of the export process (CPU and memory usage).

The writer is now used as a context manager,
rather than passing all data to it after extraction of the data from the AiiDA database.
This means it is called throughout the export process,
and will allow for less data to be kept in RAM when moving to a new archive format.

The number of database queries has also been reduced, resulting in a faster process.

Lastly, code for read/writes to the archive has been moved to the https://github.com/aiidateam/archive-path package.
This standardises the interface for both zip and tar, and
especially for export to tar, provides much improved performance,
since the data is now written directly to the archive
(rather than writing to a folder then only compressing at the end).

Co-authored-by: Leopold Talirz <leopold.talirz@gmail.com>
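A generic sketch of the context-manager writer pattern described above; the class and method names are assumptions for illustration, not the actual export writer API:

```python
import json
import zipfile


class ArchiveWriterSketch:
    """Write entities to an archive as they are extracted, instead of
    collecting everything in memory first."""

    def __init__(self, filename):
        self._filename = filename
        self._zipfile = None

    def __enter__(self):
        self._zipfile = zipfile.ZipFile(self._filename, mode='w')
        return self

    def __exit__(self, *exc_info):
        self._zipfile.close()

    def write_entity(self, name, data):
        # Data is written directly to the archive, so it does not have to be
        # kept in RAM for the whole duration of the export.
        self._zipfile.writestr(name, json.dumps(data))


# Usage: the writer stays open for the whole export and receives data in chunks.
with ArchiveWriterSketch('export.zip') as writer:
    for pk in range(3):
        writer.write_entity(f'nodes/{pk}.json', {'pk': pk})
```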
@chrisjsewell (Member) left a comment:

Basically copied from the update in the last aiida team meeting 😄

This commit moves the mypy execution to run in the full aiida-core python environment.

Currently, the mypy in the pre-commit is used as a "non-local" import
and adds the blanket `--ignore-missing-imports` flag.
This greatly reduces the effectiveness of the type checking, because it does not check any types from classes/functions imported from third-party packages.

Similarly, adding `check_untyped_defs = True` improves the checking coverage
(see https://mypy.readthedocs.io/en/stable/common_issues.html#no-errors-reported-for-obviously-wrong-code).
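An illustration of what `check_untyped_defs = True` adds, using a made-up function rather than aiida-core code:

```python
def build_label(pk):
    # Without `check_untyped_defs`, mypy skips the body of this function
    # entirely because its signature has no type annotations.
    prefix: str = 10  # with the flag enabled, mypy reports an incompatible assignment here
    return f'{prefix}-{pk}'
```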
This commit is a small iterative improvement to the archive import logic, added to reduce memory overhead by reducing the number of variables held in memory at any one time.
Fixes a bug whereby archives created with the latest code fail to import in the last v1.4.2 release (if they contain group extras). This update means that these new archives are no longer compatible with v1.4.2.
@chrisjsewell (Member) left a comment:

add final merged PRs

The `process_type` attribute has changed over the years; currently it
must have some descriptor for processes and be None for data types.
Apparently this has not always been the case, and thus old databases may
have both data and process nodes with either empty strings ('') and/or
None entries in their `process_type` attributes.

Additionally, there were some problems with how the unregistered entry
points were considered that made it impossible to query for them.

In order to consider all of this when filtering and doing statistics,
it has been decided to:

1) Group all instances of a given node_type that have either '' or None
as their process_type in the same `full_type` (`node_type|` ) and hence
always query for both when the `process_type` is missing.

2) Remove the `aiida.descriptor:` and the `no-entry-point` from the
`process_type` part of unregistered processes. This was interfering when the `full_type` was given to return the filtering options to query for these processes.

Tests were adapted to cover these new compatibility aspects.
This feature returns a namespace tree of the available node types in the
database (data node_types + process process_types) with the addition of
a count at each leaf / branch. It also has the option of doing so for a
single user, if the pk is provided as an option.
@chrisjsewell chrisjsewell self-requested a review November 13, 2020 15:01
@sphuber sphuber merged commit f945dca into aiidateam:master Nov 13, 2020
@sphuber sphuber deleted the release/1.5.0 branch November 13, 2020 15:02