Changelog

v2.4.1 - 2023-11-15

This patch release comes with an improved set of Docker images and a few fixes to provide compatibility with recent versions of pymatgen.

Docker

  • Improved Docker images [fec4e3bc4]
  • Add folders that automatically run scripts before/after daemon start in Docker image [fe4bc1d3d]
  • Pass environment variable to aiida-prepare script in Docker image [ea47668ea]
  • Update the .devcontainer to use the new docker stack [413a0db65]

Dependencies

  • Add compatibility for pymatgen>=v2023.9.2 [1f6027f06]

Devops

  • Tests: Make PsqlDosStorage profile unload test more robust [f392459bd]
  • Tests: Fix StructureData test breaking for recent pymatgen versions [093037d48]
  • Trigger Docker image build when pushing to support/* branch [5cf3d1d75]
  • Use aiida-core-base image from ghcr.io [0e5b1c747]
  • Loosen trigger conditions for Docker build CI workflow [22e8a8069]
  • Follow-up docker build runner macOS-ARM64 [1bd9bf03d]
  • Upload artifact by PR from forks for docker workflow [afc2dad8a]
  • Update the image name for docker image [17507b410]

v2.4.0 - 2023-06-22

This minor release comes with a number of new features and improvements as well as a significant amount of bug fixes. Support for Python 3.8 has been officially dropped in accordance with AEP 003.

As a result of one of the bug fixes, related to the caching of CalcJob nodes, a database migration had to be added, the first since the release of v2.0. After upgrading to v2.4.0, you will be prompted to migrate your database. The automated migration drops the hashes of existing CalcJobNodes and provides you with the optional command to recompute them. Execute that command if existing CalcJobNodes need to remain usable as valid cache sources.

Features

  • Config: Add option to change recursion limit in daemon workers [226159fd9]
  • CLI: Added compress option to verdi storage maintain [add474cbb]
  • Expose get_daemon_client so it can be imported from aiida.engine [1a0c1ee93]
  • verdi computer test: Improve messaging of login shell check [062a58260]
  • verdi node rehash: Add aiida.node as group for --entry-point [2fd07514d]
  • verdi process status: Add call_link_label to stack entries [bd9372a5f]
  • SinglefileData: Add the from_string classmethod [c25de615e]
  • DynamicEntryPointCommandGroup: Add support for shared options [220a65c76]
  • DynamicEntryPointCommandGroup: Pass ctx to command callable [7de711be4]
  • ProcessNode: Add the exit_code property [ad8a539ee]

Fixes

  • Engine: Dynamically update maximum stack size close to overflow to address RecursionError under heavy load [f797b4766]
  • CalcJobNode: Fix the computation of the hash [685e0f87d]
  • CalcJob: Ignore file in remote_copy_list not existing [101a8d61b]
  • CalcJob: Assign outputs from node in case of cache hit [777b97601]
  • Fix log messages being logged twice to the daemon log file [bfd63c790]
  • Process control: Change language when not waiting for response [68cb4579d]
  • Do not assume pgtest cluster started in postgres_cluster fixture [1de2ca576]
  • Process control: Warn instead of except when daemon is not running [ad4fbcccb]
  • DirectScheduler: Add ? as JobState.UNDETERMINED [ffc869d8f]
  • CLI: Correct verdi devel rabbitmq tasks revive docstring [13cadd05f]
  • SinglefileData: Fix bug when filename is pathlib.Path [f36bf583c]
  • Improve clarity of various deprecation warnings [c72a252ed]
  • CalcJob: Remove default of withmpi input and make it optional [6a88cb315]
  • Process: Have inputs property always return AttributesFrozenDict [60756fe30]
  • PsqlDos: Add migration to remove hashes for all CalcJobNodes [7ad916836]
  • PsqlDosMigrator: Commit changes when migrating existing schema [f84fe5b60]
  • PsqlDos: Add entry_point_string argument to drop_hashes [c7a36fa3d]
  • PsqlDos: Make hash reset migrations more explicit [c447a1af3]
  • verdi process list: Fix double percent sign in daemon usage [68be866e6]
  • Fix the daemon_client fixture [9e5f5eefd]
  • Transports: Raise FileNotFoundError in copy if source doesn't exist [d82069441]

Devops

  • Add graphviz to system requirements of RTD build runner [3df02550e]
  • Add types for DefaultFieldsAttributeDict subclasses [afed5dc46]
  • Bump Python version for RTD build [5df446cd3]
  • Pre-commit: Fix mypy warning in aiida.orm.utils.serialize [c25922484]
  • Update Docker base image aiida-prerequisites==0.7.0 [ac755afae]
  • Use f-strings in aiida/engine/daemon/execmanager.py [49cffff21]

Dependencies

Deprecations

  • QueryBuilder: Deprecate debug argument and use logger [603ff37a0]

Documentation

  • Add missing core. prefix to all verdi data subcommands [99319b3c1]
  • Clarify negation operator in QueryBuilder filters [2c828811f]
  • Correct "variable" to "variadic" arguments [978217693]
  • Fix reference target warnings related to flask_restful [4f76e0bd7]

v2.3.1 - 2023-05-22

Fixes

  • DaemonClient: Clean stale PID file in stop_daemon [#6007]

v2.3.0 - 2023-04-17

This release comes with a number of improvements, some of the more useful and important of which are quickly highlighted. A full list of changes can be found below.

Process function improvements

A number of improvements in the usage of process functions, i.e., calcfunction and workfunction, have been added. Each subsection title is a link to the documentation for more details.

Variadic arguments can be used in case the function should accept a list of inputs of unknown length. Consider the example of a calculation function that computes the average of a number of Int nodes:

@calcfunction
def average(*args):
    return sum(args) / len(args)

result = average(*(1, 2, 3))

Type hint annotations can now be used to add automatic type validation to process functions.

@calcfunction
def add(x: Int, y: Int):
    return x + y

add(1, 1.0)  # Passes
add(1, '1.0')  # Raises an exception

Since the Python base types (int, str, bool, etc.) are automatically serialized, these can also be used in type hints. The following example is therefore identical to the previous:

@calcfunction
def add(x: int, y: int):
    return x + y

The calcfunction and workfunction decorators generate a Process for the decorated function on the fly. In doing so, they automatically define the ProcessSpec that is normally defined manually, as for a CalcJob or a WorkChain. Previously, this would only define the ports that the function process accepts, leaving the help attribute of each port empty. The help is now parsed from the docstring, provided it can be parsed correctly:

@calcfunction
def add(x: int, y: int):
    """Add two integers.

    :param x: Left hand operand.
    :param y: Right hand operand.
    """
    return x + y

assert add.spec().inputs['x'].help == 'Left hand operand.'
assert add.spec().inputs['y'].help == 'Right hand operand.'

This functionality is particularly useful when exposing process functions in work chains. Since the process specification of the exposed function will be automatically inherited, the user can inspect the help string through the builder. The automatic documentation produced by the Sphinx plugin will now also display the help string parsed from the docstring.
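
As a rough illustration (a hedged sketch: the AddWorkChain class and the reuse of the add function defined above are illustrative assumptions, not part of the release itself), exposing the function makes the parsed help available on the work chain specification as well:

from aiida.engine import WorkChain

class AddWorkChain(WorkChain):
    """Work chain that simply exposes the add calcfunction shown above."""

    @classmethod
    def define(cls, spec):
        super().define(spec)
        spec.expose_inputs(add)

# The exposed ports inherit the help parsed from the docstring of add
assert AddWorkChain.spec().inputs['x'].help == 'Left hand operand.'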

The keys in the output dictionary can now contain nested namespaces:

@calcfunction
def add(alpha, beta):
    return {'nested.sum': alpha + beta}

result = add(Int(1), Int(2))
assert result['nested']['sum'] == 3

Process functions can now be defined as class member methods of work chains:

class CalcFunctionWorkChain(WorkChain):

    @classmethod
    def define(cls, spec):
        super().define(spec)
        spec.input('x')
        spec.input('y')
        spec.output('sum')
        spec.outline(
            cls.run_compute_sum,
        )

    @staticmethod
    @calcfunction
    def compute_sum(x, y):
        return x + y

    def run_compute_sum(self):
        self.out('sum', self.compute_sum(self.inputs.x, self.inputs.y))

The function should be declared as a staticmethod and it should not include the self argument in its function signature. It can then be called from within the work chain as self.function_name(*args, **kwargs).

Scheduler plugins: including environment_variables

The Scheduler base class implements the concrete method _get_submit_script_environment_variables which formats the lines for the submission script that set the environment variables that were defined in the metadata.options.environment_variables input. Before it was left up to the plugins to actually call this method in the _get_submit_script_header, but this is now done by the base class in the get_submit_script. You can now remove the call to _get_submit_script_environment_variables from your scheduler plugins, as the base class will take care of it. A deprecation warning is emitted if the base class detects that the plugin is still calling it manually. See the pull request for more details.
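
A hedged sketch of what this means for a plugin (MyScheduler and the header line are hypothetical; only the relevant method is shown):

from aiida.schedulers import Scheduler

class MyScheduler(Scheduler):

    def _get_submit_script_header(self, job_tmpl):
        lines = [f'#QUEUE --job-name={job_tmpl.job_name}']
        # No longer needed: the base class now appends the environment variables
        # itself in get_submit_script, so drop any call like the following:
        # lines.append(self._get_submit_script_environment_variables(job_tmpl))
        return '\n'.join(lines)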

WorkChain: conditional predicates should return boolean-like

Up till now, work chain methods that are used as the predicate in a conditional, e.g., if_ or while_, could return any type. For example:

class SomeWorkChain(WorkChain):

    @classmethod
    def define(cls, spec):
        super().define(spec)
        spec.outline(if_(cls.some_conditional)())

    def some_conditional(self):
        if self.ctx.something == 'something':
            return True

The some_conditional method is used as the "predicate" of the if_ conditional. It returns True or None. Since the None value in Python is "falsey", it would be considered as returning False. However, this duck-typing could accidentally lead to unexpected situations, so we decided to be more strict on the return type. As of now, a deprecation warning is emitted if the method returns anything that is not "boolean-like", i.e., does not implement the __bool__ method. If you see this warning, please make sure to return a boolean, like the built-ins True or False, or a numpy.bool or aiida.orm.Bool. See the pull request for more details.
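
For the example above, the deprecation warning is avoided by returning an explicit boolean from the predicate:

    def some_conditional(self):
        return self.ctx.something == 'something'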

Controlling usage of MPI

It is now possible to define on a code object whether it should be run with or without MPI through the with_mpi attribute. It can be set from the Python API as AbstractCode(with_mpi=with_mpi) or through the --with-mpi / --no-with-mpi option of the verdi code create CLI command. This option adds another way to control the use of MPI in calculation jobs, in addition to the existing ones defined by the CalcJob plugin and the metadata.options.withmpi input. For more details on how these are controlled and how conflicts are handled, please refer to the documentation.
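
As a minimal, hedged sketch of the Python API route (the InstalledCode on localhost and the executable path are made-up examples):

from aiida.orm import InstalledCode, load_computer

code = InstalledCode(
    label='bash-no-mpi',
    computer=load_computer('localhost'),
    filepath_executable='/usr/bin/bash',
    with_mpi=False,  # calculation jobs using this code will be run without MPI
)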

Add support for Docker containers

Support is added for running calculation within Docker containers. For example, to run Quantum ESPRESSO pw.x in a Docker container, write the following file to config.yml:

label: qe-pw-on-docker
computer: localhost
engine_command: docker run -i -v $PWD:/workdir:rw -w /workdir {image_name} sh -c
image_name: haya4kun/quantum_espresso
filepath_executable: pw.x
default_calc_job_plugin: quantumespresso.pw
use_double_quotes: false
wrap_cmdline_params: true

and run the CLI command:

verdi code create core.code.containerized --config config.yml --non-interactive

This should create a ContainerizedCode that you can now use to launch a PwCalculation. For more details, please refer to the documentation.

Exporting code configurations

It is now possible to export the configuration of an existing code through the verdi code export command. The produced YAML file can be used to recreate the code through the verdi code create command. Note that you should use the correct subcommand based on the type of the original code. For example, if it was an InstalledCode you should use verdi code create core.code.installed. For the legacy Code instances, you should use verdi code setup. See the pull request for more details.

Full list of changes

Features

  • AbstractCode: Add the with_mpi attribute [#5922]
  • ContainerizedCode: Add support for Docker images to use as Code for CalcJobs [#5841]
  • InstalledCode: Allow relative path for filepath_executable [#5879]
  • CLI: Allow specifying output filename in verdi node graph generate [#5897]
  • CLI: Add --timeout option to all verdi daemon commands [#5966]
  • CLI: Add the verdi calcjob remotecat command [#4861]
  • CLI: Add the verdi code export command [#5860]
  • CLI: Improved customizability and scriptability of verdi storage maintain [#5936]
  • CLI: verdi quicksetup: Further reduce required user interaction [#5768]
  • CLI: verdi computer test: Add test for login shell being slow [#5845]
  • CLI: verdi process list: Add exit_message as projectable attribute [#5853]
  • CLI: verdi node delete: Add verbose list of pks to be deleted [#5878]
  • CLI: Fail command if --config file contains unknown key [#5939]
  • CLI: verdi daemon status: Do not except when no profiles are defined [#5874]
  • ORM: Add unary operations +, - and abs to NumericType [#5946]
  • Process functions: Support class member functions as process functions [#4963]
  • Process functions: Infer argument valid_type from type hints [#5900]
  • Process functions: Parse docstring to set input port help attribute [#5919]
  • Process functions: Add support for variadic arguments [#5691]
  • Process functions: Allow nested output namespaces [#5954]
  • Process: Store JSON-serializable metadata inputs on the node [#5801]
  • Port: Add the is_metadata keyword [#5801]
  • ProcessBuilder: Include metadata inputs in get_builder_restart [#5801]
  • StructureData: Add mode argument to get_composition [#5926]
  • Scheduler: Allow terminating job if submission script is invalid [#5849]
  • SlurmScheduler: Detect broken submission scripts for invalid account [#5850]
  • SlurmScheduler: Parse the NODE_FAIL state [#5866]
  • WorkChain: Add dataclass serialisation to context [#5833]
  • IcsdDbImporter: Add is_theoretical tag to queried entries [#5868]

Fixes

  • CLI: Prefix the verdi data subcommands with core. [#5846]
  • CLI: Respect config log levels if --verbosity not explicitly passed [#5925]
  • CLI: verdi config list: Do not except if no profiles are defined [#5921]
  • CLI: verdi code show: Add missing code attributes [#5916]
  • CLI: verdi quicksetup: Fix error incorrect role when creating database [#5828]
  • CLI: Fix error in aiida.cmdline.utils.log.CliFormatter [#5957]
  • Daemon: Fix false-positive of stopped daemon in verdi daemon status [#5862]
  • DaemonClient: Fix and homogenize use of timeout in client calls [#5960]
  • ProcessBuilder: Fix bug in _recursive_merge [#5801]
  • QueryBuilder: Catch new exception raised by sqlalchemy>=1.4.45 [#5875]
  • Fix the %verdi IPython magics utility [#5961]
  • Fix bug in aiida.engine.utils.instantiate_process [#5952]
  • Fix incorrect import of exception from kiwipy.communications [#5947]

Deprecations

  • Scheduler: Move setting of environment variables into base class [#5948]
  • WorkChains: Emit deprecation warning if predicate if_/while_ does not return boolean-like [#5924]

Changes

  • DaemonClient: Refactor to include parsing of client response [#5850]
  • ORM: Remove Entity.from_backend_entity from the public API [#5447]
  • PbsproScheduler: Replace deprecated ppn tag with ncpus [#5910]
  • ProcessBuilder: Move _prune method to standalone utility [#5801]
  • verdi process list: Simplify the daemon load implementation [#5850]

Documentation

  • Add FAQ on MFA-enabled computers [#5887]
  • Add link to all metadata.options inputs in CalcJob submission example [#5912]
  • Add warning that Data constructor is not called on loading [#5898]
  • Add note on how to create a code that uses Conda environment [#5905]
  • Add --without-daemon flag to benchmark script [#5839]
  • Add alternative for conda env activation in submission script [#5950]
  • Clarify that process functions can be exposed in work chains [#5919]
  • Fix the intro/tutorial.md notebook [#5961]
  • Fix the overindentation of lists [#5915]
  • Hide the "Edit this page" button on the API reference pages [#5956]
  • Note that an entry point is required for using a data plugin [#5907]
  • Set use_login_shell=False for localhost in performance benchmark [#5847]
  • Small improvements to the benchmark script [#5854]
  • Use mamba instead of conda [#5891]

DevOps

  • Add devcontainer for easy integration with VSCode [#5913]
  • CI: Update sphinx-intl and install transifex CLI [#5908]
  • Fix the test-install workflow [#5873]
  • Pre-commit: Improve typing of aiida.schedulers.scheduler [#5849]
  • Pre-commit: Set yapf option allow_split_before_dict_value = false [#5931]
  • Process functions: Replace getfullargspec with signature [#5900]
  • Fixtures: Add argument use_subprocess to run_cli_command [#5846]
  • Fixtures: Change default use_subprocess=False for run_cli_command [#5846]
  • Tests: Use use_subprocess=False and suppress_warnings=True [#5846]
  • Tests: Fix bugs revealed by running with use_subprocess=True [#5846]
  • Typing: Annotate aiida/orm/utils/serialize.py [#5832]
  • Typing: Annotate aiida/tools/visualization/graph.py [#5821]
  • Typing: Use modern syntax for aiida.engine.processes.functions [#5900]

Dependencies

  • Add compatibility for ipython~=8.0 [#5888]
  • Bump cryptography from 36.0.0 to 39.0.1 [#5885]
  • Remove upper limit on werkzeug [#5904]
  • Update pre-commit requirement isort==5.12.0 [#5877]
  • Update requirement importlib-metadata~=4.13 [#5963]
  • Bump graphviz version to 0.19 [#5965]

New contributors

Thanks a lot to the following new contributors:

v2.2.2 - 2023-02-10

Fixes

  • Critical bug fix: Fix bug causing CalcJobs to except after restarting daemon [#5886]

v2.2.1 - 2022-12-22

Fixes

  • Critical bug fix: Revert the changes of PR [#5804] released with v2.2.0, which addressed a bug when mutating nodes during QueryBuilder.iterall. Unfortunately, the change caused changes performed by verdi commands (as well as changes made in verdi shell) to not be persisted to the database. [#5851]

v2.2.0 - 2022-12-13

This feature release comes with a significant feature and a number of improvements and fixes.

Live calculation job monitoring

In certain use cases, it is useful to stop a calculation job prematurely, before it finishes or the requested wallclock time runs out. Examples are calculations that seem to be going nowhere, where continuing would only waste computational resources. Up till now, a calculation job could only be stopped "manually", through verdi process kill. This release adds functionality that allows calculation jobs to be monitored automatically by the daemon and stopped when certain conditions are met.

Monitors can be attached to a calculation job through the monitors input namespace:

builder = load_code().get_builder()
builder.monitors = {
    'monitor_a': Dict({'entry_point': 'some.monitor'}),
    'monitor_b': Dict({'entry_point': 'some.other.monitor'}),
}

Monitors are referenced by their entry points with which they are registered in the aiida.calculations.monitors entry point group. A monitor is essentially a function that implements the following interface:

from aiida.orm import CalcJobNode
from aiida.transports import Transport

def monitor(node: CalcJobNode, transport: Transport) -> str | CalcJobMonitorResult | None:
    """Retrieve and inspect files in working directory of job to determine whether the job should be killed.

    :param node: The node representing the calculation job.
    :param transport: The transport that can be used to retrieve files from remote working directory.
    :returns: A string if the job should be killed, `None` otherwise.
    """

The transport allows the monitor to fetch files from the working directory of the calculation. If the job should be killed, the monitor simply returns a string explaining why, and the daemon will use that message when killing the job.
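
A minimal, hedged sketch of such a monitor (the output.log filename and the ERROR string are illustrative assumptions; a real monitor should also handle the file not yet existing):

from __future__ import annotations

import os
import tempfile

from aiida.orm import CalcJobNode
from aiida.transports import Transport

def monitor(node: CalcJobNode, transport: Transport) -> str | None:
    """Kill the job as soon as the output file reports an error."""
    with tempfile.NamedTemporaryFile('w+') as handle:
        # Copy the remote output file to a local temporary file and inspect it
        transport.getfile(os.path.join(node.get_remote_workdir(), 'output.log'), handle.name)
        handle.seek(0)
        if 'ERROR' in handle.read():
            return 'Detected "ERROR" in the output file: killing the job.'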

For more information and a complete description of the interface, please refer to the documentation. This functionality was accepted based on AEP 008 which provides more detail on the design choices behind this implementation.

Full list of changes

Features

  • CalcJob: Add functionality that allows live monitoring [#5659]
  • CLI: Add --raw option to verdi code list [#5763]
  • CLI: Add the -h short-hand flag for --help to verdi [#5792]
  • CLI: Add short option names for verdi code create [#5799]
  • StorageBackend: Add the initialise method [#5760]
  • Fixtures: Add support for Process inputs to submit_and_await [#5780]
  • Fixtures: Add aiida_computer_local and aiida_computer_ssh [#5786]
  • Fixtures: Modularize fixtures creating AiiDA test instance and profile [#5758]
  • Computer: Add the is_configured property [#5786]
  • Plugins: Add aiida.storage to ENTRY_POINT_GROUP_FACTORY_MAPPING [#5798]

Fixes

  • verdi run: Do not add pathlib.Path instance to sys.path [#5810]
  • Process functions: Restore support for dynamic nested input namespaces [#5808]
  • Process: properly cleanup when exception in state transition [#5697]
  • Process: Update outputs before updating node process state [#5813]
  • PsqlDosMigrator: refactor the connection handling [#5783]
  • PsqlDosBackend: Use transaction whenever mutating session state, fixing exception when storing a node or group during QueryBuilder.iterall [#5804]
  • InstalledCode: Fix bug in validate_filepath_executable for SSH [#5787]
  • WorkChain: Protect public methods from being overridden. Now if you accidentally override, for example, the run method of the WorkChain, an exception is raised instead of silently breaking the work chain [#5779]

Changes

  • Rename PsqlDostoreMigrator to PsqlDosMigrator [#5761]
  • ORM: Remove pymatgen version check in StructureData.set_pymatgen_structure [#5777]
  • StorageBackend: Remove recreate_user from _clear [#5772]
  • PsqlDosMigrator: Remove hardcoding of table name in database reset [#5781]

Dependencies

  • Dependencies: Add support for Python 3.11 [#5778]

Documentation

  • Docs: Correct command to enable verdi tab-completion for fish shell [#5784]
  • Docs: Fix transport & scheduler type in localhost setup [#5785]
  • Docs: Fix minor formatting issues in "How to run a code" [#5794]

DevOps

  • CI: Increase load limit for verdi to 0.5 seconds [#5773]
  • CI: Add workflow_dispatch trigger to nightly.yml [#5760]
  • ORM: Fix typing of aiida.orm.nodes.data.code module [#5830]
  • Pin version of setuptools as it breaks dependencies [#5782]
  • Tests: Use explicit aiida_profile_clean in process control tests [#5778]
  • Tests: Replace all use of aiida_profile_clean with aiida_profile where a clean profile is not necessary [#5814]
  • Tests: Deal with run_via_daemon returning None in RPN tests [#5813]
  • Make type-checking opt-out [#5811]

v2.1.2 - 2022-11-14

Fixes

  • BaseRestartWorkChain: Fix bug in _wrap_bare_dict_inputs introduced in v2.1.0 [#5757]

v2.1.1 - 2022-11-10

Fixes

  • Engine: Remove *args from the Process.submit method. [#5753] Positional arguments were silently ignored leading to a misleading error message. For example, if a user called
    inputs = {}
    self.submit(cls, inputs)
    instead of the intended
    inputs = {}
    self.submit(cls, **inputs)
    the error message merely stated that one of the required inputs was not defined. Now a TypeError is correctly raised, stating that positional arguments are not supported.
  • Process functions: Add serialization for Python base type defaults [#5744] Defining Python base types as defaults, such as:
    @calcfunction
    def function(a, b = 5):
        return a + b
    would raise an exception. The default is now automatically serialized, just as an input argument would be upon function call.
  • Process control: Reinstate process status for paused/killed processes [#5754] A regression introduced in aiida-core==2.1.0 caused the message Killed through 'verdi process list' to no longer be set on the process_status of the node.
  • QueryBuilder: use a nested session in iterall and iterdict [#5736] Modifying entities yielded by QueryBuilder.iterall and QueryBuilder.iterdict would raise an exception, for example:
    for [node] in QueryBuilder().append(Node).iterall():
        node.base.extras.set('some', 'extra')

v2.1.0 - 2022-11-07

This feature release comes with a number of new features as well as quite a few fixes of bugs and stability issues. Further down you will find a complete list of changes, after a short description of some of the most important changes:

Automatic input serialization in calculation and work functions

The inputs to calcfunctions and workfunctions are now automatically converted to AiiDA data types if they are one of the basic Python types (bool, dict, Enum, float, int, list or str). This means that code that looked like:

from aiida.engine import calcfunction
from aiida.orm import Bool, Float, Int, Str

@calcfunction
def function(switch, threshold, count, label):
    ...

function(Bool(True), Float(0.25), Int(10), Str('some-label'))

can now be simplified to:

from aiida.engine import calcfunction
from aiida.orm import Bool, Float, Int, Str

@calcfunction
def function(switch, threshold, count, label):
    ...

function(True, 0.25, 10, 'some-label')

Improved interface for creating codes

The Code data plugin was a single class that served two different types of codes: "remote" codes and "local" codes. These names "remote" and "local" have historically caused a lot of confusion. Likewise, using a single class Code for both implementations also has led to confusing interfaces.

To address this issue, the functionality has been split into two new classes, InstalledCode and PortableCode, which replace the "remote" and "local" code, respectively. The installed code represents an executable binary that is already pre-installed on some compute resource. The portable code represents a code (executable plus any additional required files) that is stored in AiiDA's storage and can be automatically transferred to any computer before being executed.

Creating a new instance of these new code types is easy:

from pathlib import Path
from aiida.orm import InstalledCode, PortableCode

installed_code = InstalledCode(
    label='installed-code',
    computer=load_computer('localhost'),
    filepath_executable='/usr/bin/bash'
)

portable_code = PortableCode(
    label='portable-code',
    filepath_files=Path('/some/path/code'),
    filepath_executable='executable.exe'
)

Codes can also be created through the new verdi command verdi code create. To specify the type of code to create, pass the corresponding entry point name as an argument. For example, to create a new installed code, invoke:

verdi code create core.code.installed

The options for each subcommand are automatically generated based on the code type, and so only options that are relevant to that code type will be prompted for.

The new code classes both subclass the aiida.orm.nodes.data.code.abstract.AbstractCode base class. This means that both InstalledCodes and PortableCodes can be used as the code input for CalcJobs without problems.

The old Code class remains supported for the time being, however, it is deprecated and will be removed at some point. The same goes for the verdi code setup command; please use verdi code create instead. Existing codes will be automatically migrated to either an InstalledCode or a PortableCode. It is strongly advised that you update any code that creates new codes to use these new plugin types.

Support for running code in containers

Support is added to run calculation jobs inside a container. A containerized code can be setup through the CLI:

verdi code create core.code.containerized \
    --label containerized \
    --image-name docker://alpine:3 \
    --filepath-executable /bin/sh \
    --engine-command "singularity exec --bind $PWD:$PWD {image_name}"

as well as through the API:

from aiida.orm import ContainerizedCode, load_computer
code = ContainerizedCode(
    computer=load_computer('some-computer'),
    filepath_executable='/bin/sh',
    image_name='docker://alpine:3',
    engine_command='singularity exec --bind $PWD:$PWD {image_name}'
).store()

In the example above we use the Singularity containerization technology. For more information on what containerization programs are supported and how to configure them, please refer to the documentation.

Control daemon and processes from the API

Up till now, the daemon and live processes could only easily be controlled through verdi daemon and verdi process, respectively. In this release, modules are added to provide the same functionality through the Python API.

Daemon API

The daemon can now be started and stopped through the DaemonClient which can be obtained through the get_daemon_client utility function:

from aiida.engine.daemon.client import get_daemon_client
client = get_daemon_client()

By default, this will give the daemon client for the current default profile. It is also possible to explicitly specify a profile:

client = get_daemon_client(profile='some-profile')

The daemon can be started and stopped through the client:

client.start_daemon()
assert client.is_daemon_running
client.stop_daemon(wait=True)

Process API

The functionality of verdi process to play, pause and kill processes is now made available through the aiida.engine.processes.control module. Processes can be played, paused or killed through the play_processes, pause_processes, and kill_processes functions, respectively. The processes to act upon are defined through their ProcessNode, which can be loaded using load_node.

from aiida.engine.processes.control import kill_processes, pause_processes, play_processes

processes = [load_node(<PK1>), load_node(<PK2>)]

pause_processes(processes)  # Pause the processes
play_processes(processes)  # Play them again
kill_processes(processes)  # Kill the processes

Instead of specifying an explicit list of processes, the functions also take the all_entries keyword argument:

pause_processes(all_entries=True)  # Pause all running processes

REST API can serve multiple profiles

Before, a single REST API could only serve data of a single profile at a time. This limitation has been removed and a single REST API instance can now serve data from all profiles of an AiiDA instance. To maintain backwards compatibility, the new functionality needs to be explicitly enabled through the configuration:

verdi config set rest_api.profile_switching True

After the REST API is restarted, it will now accept the profile query parameter, for example:

http://127.0.0.1:5000/api/v4/computers?profile=some-profile-name

If the specified profile is already loaded, the REST API functions exactly as without profile switching enabled. If another profile is specified, the REST API will first switch profiles before executing the request.

If the profile parameter is specified in a request and the REST API does not have profile switching enabled, a 400 response is returned.

Pluginable data storage backends

Warning: this is beta functionality. It is now possible to implement custom storage backends to control where all data of an AiiDA profile is stored. To provide a data storage plugin, one should implement the aiida.orm.implementation.storage_backend.StorageBackend interface. The default implementation provided by aiida-core is the aiida.storage.psql_dos.backend.PsqlDosBackend which uses a PostgreSQL database for the provenance graph and a disk-objectstore container for repository files.

Storage backend plugins should be registered in the new entry point group aiida.storage. The default storage backend PsqlDosBackend has the core.psql_dos entry point name.

The storage backend to be used for a profile can be specified using the --db-backend option in verdi setup and verdi quicksetup. The entry point of the selected backend is stored in the storage.backend key of a profile configuration:

{
    "profiles": {
        "profile-name": {
            "PROFILE_UUID": "",
            "storage": {
                "backend": "core.psql_dos",
                "config": {}
            },
            "process_control": {},
            "default_user_email": "aiida@localhost",
            "test_profile": false
        }
    }
}

At the moment, it is not quite clear whether the abstract interface StorageBackend properly abstracts everything that is needed to implement any storage backend. For the time being, then, it is advised to subclass the PsqlDosBackend and replace only the parts required for your use case, such as the file repository implementation.
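
A hedged sketch of that approach (MyFileRepositoryBackend is a hypothetical custom repository; which method to override depends on what you want to replace):

from aiida.storage.psql_dos.backend import PsqlDosBackend

class MyStorageBackend(PsqlDosBackend):
    """Behaves like core.psql_dos, but with a custom file repository implementation."""

    def get_repository(self):
        # Return the hypothetical custom repository instead of the default
        # disk-objectstore container used by PsqlDosBackend.
        return MyFileRepositoryBackend()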

Full list of changes

Features

  • Process: Add hook to customize the process_label attribute [#5713]
  • Add the ContainerizedCode data plugin [#5667]
  • API: Add the aiida.engine.processes.control module [#5630]
  • PluginVersionProvider: Add support for entry point strings [#5662]
  • verdi setup: Add the --profile-uuid option [#5673]
  • Process control: Add the revive_processes method [#5677]
  • Process functions: Add the get_source_code_function method [#4554]
  • CLI: Improve the quality of verdi code list output [#5750]
  • CLI: Add the verdi devel revive command [#5677]
  • CLI: verdi process status --max-depth [#5727]
  • CLI: verdi setup/quicksetup store autofill user info early [#5729]
  • CLI: Add the devel launch-add command [#5733]
  • CLI: Make filename in verdi node repo cat optional for SinglefileData [#5747]
  • CLI: Add the verdi devel rabbitmq command group [#5718]
  • API: Add function to start the daemon [#5625]
  • BaseRestartWorkChain: add the get_outputs hook [#5618]
  • CalcJob: extend retrieve_list syntax with depth=None [#5651]
  • CalcJob: allow wildcards in stash.source_list paths [#5601]
  • Add global config option rest_api.profile_switching [#5054]
  • REST API: make the profile configurable as request parameter [#5054]
  • ProcessFunction: Automatically serialize Python base type inputs [#5688]
  • BaseRestartWorkChain: allow to override priority in handler_overrides [#5546]
  • ORM: add entry_point classproperty to Node and Group [#5437]
  • Add the aiida.storage entry point group [#5501]
  • Add the config option storage.sandbox [#5501]
  • Add the InstalledCode and PortableCode data plugins [#5510]
  • CLI: Add the verdi code create command group [#5510]
  • CLI: Add the DynamicEntryPointCommandGroup command group [#5510]
  • Add a client to connect to the RabbitMQ Management HTTP API [#5718]
  • LsfScheduler: add support for num_machines [#5153]
  • JobResource: add the accepts_default_memory_per_machine [#5642]
  • AbstractCode: add abstraction methods for command line parameters [#5664]
  • ArithmeticAddCalculation: Add the metadata.options.sleep input [#5663]
  • DaemonClient: add the get_env method [#5631]
  • Tests: Make daemon fixtures available to plugin packages [#5701]
  • verdi plugin list: Show which exit codes invalidate cache [#5710]
  • verdi plugin list: Show full help for input and output ports [#5711]

Fixes

  • ArrayData: replace nan and inf with None when dumping to JSON [#5613]
  • Archive: add missing migration of transport entry points [#5604]
  • BaseRestartWorkChain: fix handler_overrides ignoring enabled=False [#5598]
  • CLI: allow setting options for config without profiles [#5544]
  • CLI: normalize use of colors [#5547]
  • Config: fix bug in downgrade past version 6 [#5528]
  • DaemonClient: close CircusClient after call [#5631]
  • Engine: Do not call serializer for None values [#5694]
  • Engine: Do not let DuplicateSubcriberError except a Process [#5715]
  • ORM: raise when trying to pickle instance of Entity [#5549]
  • ORM: Return None in get_function_source_code instead of excepting [#5730]
  • Fix get_entry_point not raising even for duplicate entry points [#5531]
  • Fix: reference to command in message for verdi storage maintain [#5558]
  • Fix: is_valid_cache setter for ProcessNodes [#5583]
  • Fix exception when importing an archive into a profile with many nodes [#5740]
  • Profile: make definition of daemon filepaths dynamic [#5631]
  • Fixtures: Fix bug in reset of empty_config fixture [#5717]
  • PsqlDosBackend: ensure sqla sessions are garbage-collected on close [#5728]
  • TrajectoryData: Fix bug in get_step_data [#5734]
  • ProfileManager: restart daemon in clear_profile [#5751]

Changes

  • Mark relevant Process exit codes as invalidates_cache=True [#5709]
  • TemplatereplacerCalculation: Change exit codes to be in 300 range [#5709]
  • Add the prefix core. to all storage entry points [#5501]
  • CalcJob: Fully abstract interaction with AbstractCode in presubmit [#5666]
  • CLI: make label the default group list order in verdi group list [#5523]
  • Config: add migration to properly prefix storage backend [#5501]
  • Move query utils from aiida.cmdline to aiida.tools [#5630]
  • SandboxFolder: decouple the location from the profile [#5496]
  • TemplatereplacerDoublerParser: rename and generalize implementation [#5669]
  • Process: Allow None for input ports that are not required [#5722]

Dependencies

  • RabbitMQ: Remove support for v3.5 and older [#5718]
  • Relax wrapt requirement [#5607]
  • Set upper limit werkzeug<2.2 [#5606]
  • Update requirement click~=8.1 [#5504]

Deprecations

  • Deprecate Profile.repository_path [#5516]
  • Deprecate: verdi code setup and CodeBuilder [#5510]
  • Deprecate the method aiida.get_strict_version [#5512]
  • Remove use of legacy Code [#5510]

Documentation

  • Add section on basic performance benchmark with automated benchmark script [#5724]
  • Add -U flag to PostgreSQL database backup command [#5550]
  • Clarify excepted and killed calculations are not cached [#5525]
  • Correct snippet for workchain context nested keys [#5551]
  • Plugin package setup add PEP 621 example [#5626]
  • Remove note on disk space for caching [#5534]
  • Remove explicit release tag in Docker image name [#5671]
  • Remove example REST API extension with POST requests [#5737]
  • Resubmit a Process from a ProcessNode [#5579]

Devops

  • Add a notification for nightly workflow on fail [#5605]
  • CI: Remove --use-feature flag in pip install of CI [#5703]
  • Fixtures: Add started_daemon_client and stopped_daemon_client [#5631]
  • Fixtures: Add the entry_points fixture to dynamically add and remove entry points [#5745]
  • Refactor: Process extract CalcJob specific input handling from Process [#5539]
  • Refactor: remove unnecessary use of tempfile.mkdtemp [#5639]
  • Refactor: Remove internal use of various deprecated resources [#5716]
  • Refactor: Turn aiida.manage.external.rmq into a package [#5718]
  • Tests: remove legacy tests/utils/configuration.py [#5500]
  • Tests: fix the RPN work chains for the nightly build [#5529]
  • Tests: Manually stop daemon after verdi devel revive test [#5689]
  • Tests: Add verbose info if submit_and_wait times out [#5689]
  • Tests: Do not set default memory for localhost fixture [#5689]
  • Tests: Suppress RabbitMQ and developer version warnings [#5689]
  • Tests: Add the EntryPointManager exposed as entry_points fixture [#5656]
  • Tests: Only reset database connection at end of suite [#5641]
  • Tests: Suppress logging and warnings from temporary profile fixture [#5702]

v2.0.4 - 2022-09-22

Full changelog

Fixes

  • Engine: Fix bug that allowed non-storable inputs to be passed to process [#5532]
  • Engine: Fix bug when caching from process with nested outputs [#5538]
  • Archive: Fix bug in archive creation after packing of file repository [#5570]
  • QueryBuilder: apply escape \ in like and ilike for a sqlite backend, such as export archives [#5553]
  • QueryBuilder: Fix bug in distinct queries always projecting the first entity, even if not projected explicitly [#5654]
  • CalcJob: fix bug in local_copy_list provenance exclusion [#5648]
  • Repository.copy_tree: omit subdirectories from path when copying [#5648]
  • Docs: Add intersphinx aliases for __all__ imports. Now the shortcut imports can also be used in third-party packages (e.g. aiida.orm.nodes.node.Node as well as aiida.orm.Node) [#5657]

v2.0.3 - 2022-08-09

Full changelog

Update of the Dockerfile base image (aiidateam/aiida-prerequisites) to version 0.6.0.

v2.0.2 - 2022-07-13

Full changelog

Fixes

  • REST API: treat false as False in URL parsing [#5573]
  • REST API: add support for byte streams through a custom JSON encoder [#5576]

v2.0.1 - 2022-04-28

Full changelog

Dependencies

  • Fix incompatibility with click>=8.1 and require click==8.1 as a minimum by @sphuber in [#5504]

v2.0.0 - 2022-04-27

Full changelog

This release finalises the v2.0.0b1 changes.

Node namespace restructuring ♻️

:::{note} The restructuring is fully backwards-compatible, and existing methods/attributes will continue to work, until aiida-core v3.0.

Deprecations warnings are also currently turned off by default. To identify these deprecations in your code base (for example when running unit tests), activate the AIIDA_WARN_v3 environmental variable:

export AIIDA_WARN_v3=1

:::

The Node class (and thus its subclasses) has many methods and attributes in its public namespace. This has been noted as being a problem for those using auto-completion, since it makes it difficult to select suitable methods and attributes.

These methods/attributes have now been partitioned into "sub-namespaces" for specific purposes:

Node.base.attributes : Interface to the attributes of a node instance.

Node.base.caching : Interface to control caching of a node instance.

Node.base.comments : Interface for comments of a node instance.

Node.base.extras : Interface to the extras of a node instance.

Node.base.links : Interface for links of a node instance.

Node.base.repository : Interface to the file repository of a node instance.

:::{dropdown} Full list of re-naming

| Current name | New name |
| --- | --- |
| Collection | Deprecated, use NodeCollection directly |
| add_comment | Node.base.comments.add |
| add_incoming | Node.base.links.add_incoming |
| attributes | Node.base.attributes.all |
| attributes_items | Node.base.attributes.items |
| attributes_keys | Node.base.attributes.keys |
| check_mutability | Node._check_mutability_attributes |
| clear_attributes | Node.base.attributes.clear |
| clear_extras | Node.base.extras.clear |
| clear_hash | Node.base.caching.clear_hash |
| copy_tree | Node.base.repository.copy_tree |
| delete_attribute | Node.base.attributes.delete |
| delete_attribute_many | Node.base.attributes.delete_many |
| delete_extra | Node.base.extras.delete |
| delete_extra_many | Node.base.extras.delete_many |
| delete_object | Node.base.repository.delete_object |
| erase | Node.base.repository.erase |
| extras | Node.base.extras.all |
| extras_items | Node.base.extras.items |
| extras_keys | Node.base.extras.keys |
| get | Deprecated, use Node.objects.get |
| get_all_same_nodes | Node.base.caching.get_all_same_nodes |
| get_attribute | Node.base.attributes.get |
| get_attribute_many | Node.base.attributes.get_many |
| get_cache_source | Node.base.caching.get_cache_source |
| get_comment | Node.base.comments.get |
| get_comments | Node.base.comments.all |
| get_extra | Node.base.extras.get |
| get_extra_many | Node.base.extras.get_many |
| get_hash | Node.base.caching.get_hash |
| get_incoming | Node.base.links.get_incoming |
| get_object | Node.base.repository.get_object |
| get_object_content | Node.base.repository.get_object_content |
| get_outgoing | Node.base.links.get_outgoing |
| get_stored_link_triples | Node.base.links.get_stored_link_triples |
| glob | Node.base.repository.glob |
| has_cached_links | Node.base.caching.has_cached_links |
| id | Deprecated, use pk |
| is_created_from_cache | Node.base.caching.is_created_from_cache |
| is_valid_cache | Node.base.caching.is_valid_cache |
| list_object_names | Node.base.repository.list_object_names |
| list_objects | Node.base.repository.list_objects |
| objects | collection |
| open | Node.base.repository.open |
| put_object_from_file | Node.base.repository.put_object_from_file |
| put_object_from_filelike | Node.base.repository.put_object_from_filelike |
| put_object_from_tree | Node.base.repository.put_object_from_tree |
| rehash | Node.base.caching.rehash |
| remove_comment | Node.base.comments.remove |
| repository_metadata | Node.base.repository.metadata |
| repository_serialize | Node.base.repository.serialize |
| reset_attributes | Node.base.attributes.reset |
| reset_extras | Node.base.extras.reset |
| set_attribute | Node.base.attributes.set |
| set_attribute_many | Node.base.attributes.set_many |
| set_extra | Node.base.extras.set |
| set_extra_many | Node.base.extras.set_many |
| update_comment | Node.base.comments.update |
| validate_incoming | Node.base.links.validate_incoming |
| validate_outgoing | Node.base.links.validate_outgoing |
| validate_storability | Node._validate_storability |
| verify_are_parents_stored | Node._verify_are_parents_stored |
| walk | Node.base.repository.walk |

:::

IPython integration improvements 👌

The aiida IPython magic commands are now available to load via:

%load_ext aiida

As well as the previous %aiida magic command, to load a profile, one can also use the %verdi magic command. This command runs the verdi CLI using the currently loaded profile of the IPython/Jupyter session.

%verdi status

See the Basic Tutorial for example usage.

New SqliteTempBackend

The SqliteTempBackend utilises an in-memory SQLite database to store data, allowing it to be transiently created/destroyed within a single Python session, without the need for PostgreSQL.

As such, it is useful for demonstrations and testing purposes, whereby no persistent storage is required.

To load a temporary profile, you can use the following code:

from aiida import load_profile
from aiida.storage.sqlite_temp import SqliteTempBackend

profile = load_profile(
    SqliteTempBackend.create_profile(
        'myprofile',
        options={
            'runner.poll.interval': 1
        },
        debug=False
    ),
)

See the Basic Tutorial for example usage.

Key Pull Requests

Below is a list of some key pull requests that have been merged into version 2.0.0:

  • Node namespace re-structuring:

    • 🔧 MAINTAIN: Add warn_deprecation function, Node.base, and move NodeRepositoryMixin -> NodeRepository by @chrisjsewell in #5472
    • ♻️ REFACTOR: EntityAttributesMixin -> NodeAttributes by @chrisjsewell in #5442
    • ♻️ REFACTOR: Move methods to Node.comments by @chrisjsewell in #5446
    • ♻️ REFACTOR: EntityExtrasMixin -> EntityExtras by @chrisjsewell in #5445
    • ♻️ REFACTOR: Move link related methods to Node.base.links by @sphuber in #5480
    • ♻️ REFACTOR: Move caching related methods to Node.base.caching by @sphuber in #5483
  • Storage:

    • ✨ NEW: Add SqliteTempBackend by @chrisjsewell in #5448
    • 👌 IMPROVE: Move default user caching to StorageBackend by @chrisjsewell in #5460
    • 👌 IMPROVE: Add JSON filtering for SQLite backends by @chrisjsewell in #5448
  • ORM:

    • 👌 IMPROVE: StructureData: allow to be initialised without a specified cell by @ltalirz in #5341
  • Processing:

    • 👌 IMPROVE: Allow engine.run to work without RabbitMQ by @chrisjsewell in #5448
    • 👌 IMPROVE: JobTemplate: change CodeInfo to JobTemplateCodeInfo in codes_info by @unkcpz in #5350
      • This is required for a containerized code implementation
    • 👌 IMPROVE: Add option to use double quotes for Code and Computer CLI arguments by @unkcpz in #5478
  • Transport and Scheduler:

    • 👌 IMPROVE: SlurmScheduler: Parse out-of-walltime and out-of-memory errors from stderr by @sphuber in #5458
    • 👌 IMPROVE: CalcJob: always call Scheduler.parse_output by @sphuber in #5458
    • 👌 IMPROVE: Computer: fallback on transport for get_minimum_job_poll_interval default by @sphuber in #5457
  • IPython:

    • ✨ NEW: Add %verdi IPython magic by @chrisjsewell in #5448
  • Dependencies:

    • ♻️ REFACTOR: drop the python-dateutil library by @sphuber

(release/2.0.0b1)=

v2.0.0b1 - 2022-03-15

Full changelog

The version 2 release of aiida-core largely focusses on major improvements to the design of data storage within AiiDA, as well as updates to core dependencies and removal of deprecated APIs.

Assuming users have already addressed deprecation warnings from aiida-core v1.6.x, there should be limited impact on existing code. For plugin developers, the AiiDA 2.0 plugin migration guide provides a step-by-step guide on how to update their plugins.

For existing profiles and archives, a migration will be required, before they are compatible with the new version.

:::{tip} Before updating your aiida-core installation, it is advisable to make sure you create a full backup of your profiles, using the current version of aiida-core you have installed. For backup instructions, using aiida-core v1.6.7, see this documentation. :::

Python support updated to 3.8 - 3.10 ⬆️

Following the NEP 029 timeline, support for Python 3.7 is dropped as of December 26 2021, and support for Python 3.10 is added.

Plugin entry point updates 🧩

AiiDA's use of entry points, to allow plugins to extend the functionality of AiiDA, is described in the plugins topic section.

The use of reentry scan, for loading plugin entry points, is no longer necessary.

Use of the reentry dependency has been replaced by the built-in importlib.metadata library. This library requires no additional loading step.

All entry points provided by aiida-core now start with a core. prefix, to make their origin more explicit and respect the naming guidelines of entry points in the AiiDA ecosystem. The old names are still supported so as to not suddenly break existing code based on them, but they have now been deprecated. For example:

from aiida.plugins import DataFactory
Int = DataFactory('int')  # Old name
Int = DataFactory('core.int')  # New name

Note that entry point names are also used on the command line. For example:

$ verdi computer setup -L localhost -T local -S direct
# now changed to
$ verdi computer setup -L localhost -T local -S core.direct

Improvements to the AiiDA storage architecture ♻️

Full details on the AiiDA storage architecture are available in the storage architecture section.

The storage refactor incorporates four major changes:

  • The django and sqlalchemy storage backends have been merged into a single psql_dos backend (PostgreSQL + Disk-Objectstore).

  • The file system node repository has been replaced with an object store implementation.

    • The object store automatically deduplicates files, and allows for the compression of many objects into a single file, thus significantly reducing the number of files on the file system and memory utilisation (by orders of magnitude).
    • Note, to make full use of object compression, one should periodically run verdi storage maintain.
    • See the repository design section for details.
  • Command-line interaction with a profile's storage has been moved from verdi database to verdi storage.

  • The AiiDA archive format has been redesigned as the sqlite_zip storage backend.

    • See the sqlite_zip storage format for details.
    • The new format allows for streaming of data during exports and imports, significantly reducing both the time and memory utilisation of these actions.
    • The archive can now be loaded directly as a (read-only) profile, without the need to import it first, see this Jupyter Notebook tutorial.

The storage redesign also allows for profile switching, within the same Python process, and profile access within a context manager. For example:

from aiida import load_profile, profile_context, orm

with profile_context('my_profile_1'):
    # The profile will be loaded within the context
    node_from_profile_1 = orm.load_node(1)
    # then the profile will be unloaded automatically

# load a global profile
load_profile('my_profile_2')
node_from_profile_2 = orm.load_node(1)

# switch to a different global profile
load_profile('my_profile_3', allow_switch=True)
node_from_profile_3 = orm.load_node(1)

See How to interact with AiiDA for more details.

On first using aiida-core v2.0, your AiiDA configuration will be automatically migrated to the new version (this can be reverted by verdi config downgrade). To update existing profiles and archives to the new storage formats, simply use verdi storage migrate and verdi archive migrate, respectively.

:::{important} The migration of large storage repositories is a potentially time-consuming process. It may take several hours to complete, depending on the size of the repository. It is also advisable to make a full manual backup of any AiiDA setup with important data: see the installation management section for more information.

See also this testing of profile migrations, for some indicative timings. :::

Improvements to the AiiDA ORM 👌

Node repository

In line with the storage improvements, {class}~aiida.orm.Node methods associated with the repository have some backwards-incompatible changes:

:::{dropdown} Node repository method changes

Altered:

  • FileType: moved from aiida.orm.utils.repository to aiida.repository.common
  • File: moved from aiida.orm.utils.repository to aiida.repository.common
  • File: changed from namedtuple to class
  • File: can no longer be iterated over
  • File: type attribute was renamed to file_type
  • Node.put_object_from_tree: path argument was renamed to filepath
  • Node.put_object_from_file: path argument was renamed to filepath
  • Node.put_object_from_tree: key argument was renamed to path
  • Node.put_object_from_file: key argument was renamed to path
  • Node.put_object_from_filelike: key argument was renamed to path
  • Node.get_object: key argument was renamed to path
  • Node.get_object_content: key argument was renamed to path
  • Node.open: key argument was renamed to path
  • Node.list_objects: key argument was renamed to path
  • Node.list_object_names: key argument was renamed to path
  • SinglefileData.open: key argument was renamed to path
  • Node.open: can no longer be called without context manager
  • Node.open: only mode r and rb are supported, use put_object_from_ methods instead
  • Node.get_object_content: only mode r and rb are supported
  • Node.put_object_from_tree: argument contents_only was removed
  • Node.put_object_from_tree: argument force was removed
  • Node.put_object_from_file: argument force was removed
  • Node.put_object_from_filelike: argument force was removed
  • Node.delete_object: argument force was removed

Added:

  • Node.walk
  • Node.copy_tree
  • Node.is_valid_cache setter
  • Node.objects.iter_repo_keys

Additionally, Node.open should always be used as a context manager, for example:

with node.open('filename.txt') as handle:
    content = handle.read()

:::

QueryBuilder

When using the {class}~aiida.orm.QueryBuilder to query the database, the following changes have been made:

  • The Computer's name field has been replaced with label (as previously deprecated in v1.6)
  • The QueryBuilder.queryhelp attribute is deprecated in favour of the as_dict (and from_dict) methods
  • The QueryBuilder.first method now accepts the flat argument, which returns a single item, instead of a list of one item, if only a single projection is defined.

For example:

from aiida.orm import QueryBuilder, Computer
query = QueryBuilder().append(Computer, filters={'label': 'localhost'}, project=['label']).as_dict()
QueryBuilder.from_dict(query).first(flat=True)  # -> 'localhost'

For further information, see How to find and query for data.

Dict usage

The {class}~aiida.orm.Dict class has been updated to support more native dict behaviour:

  • Initialisation can now use Dict({'a': 1}), instead of Dict(dict={'a': 1}). This is also the case for List([1, 2]).
  • Equality (==/!=) comparisons now compare the dictionaries, rather than the UUIDs
  • The contains (in) operator now returns True if the dictionary contains the key
  • The items method iterates over the (key, value) pairs

For example:

from aiida.orm import Dict

d1 = Dict({'a': 1})
d2 = Dict({'a': 1})

assert d1.uuid != d2.uuid
assert d1 == d2
assert not d1 != d2

assert 'a' in d1

assert list(d1.items()) == [('a', 1)]

New data types

Two new built-in data types have been added:

{class}~aiida.orm.EnumData : A data plugin that wraps a Python enum.Enum instance.

{class}~aiida.orm.JsonableData : A data plugin that allows one to easily wrap existing objects that are JSON-able (via an as_dict method).

See the data types section for more information.
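
A small, hedged example of the EnumData plugin (the Color enum is made up for illustration):

import enum
from aiida.orm import EnumData

class Color(enum.Enum):
    RED = 1
    BLUE = 2

node = EnumData(Color.RED).store()
assert node.get_member() == Color.RED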

Improvements to the AiiDA process engine 👌

CalcJob API

A number of minor improvements have been made to the CalcJob API:

  • Both numpy arrays and Enum instances can now be serialized on process checkpoints.
  • The CalcJob.spec.metadata.options.rerunnable option allows specifying whether the calculation can be rerun or requeued (dependent on the scheduler). Note, this should only be applied for idempotent codes.
  • The CalcJob.spec.metadata.options.environment_variables_double_quotes option allows for double-quoting of environment variable declarations. In particular, this allows for use of the $ character in the environment variable name, e.g. export MY_FILE="$HOME/path/my_file".
  • CalcJob.local_copy_list now allows for specifying entire directories to be copied to the local computer, in addition to individual files. Note that the directory itself won't be copied, just its contents.
  • WorkChain.to_context now allows .-delimited namespacing, which generates nested dictionaries. See Nested context keys for more information.
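
As a hedged sketch of how some of these options might be set on a process builder (the code label is a placeholder, and only the options mentioned above are taken from this changelog):

```python
from aiida import orm
from aiida.engine import submit

code = orm.load_code('some-code@some-computer')  # placeholder code label
builder = code.get_builder()

# Only set this for codes whose runs are idempotent
builder.metadata.options.rerunnable = True

# Double-quote environment variable declarations, allowing `$` in the value,
# producing e.g. export MY_FILE="$HOME/path/my_file"
builder.metadata.options.environment_variables = {'MY_FILE': '$HOME/path/my_file'}
builder.metadata.options.environment_variables_double_quotes = True

node = submit(builder)
```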

Importing existing computations

The new CalcJobImporter class has been added, to define importers for computations completed outside of AiiDA. These can help onboard new users to your AiiDA plugin. For more information, see Writing importers for existing computations.

Scheduler plugins

A plugin's implementation of Scheduler._get_submit_script_header should now use Scheduler._get_submit_script_environment_variables to format environment variable declarations, rather than handling this itself. See the exemplar changes in #5283.

The Scheduler.get_valid_schedulers() method has also been removed; use get_entry_point_names('aiida.schedulers') instead (see {func}~aiida.plugins.entry_point.get_entry_point_names).

See Scheduler plugins for more information.

Transport plugins

The SshTransport now supports the SSH ProxyJump option, for tunnelling through other SSH hosts. See How to setup SSH connections for more information.

Transport plugins now also support transferring bytes (rather than only Unicode strings) in the stdout/stderr of "remote" commands (see #3787). The required changes for transport plugins are:

  • rename the exec_command_wait function in your plugin implementation to exec_command_wait_bytes
  • ensure the method signature follows {meth}~aiida.transports.transport.Transport.exec_command_wait_bytes, and that stdin accepts a bytes object.
  • return bytes for stdout and stderr (most probably internally you are already getting bytes - just do not decode them to strings)

For an exemplar implementation, see {meth}~aiida.transports.plugins.local.LocalTransport.exec_command_wait_bytes, or see Transport plugins for more information.
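
A minimal, hypothetical sketch of such a method on a custom transport plugin (the subprocess-based body is purely illustrative and is not the actual LocalTransport implementation):

```python
import subprocess

from aiida.transports import Transport


class MyTransport(Transport):  # remaining abstract methods omitted for brevity
    def exec_command_wait_bytes(self, command, stdin=None, **kwargs):
        """Execute a command and return (retval, stdout, stderr), with stdout/stderr as bytes."""
        proc = subprocess.Popen(
            command,
            shell=True,
            stdin=subprocess.PIPE,
            stdout=subprocess.PIPE,
            stderr=subprocess.PIPE,
        )
        stdout, stderr = proc.communicate(input=stdin)  # `stdin` must be bytes (or None)
        return proc.returncode, stdout, stderr  # do not decode: keep bytes
```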

The Transport.get_valid_transports() method has also been removed, use get_entry_point_names('aiida.transports') instead (see {func}~aiida.plugins.entry_point.get_entry_point_names).
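
For example, to list the installed scheduler and transport plugins through the entry point machinery (the output shown in the comments is indicative only):

```python
from aiida.plugins.entry_point import get_entry_point_names

print(get_entry_point_names('aiida.schedulers'))  # e.g. ['core.direct', 'core.slurm', ...]
print(get_entry_point_names('aiida.transports'))  # e.g. ['core.local', 'core.ssh']
```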

Improvements to the AiiDA command-line 👌

The AiiDA command-line interface (CLI) can now be accessed as both verdi and /path/to/bin/python -m aiida.

The underlying dependency for this CLI, click, has been updated to version 8, which provides built-in tab-completion support and replaces the old click-completion. Completion works the same as before, except that the string to put in the activation script to enable it is now shell-dependent. See Activating tab-completion for more information.

Logging for the CLI has been updated, to standardise its use across all CLI commands. This means that all commands include the option:

  -v, --verbosity [notset|debug|info|report|warning|error|critical]
                                  Set the verbosity of the output.

By default the verbosity is set to REPORT (see verdi config list), which relates to using Logger.report, as defined in {func}~aiida.common.log.report.

The following specific changes and improvements have been made to the CLI commands:

verdi storage (replaces verdi database): This command group replaces the now-deprecated verdi database command group, reflecting that it interacts with the full profile storage and not just the database. verdi storage info provides information about the entities contained for a profile, and verdi storage maintain has been added to allow maintenance of the storage, for example to optimise its size.

verdi archive version and verdi archive info (replace verdi archive inspect): This change synchronises the commands with the new verdi storage version and verdi storage info commands.

verdi group move-nodes: This command moves nodes from a source group to a target group (removing them from one and adding them to the other).

verdi code setup: The order of prompts in interactive mode has changed slightly, and the uniqueness of labels is now validated for both remote and local codes.

verdi code test: Runs tests for a given code to check whether it is usable, including whether remote executable files are available.

See AiiDA Command Line for more information.

Development improvements

The build tool for aiida-core has been changed from setuptools to flit. This allows for the project metadata to be fully specified in the pyproject.toml file, using the PEP 621 format. Note, editable installs (using the -e flag for pip install) of aiida-core now require pip>=21.

Type annotations have been added to most of the code base. Plugin developers can use mypy to check their code against the new type annotations.

All module level imports are now defined explicitly in __all__. See Overview of public API for more information.

The aiida.common.json module is now deprecated. Use the json standard library instead.

Changes to the plugin test fixtures 🧪

The deprecated AiidaTestCase class has been removed, in favour of the AiiDA pytest fixtures, which can be loaded in your conftest.py using:

pytest_plugins = ['aiida.manage.tests.pytest_fixtures']

The fixtures clear_database, clear_database_after_test and clear_database_before_test are now deprecated, in favour of the aiida_profile_clean fixture, which ensures (before the test) that the default profile is reset with clean storage and that all previous resources are closed. If you only require the profile to be reset before a class of tests, then you can use aiida_profile_clean_class.
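
A minimal sketch of a test using the new fixture (the test body itself is purely illustrative):

```python
from aiida import orm


def test_store_node(aiida_profile_clean):
    """The profile storage is reset before this test runs."""
    node = orm.Int(5).store()
    assert node.value == 5
```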

Key Pull Requests

Below is a list of some key pull requests that have been merged into version 2.0.0b1:

  • Storage and migrations:

    • ♻️ REFACTOR: Implement the new file repository by @sphuber in #4345
    • ♻️ REFACTOR: New archive format by @chrisjsewell in #5145
    • ♻️ REFACTOR: Remove QueryManager by @chrisjsewell in #5101
    • ♻️ REFACTOR: Fully abstract QueryBuilder by @chrisjsewell in #5093
    • ✨ NEW: Add Backend bulk methods by @chrisjsewell in #5171
    • ⬆️ UPDATE: SQLAlchemy v1.4 (v2 API) by @chrisjsewell in #5103 and #5122
    • 👌 IMPROVE: Configuration migrations by @chrisjsewell in #5319
    • ♻️ REFACTOR: Remove Django storage backend by @chrisjsewell in #5330
    • ♻️ REFACTOR: Move archive backend to aiida/storage by @chrisjsewell in #5375
    • 👌 IMPROVE: Use sqlalchemy.func for JSONB QB filters by @ltalirz in #5393
    • ✨ NEW: Add Mechanism to lock profile access by @ramirezfranciscof in #5270
    • ✨ NEW: Add verdi storage CLI by @ramirezfranciscof in #4965 and #5156
  • ORM API:

    • ♻️ REFACTOR: Add the core. prefix to all entry points by @sphuber in #5073
    • 👌 IMPROVE: Replace InputValidationError with ValueError and TypeError by @sphuber in #4888
    • 👌 IMPROVE: Add Node.walk method to iterate over repository content by @sphuber in #4935
    • 👌 IMPROVE: Add Node.copy_tree method by @sphuber in #5114
    • 👌 IMPROVE: Add Node.is_valid_cache setter property by @sphuber in #5114
    • 👌 IMPROVE: Add Node.objects.iter_repo_keys by @chrisjsewell in #5114
    • 👌 IMPROVE: Allow storing Decimal in Node.attributes by @dev-zero in #4964
    • 🐛 FIX: Initialising a Node with a User by @chrisjsewell in #5114
    • 🐛 FIX: Deprecate double underscores in LinkManager contains by @sphuber in #5067
    • ♻️ REFACTOR: Rename name field of Computer to label by @sphuber in #4882
    • ♻️ REFACTOR: QueryBuilder.queryhelp -> QueryBuilder.as_dict by @chrisjsewell in #5081
    • 👌 IMPROVE: Add AuthInfo joins to QueryBuilder by @chrisjsewell in #5195
    • 👌 IMPROVE: QueryBuilder.first add flat keyword by @sphuber in #5410
    • 👌 IMPROVE: Add Computer.default_memory_per_machine attribute by @yakutovicha in #5260
    • 👌 IMPROVE: Add Code.validate_remote_exec_path method to check executable by @sphuber in #5184
    • 👌 IMPROVE: Allow source to be passed as a keyword to Data.__init__ by @sphuber in #5163
    • 👌 IMPROVE: Dict.__init__ and List.__init__ by @mbercx in #5165
    • ‼️ BREAKING: Compare Dict nodes by content by @mbercx in #5251
    • 👌 IMPROVE: Implement the Dict.__contains__ method by @sphuber in #5251
    • 👌 IMPROVE: Implement Dict.items() method by @mbercx in #5251
    • 🐛 FIX: BandsData.show_mpl allow NaN values by @PhilippRue in #5024
    • 🐛 FIX: Replace KeyError with AttributeError in TrajectoryData methods by @Crivella in #5015
    • ✨ NEW: EnumData data plugin by @sphuber in #5225
    • ✨ NEW: JsonableData data plugin by @sphuber in #5017
    • 👌 IMPROVE: Register List class with to_aiida_type dispatch by @sphuber in #5142
    • 👌 IMPROVE: Register EnumData class with to_aiida_type dispatch by @sphuber in #5314
  • Processing:

    • ✨ NEW: CalcJob.get_importer() to import existing calculations, run outside of AiiDA by @sphuber in #5086
    • ✨ NEW: ProcessBuilder._repr_pretty_ ipython representation by @mbercx in #4970
    • 👌 IMPROVE: Allow Enum types to be serialized on ProcessNode.checkpoint by @sphuber in #5218
    • 👌 IMPROVE: Allow numpy arrays to be serialized on ProcessNode.checkpoint by @greschd in #4730
    • 👌 IMPROVE: Add Calcjob.spec.metadata.options.rerunnable to requeue/rerun calculations by @greschd in #4707
    • 👌 IMPROVE: Add Calcjob.spec.metadata.options.environment_variables_double_quotes to escape environment variables by @unkcpz in #5349
    • 👌 IMPROVE: Allow directories in CalcJob.local_copy_list by @sphuber in #5115
    • 👌 IMPROVE: Add support for . namespacing in the keys for WorkChain.to_context by @dev-zero in #4871
    • 👌 IMPROVE: Handle namespaced outputs in BaseRestartWorkChain by @unkcpz in #4961
    • 🐛 FIX: Nested namespaces in ProcessBuilderNamespace by @sphuber in #4983
    • 🐛 FIX: Ensure ProcessBuilder instances do not interfere by @sphuber in #4984
    • 🐛 FIX: Raise when Process.exposed_outputs gets non-existing namespace by @sphuber in #5265
    • 🐛 FIX: Catch AttributeError for unloadable identifier in ProcessNode.is_valid_cache by @sphuber in #5222
    • 🐛 FIX: Handle CalcInfo.codes_run_mode when CalcInfo.codes_info contains multiple codes by @unkcpz in #4990
    • 🐛 FIX: Check for recycled circus PID by @dev-zero in #5086
  • Scheduler/Transport:

    • 👌 IMPROVE: Specify abstract methods on Transport by @chrisjsewell in #5242
    • ✨ NEW: Add support for SSH proxy_jump by @dev-zero in #4951
    • 🐛 FIX: Daemon hang when passing None as job_id by @ramirezfranciscof in #4967
    • 🐛 FIX: Avoid deadlocks when retrieving stdout/stderr via SSH by @giovannipizzi in #3787
    • 🐛 FIX: Use sanitised variable name in SGE scheduler job title by @mjclarke94 in #4994
    • 🐛 FIX: listdir method with pattern for SSH by @giovannipizzi in #5252
    • 👌 IMPROVE: DirectScheduler: use num_cores_per_mpiproc if defined in resources by @sphuber in #5126
    • 👌 IMPROVE: Add abstract generation of submit script env variables to Scheduler by @sphuber in #5283
  • CLI:

    • ✨ NEW: Allow for CLI usage via python -m aiida by @chrisjsewell in #5356
    • ⬆️ UPDATE: click==8.0 and remove click-completion by @sphuber in #5111
    • ♻️ REFACTOR: Replace verdi database commands with verdi storage by @ramirezfranciscof in #5228
    • ✨ NEW: Add verbosity control by @sphuber in #5085
    • ♻️ REFACTOR: Logging verbosity implementation by @sphuber in #5119
    • ✨ NEW: Add verdi group move-nodes command by @mbercx in #4428
    • 👌 IMPROVE: verdi code setup: validate the uniqueness of label for local codes by @sphuber in #5215
    • 👌 IMPROVE: GroupParamType: store group if created by @sphuber in #5411
    • 👌 IMPROVE: Show #procs/machine in verdi computer show by @dev-zero in #4945
    • 👌 IMPROVE: Notify users of runner usage in verdi process list by @ltalirz in #4663
    • 👌 IMPROVE: Set localhost as default for database hostname in verdi setup by @sphuber in #4908
    • 👌 IMPROVE: Make verdi group messages consistent by @CasperWA in #4999
    • 🐛 FIX: verdi calcjob cleanworkdir command by @zhubonan in #5209
    • 🔧 MAINTAIN: Add verdi devel run-sql by @chrisjsewell in #5094
  • REST API:

    • ⬆️ UPDATE: Update to flask~=2.0 for rest extra by @sphuber in #5366
    • 👌 IMPROVE: Error message when flask not installed by @ltalirz in #5398
    • 👌 IMPROVE: Allow serving of contents of ArrayData by @JPchico in #5425
    • 🐛 FIX: REST API date-time query by @NinadBhat in #4959
  • Developers:

    • 🔧 MAINTAIN: Move to flit for PEP 621 compliant package build by @chrisjsewell in #5312
    • 🔧 MAINTAIN: Make __all__ imports explicit by @chrisjsewell in #5061
    • 🔧 MAINTAIN: Add pre-commit.ci by @chrisjsewell in #5062
    • 🔧 MAINTAIN: Add isort pre-commit hook by @chrisjsewell in #5151
    • ⬆️ UPDATE: Drop support for Python 3.7 by @sphuber in #5307
    • ⬆️ UPDATE: Support Python 3.10 by @csadorf in #5188
    • ♻️ REFACTOR: Remove reentry requirement by @chrisjsewell in #5058
    • ♻️ REFACTOR: Remove simplejson by @sphuber in #5391
    • ♻️ REFACTOR: Remove ete3 dependency by @ltalirz in #4956
    • 👌 IMPROVE: Replace deprecated imp with importlib by @DirectriX01 in #4848
    • ⬆️ UPDATE: sphinx~=4.1 (+ sphinx extensions) by @chrisjsewell in #5420
    • 🧪 CI: Move time consuming tests to separate nightly workflow by @sphuber in #5354
    • 🧪 TESTS: Entirely remove AiidaTestCase by @chrisjsewell in #5372

Contributors 🎉

Thanks to all contributors: Contributor Graphs

Including first-time contributors:

  • @DirectriX01 made their first contribution in [#4848]
  • @mjclarke94 made their first contribution in [#4994]
  • @janssenhenning made their first contribution in [#5064]

v1.6.7 - 2022-03-07

full changelog

The markupsafe dependency specification was moved to install_requires

v1.6.6 - 2022-03-07

full changelog

Bug fixes 🐛

  • DirectScheduler: remove the -e option for bash invocation [#5264]
  • Replace deprecated matplotlib config option 'text.latex.preview' [#5233]

Dependencies

  • Add upper limit markupsafe<2.1 to fix the documentation build [#5371]
  • Add upper limit pytest-asyncio<0.17 [#5309]

Devops 🔧

  • CI: move Jenkins workflow to nightly GHA workflow [#5277]
  • Docs: replace CircleCI build with ReadTheDocs [#5279]
  • CI: run certain workflows only on main repo, not on forks [#5091]
  • Revise Docker image build [#4997]

v1.6.5 - 2021-08-13

full changelog

This patch release contains a number of helpful bug fixes and improvements.

Improvements 👌

  • Add support for the ProxyJump SSH config option for setting up an arbitrary number of proxy jumps without additional processes by creating TCP channels over existing SSH connections. This provides improved control over the lifetime of the different connections. See SSH configuration for further details. [#4951]
  • Allow numpy arrays to be serialized to a process checkpoint. [#4730]
  • Add the _merge method to ProcessBuilder, to update the builder with a nested dictionary. [#4983]
  • verdi setup: Set the default database hostname as localhost. [#4908]
  • Allow Node.__init__ to be constructed with a specific User node. [#4977]
  • Minimize database logs of failed schema version retrievals. [#5056]
  • Remove duplicate call of normal callback for InteractiveOption. [#5064]
  • Update requirement pyyaml~=5.4, which contains critical security fixes. [#5060]

Bug Fixes 🐛

  • Fix regression issue with the __contains__ operator in LinkManager, when using double underscores, e.g. for 'some__nested__namespace' in calc.inputs. [#5067]
  • Stop deprecation warning being shown when tab-completing incoming and outgoing node links. [#5011]
  • Stop possible command hints being shown when attempting to tab complete verdi commands that do not exist. [#5012]
  • Do not use get_detailed_job_info when retrieving a calculation job, if no job id is set. [#4967]
  • Fix a race condition when two processes try to create the same Folder/SandboxFolder. [#4912]
  • Return the whole nested namespace when using BaseRestartWorkChain.result. [#4961]
  • Use numpy.nanmin and numpy.nanmax for computing y-limits of BandsData matplotlib methods. [#5024]
  • Use sanitized job title with SgeScheduler scheduler. [#4994]

v1.6.4 - 2021-06-23

full changelog

This is a patch release to pin psycopg2-binary to version 2.8.x, to avoid an issue with database creation in version 2.9 (#4989).

v1.6.3 - 2021-04-28

full changelog | GitHub contributors page for this release

This is a patch release to fix a bug that was introduced in v1.6.2 that would cause a number of verdi commands to fail due to a bug in the with_dbenv decorator utility.

Bug fixes

  • Fix aiida.cmdline.utils.decorators.load_backend_if_not_loaded [#4878]

v1.6.2 - 2021-04-28

full changelog | GitHub contributors page for this release

Bug fixes

  • CLI: Use the proper proxy command for verdi calcjob gotocomputer if configured as such [#4761]
  • Respect nested output namespaces in Process.exposed_outputs [#4863]
  • NodeLinkManager now properly regenerates original nested namespaces from the flat link labels stored in the database. This means one can now do node.outputs.some.nested.output instead of having to do node.outputs.some__nested__output. The same goes for node.inputs [#4625]
  • Fix aiida.cmdline.utils.decorators.with_dbenv always loading the database. Now it will only load the database if not already loaded, as intended [#4865]

Features

  • Add the account option to the LsfScheduler scheduler plugin [#4832]

Documentation

  • Update ssh proxycommand section with instructions on how to handle cases where the SSH key needs to be specified for the proxy server [#4839]
  • Add the "How to extend workflows" section, explaining the use of the expose_inputs and expose_outputs features, as well as nested namespaces [#4562]
  • Add help in intro for when quicksetup fails due to problems autodetecting the PostgreSQL settings [#4838]

v1.6.1 - 2021-03-31

full changelog | GitHub contributors page for this release

This patch release is primarily intended to fix a regression in the aiida_profile test fixture, used by plugin developers, causing config validation errors (#4831).

Other additions:

  • ✨ NEW: Added structure.data.import entry-point, allowing for plugins to define file-format specific sub-commands of verdi data structure import (#4427).
  • ✨ NEW: Added --label and --group options to verdi data structure import, which apply a label/group to all structures being imported (#4429).
  • ⬆️ UPDATE: pgsu dependency increased to v0.2.x. This fixes a bug in verdi quicksetup, when used on the Windows Subsystem for Linux (WSL) platform (#4834).
  • 🐛 FIX: metadata.options.max_memory_kb is now ignored when using the direct scheduler (#4825). This was previously imposing a virtual memory limit with ulimit -v, which is very different to the physical memory limit that other scheduler plugins impose. No straightforward way exists to directly limit the physical memory usage for this scheduler.
  • 🐛 FIX: Added __str__ method to the Orbital class, fixing a recursion error (#4829).

v1.6.0 - 2021-03-15

full changelog | GitHub contributors page for this release

As well as introducing a number of improvements and new features listed below, this release marks the "under-the-hood" migration from the tornado package to the Python built-in module asyncio, for handling asynchronous processing within the AiiDA engine. This removes a number of blocking dependency version clashes with other tools, in particular with the newest Jupyter shell and notebook environments. The migration does not present any backward incompatible changes to AiiDA's public API. A substantial effort has been made to test and debug the new implementation, and to ensure it performs at least as well as the previous code (or better!), but please let us know if you uncover any additional issues.

This release also drops support for Python 3.6 (testing is carried out against 3.7, 3.8 and 3.9).

NOTE: v1.6 is tentatively intended to be the final minor v1.x release before v2.x, that will include a new file repository implementation and remove all deprecated code.

New calculation features ✨

The additional_retrieve_list metadata option has been added to CalcJob (#4437). This new option allows one to specify additional files to be retrieved on a per-instance basis, in addition to the files that are already defined by the plugin to be retrieved.

A new namespace stash has been added to the metadata.options input namespace of the CalcJob process (#4424). This option namespace allows a user to specify certain files that are created by the calculation job to be stashed somewhere on the remote. This can be useful for files that need to be stored for a longer time than the scratch space (where the job was run) is available, but that should be kept on the remote machine rather than being retrieved. Examples are files that are necessary to restart a calculation but are too big to be retrieved and stored permanently in the local file repository.

See Stashing files on the remote for more details.
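
A hedged sketch of how these options might be set on a calculation's inputs (builder stands for a process builder of some CalcJob, and the file names and target path are placeholders; consult the linked how-to for the authoritative option format):

```python
from aiida.common.datastructures import StashMode

# `builder` is assumed to be a process builder of some CalcJob, e.g. code.get_builder()
builder.metadata.options.additional_retrieve_list = ['extra_output.txt']
builder.metadata.options.stash = {
    'source_list': ['restart_file.bin'],
    'target_base': '/remote/storage/stash_folder',
    'stash_mode': StashMode.COPY.value,
}
```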

The new TransferCalcjob plugin (#4194) allows the user to copy files between a remote machine and the local machine running AiiDA. More specifically, it can do any of the following:

  • Take any number of files from any number of RemoteData folders in a remote machine and copy them in the local repository of a single newly created FolderData node.
  • Take any number of files from any number of FolderData nodes in the local machine and copy them in a single newly created RemoteData folder in a given remote machine.

See the Transferring data how-to for more details.

Profile configuration improvements 👌

The way the global/profile configuration is accessed has undergone a number of distinct changes (#4712):

  • When loaded, the config.json (found in the .aiida folder) is now validated against a JSON Schema that can be found in aiida/manage/configuration/schema.
  • The schema includes a number of new global/profile options, including: transport.task_retry_initial_interval, transport.task_maximum_attempts, rmq.task_timeout and logging.aiopika_loglevel (#4583).
  • The cache_config.yml has now also been deprecated and merged into the config.json, as part of the profile options. This merge will be handled automatically, upon first load of the config.json using the new AiiDA version.

In line with these changes, the verdi config command has been refactored into separate commands, including verdi config list, verdi config set, verdi config unset and verdi config caching.

See the Configuring profile options and Configuring caching how-tos for more details.

Command-line additions and improvements 👌

In addition to verdi config, numerous other new commands and options have been added to verdi:

  • Deprecated verdi export and verdi import commands (replaced by new verdi archive) (#4710)
  • Added verdi group delete --delete-nodes, to also delete the nodes in a group during its removal (#4578).
  • Improved verdi group remove-nodes command to warn when requested nodes are not in the specified group (#4728).
  • Added exception to the projection mapping of verdi process list, for example to use in debugging as: verdi process list -S excepted -P ctime pk exception (#4786).
  • Added verdi database summary (#4737): This prints a summary of the count of each entity and (optionally) the list of unique identifiers for some entities.
  • Improved verdi process play performance, by only querying for active processes with the --all flag (#4671)
  • Added the verdi database version command (#4613): This shows the schema generation and version of the database of the given profile, useful mostly for developers when debugging.
  • Improved verdi node delete performance (#4575): The logic has been re-written to greatly reduce the time to delete large amounts of nodes.
  • Fixed verdi quicksetup --non-interactive, to ensure it does not include any user prompts (#4573)
  • Fixed verdi --version when used in editable mode (#4576)

API additions and improvements 👌

The base Node class now evaluates equality based on the node's UUID (#4753). For example, loading the same node twice will always resolve as equivalent: load_node(1) == load_node(1). Note that existing, class specific, equality relationships will still override the base class behaviour, for example: Int(99) == Int(99), even if the nodes have different UUIDs. This behaviour for subclasses is still under discussion at: #1917

When hashing nodes for use with the caching features, -0. is now converted to 0., to reduce issues with differing hashes before/after node storage (#4648). Known failure modes for hashing are now also raised with the HashingError exception (#4778).

Both aiida.tools.delete_nodes (#4578) and aiida.orm.to_aiida_type (#4672) have been exposed for use in the public API.

A pathlib.Path instance can now be used for the file argument of SinglefileData (#3614)
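
For example (the path below is a placeholder for an existing file):

```python
import pathlib

from aiida.orm import SinglefileData

node = SinglefileData(pathlib.Path('/path/to/existing/file.txt'))
node.store()
```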

Type annotations have been added to all inputs/outputs of functions and methods in aiida.engine (#4669) and aiida/orm/nodes/processes (#4772). As outlined in PEP 484, this improves static code analysis and, for example, allows for better auto-completion and type checking in many code editors.

New REST API Query endpoint ✨

The /querybuilder endpoint is the first POST method available for AiiDA's RESTful API (#4337)

The POST endpoint returns what the QueryBuilder would return, when providing it with a proper queryhelp dictionary (see the documentation here). Furthermore, it returns the entities/results in the "standard" REST API format - with the exception of link_type and link_label keys for links (these particular keys are still present as type and label, respectively).
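
A hedged sketch of posting a queryhelp dictionary, built with the QueryBuilder, to a locally running verdi restapi instance (the URL and port assume the default settings):

```python
import requests

from aiida.orm import QueryBuilder, Node

queryhelp = QueryBuilder().append(Node, project=['id']).queryhelp
response = requests.post('http://127.0.0.1:5000/api/v4/querybuilder', json=queryhelp)
print(response.json())
```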

For security, POST methods can be toggled on/off with the verdi restapi --posting/--no-posting options (it is on by default). Although note that this option is not yet strictly public, since its naming may be changed in the future!

See AiiDA REST API documentation for more details.

Additional Changes

  • Fixed the direct scheduler which, in combination with SshTransport, was hanging on the submit command (#4735). In the SSH transport, to emulate 'chdir', the current directory is now kept in memory, and every command is prefixed with cd FOLDER_NAME && ACTUALCOMMAND.

  • In aiida.tools.ipython.ipython_magics, load_ipython_extension has been deprecated in favour of register_ipython_extension (#4548).

  • Refactored the .ci/ folder to make tests more portable and easier to understand (#4565). The .ci/ folder had become cluttered, containing configuration and scripts for both the GitHub Actions and Jenkins CI. This change moved the GitHub Actions specific scripts to .github/system_tests, and refactored the Jenkins setup/tests to use molecule in the .molecule/ folder.

  • For aiida-core development, the pytest requires_rmq marker and config_with_profile fixture have been added (#4739 and #4764)

v1.5.2 - 2020-12-07

Note: release v1.5.1 was skipped due to a problem with the uploaded files to PyPI.

Bug fixes

  • Dict: accessing an inexistent key now raises a KeyError (instead of AttributeError) [#4577]
  • Config: make writing to disk as atomic as possible [#4607]
  • Config: do not overwrite when loaded and not migrated [#4605]
  • SqlAlchemy: fix bug in Group extras migration with revision 0edcdd5a30f0 [#4602]

Developers

  • SqlAlchemy: improve the alembic migration code [#4602] [#4607]
  • CI: manually install numpy to prevent incompatible releases [#4615]

v1.5.0 - 2020-11-13

In this minor version release, support for Python 3.9 is added [#4301], while support for Python 3.5 is dropped [#4386]. This version is compatible with all current Python versions that are not end-of-life:

  • 3.6
  • 3.7
  • 3.8
  • 3.9

Features

  • Process functions (calcfunction and workfunction) can now be submitted to the daemon just like CalcJobs and WorkChains [#4539]
  • REST API: list endpoints at base URL [#4412]
  • REST API: new full_types_count endpoint that counts the number of nodes for each type of node [#4277]
  • ProcessBuilder: allow unsetting of inputs through attribute deletion [#4419]
  • verdi migrate: make --in-place work across different file systems [#4393]

Improvements

  • Added the remaining original documentation that did not make it into the first step of the major documentation overhaul in v1.3.0
  • verdi process show: order by ctime and print process label [#4407]
  • LinkManager: fix inaccuracy in exception message for non-existent link [#4388]
  • Add reset method to ProgressReporterAbstract [#4522]
  • Improve the deprecation warning for Node.open outside context manager [#4434]

Bug fixes

  • SlurmScheduler: fix bug in validation of job resources [#4555]
  • Fix ZeroDivisionError in worker slots check [#4513]
  • CalcJob: only attempt to clean up the retrieve temporary folder after parsing if it is present [#4379]
  • Add missing entry point groups to the mapping [#4395]
  • REST API: the process_type can now identify pathological empty-stringed or null entries in the database [#4277]

Developers

  • verdi group delete: deprecate and ignore the --clear option [#4357]
  • Replace old format string interpolation with f-strings [#4400]
  • CI: move pylint configuration to pyproject.toml [#4411]
  • CI: use -e install for tox + add docker-compose for isolated RabbitMQ [#4375]
  • CI: add coverage patch threshold to prevent false positives [#4413]
  • CI: Allow for mypy type checking of third-party imports [#4553]

Dependencies

  • Update requirement pytest~=6.0 and use pyproject.toml [#4410]

Archive (import/export) refactor

  • The refactoring goal was to pave the way for the implementation of a new archive format in v2.0.0 (aiidateam AEP 005)
  • Three abstract + concrete interface classes are defined (writer, reader, migrator), which are independent of the internal structure of the archive. These classes are used within the export/import code.
  • The code in aiida/tools/importexport has been largely re-written, in particular adding aiida/tools/importexport/archive, which contains the code for interfacing with an archive and does not require connection to an AiiDA profile.
  • The export logic has been re-written to minimise the required queries (faster) and to allow for "streaming" data into the writer (minimising the RAM requirement with the new format). It is intended that a similar PR will be made for the import code.
  • A general progress bar implementation is now available in aiida/common/progress_reporter.py. All corresponding CLI commands now also have a --verbosity option.
  • Merged PRs:
    • Refactor export archive (#4448 & #4534)
    • Refactor import archive (#4510)
    • Refactor migrate archive (#4532)
    • Add group extras to archive (#4521)
    • Refactor cmdline progress bar (#4504 & #4522)
  • Updated archive version from 0.9 -> 0.10 (#4561)
  • Deprecations: the export_zip, export_tar, export_tree, extract_zip, extract_tar and extract_tree functions, and the silent keyword in the export function
  • Removed: ZipFolder class

v1.4.4

This patch is a backport for 2 of the fixes in v1.5.2.

Bug fixes

  • Dict: accessing an inexistent key now raises a KeyError (instead of an AttributeError) [#4616]

Developers

  • CI: manually install numpy to prevent incompatible releases [#4615]

v1.4.3

Bug fixes

  • RabbitMQ: update topika requirement to fix SSL connections and remove validation of broker_parameters from profile [#4542]
  • Fix UnboundLocalError in aiida.cmdline.utils.edit_multiline_template, which affected verdi code/computer setup [#4436]

v1.4.2

Critical bug fixes

  • CalcJob: make sure local_copy_list files do not end up in the node's repository folder [#4415]

v1.4.1

Improvements

  • verdi setup: forward broker defaults to interactive mode [#4405]

Bug fixes

  • verdi setup: improve validation and help string of broker virtual host [#4408]
  • Implement next and iter for the Node.open deprecation wrapper [#4399]
  • Dependencies: increase minimum version requirement plumpy~=0.15.1 to suppress noisy warning at end of interpreter that ran processes [#4398]

v1.4.0

Improvements

  • Add defaults for configure options of the SshTransport plugin [#4223]
  • verdi status: distinguish database schema version incompatible [#4319]
  • SlurmScheduler: implement parse_output to detect OOM and OOW [#3931]

Features

  • Make the RabbitMQ connection parameters configurable [#4341]
  • Add infrastructure to parse scheduler output for CalcJobs [#3906]
  • Add support for "peer" authentication with PostgreSQL [#4255]
  • Add the --paused flag to verdi process list [#4213]
  • Make the loglevel of the daemonizer configurable [#4276]
  • Transport: add option to not use a login shell for all commands [#4271]
  • Implement skip_orm option for SqlAlchemy Group.remove_nodes [#4214]
  • Dict: allow setting attributes through setitem and AttributeManager [#4351]
  • CalcJob: allow nested target paths for local_copy_list [#4373]
  • verdi export migrate: add --in-place flag to migrate archive in place [#4220]

Bug fixes

  • verdi: make --prepend-text and --append-text options properly interactive [#4318]
  • verdi computer test: fix failing result in harmless stderr responses [#4316]
  • QueryBuilder: Accept empty string for entity_type in append method [#4299]
  • verdi status: do not except when no profile is configured [#4253]
  • ArithmeticAddParser: attach output before checking for negative value [#4267]
  • CalcJob: fix bug in retrieve_list affecting entries without wildcards [#4275]
  • TemplateReplacerCalculation: make files namespace dynamic [#4348]

Developers

  • Rename folder test.fixtures to test.static [#4219]
  • Remove all files from the pre-commit exclude list [#4196]
  • ORM: move attributes/extras methods of frontend and backend nodes to mixins [#4376]

Dependencies

  • Dependencies: update minimum requirement paramiko~=2.7 [#4222]
  • Dependencies: remove upper limit and allow numpy~=1.17 [#4378]

Deprecations

  • Deprecate getter and setter methods of Computer properties [#4252]
  • Deprecate methods that refer to a computer's label as name [#4309]

Changes

  • BaseRestartWorkChain: do not run process_handler when exit_codes=[] [#4380]
  • SlurmScheduler: always raise for non-zero exit code [#4332]
  • Remove superfluous ERROR_NO_RETRIEVED_FOLDER from CalcJob subclasses [#3906]

v1.3.1

Bug fixes:

  • Fix a file handle leak due to the Runner not closing the event loop if it created it itself [#4307]
  • ArithmeticAddParser: attach output before checking for negative value [#4267]

v1.3.0

Improvements

  • Comprehensive restructuring and revamp of the online documentation [#4141]
  • Improve defaults for verdi computer configure ssh [#4055]
  • Provenance graphs: enable highlighting specific node classes (and highlight root node by default) [#4081]

Performance

  • Enable event-based monitoring of work chain child processes (they were being polled every second) [#4154]
  • Increase the default for runner.poll.interval config option from 1 to 60 seconds [#4150]
  • Increase the efficiency of the SqlaGroup.nodes iterator [#4094]

Features

  • Add a progress bar for export and import related functionality [#3599]
  • Enable loading config.yml files from URL in verdi commands with --config option [#3977]
  • QueryBuilder: add the flat argument to the .all() method [#3945]
  • verdi status: add --no-rmq flag to skip the RabbitMQ check [#4181]
  • Add support for process functions in verdi plugin list [#4117]
  • Allow profile selection in ipython magic %aiida [#4071]
  • Support more complex formula formats in aiida.orm.data.cif.parse_formula [#3954]

Bug fixes

  • BaseRestartWorkChain: do not assume metadata exists in inputs in run_process [#4210]
  • BaseRestartWorkChain: fix bug in inspect_process [#4166]
  • BaseRestartWorkChain: fix the "unhandled failure mechanism" for dealing with failures of subprocesses [#4155]
  • Fix exception handling in commands calling list_repository_contents [#3968]
  • Fix bug in Code.get_full_text_info [#4083]
  • Fix bug in verdi daemon restart --reset [#3969]
  • Fix tab-completion for LinkManager and AttributeManager [#3985]
  • CalcJobResultManager: fix bug that broke tab completion [#4187]
  • SshTransport.gettree: allow non-existing nested target directories [#4175]
  • CalcJob: move job resource validation to the Scheduler class fixing a problem for the SGE and LSF scheduler plugins [#4192]
  • WorkChain: guarantee to maintain order of appended awaitables [#4156]
  • Add support for binary files to the various verdi cat commands [#4077]
  • Ensure verdi group show --limit respects limit even in raw mode [#4092]
  • QueryBuilder: fix type string filter generation for Group subclasses [#4144]
  • Raise when calling Node.objects.delete for node with incoming links [#4168]
  • Properly handle multiple requests to threaded REST API [#3974]
  • NodeTranslator: do not assume get_export_formats exists [#4188]
  • Only color directories in verdi node repo ls --color [#4195]

Developers

  • Add arithmetic workflows and restructure calculation plugins [#4124]
  • Add minimal mypy run to the pre-commit hooks. [#4176]
  • Fix timeout in tests.cmdline.commands.test_process:test_pause_play_kill [#4052]
  • Revise update-dependency flow to resolve issue #3930 [#3957]
  • Add GitHub action for transifex upload [#3958]

Deprecations

  • The get_valid_schedulers class method of the Scheduler class has been deprecated in favor of aiida.plugins.entry_point.get_entry_point_names [#4192]

v1.2.1

In the fixing of three bugs, three minor features have been added along the way.

Features

  • Add config option daemon.worker_process_slots to configure the maximum number of concurrent tasks each daemon worker can handle [#3949]
  • Add config option daemon.default_workers to set the default number of workers to be started by verdi daemon start [#3949]
  • CalcJob: make submit script filename configurable through the metadata.options [#3948]

Bug fixes

  • CalcJob: fix bug in idempotency check of upload transport task [#3948]
  • REST API: reintroduce CORS headers, the lack of which was breaking the Materials Cloud provenance explorer [#3951]
  • Remove the equality operator of ExitCode which caused the serialization of workchains to fail if put in the workchain context [#3940]

Deprecations

  • The hookup argument of aiida.restapi.run_api and the --hookup option of verdi restapi are deprecated [#3951]

v1.2.0

Features

  • ExitCode: make the exit message parameterizable through templates [#3824]
  • GroupPath: a utility to work with virtual Group hierarchies [#3613]
  • Make Group subclassable through entry points [#3882][#3903][#3926]
  • Add auto-complete support for CodeParamType and GroupParamType [#3926]
  • Add export archive migration for Group type strings [#3912]
  • Add the -v/--version option to verdi export migrate [#3910]
  • Add the -l/--limit option to verdi group show [#3857]
  • Add the --order-by/--order-direction options to verdi group list [#3858]
  • Add prepend_text and append_text to aiida_local_code_factory pytest fixture [#3831]
  • REST API: make it easier to call run_api in wsgi scripts [#3875]
  • Plot bands with only one kpoint [#3798]

Bug fixes

  • Improved validation for CLI parameters [#3894]
  • Ensure unicity when creating instances of Autogroup [#3650]
  • Prevent nodes without registered entry points from being stored [#3886]
  • Fix the RotatingFileHandler configuration of the daemon logger [#3891]
  • Ensure log messages are not duplicated in daemon log file [#3890]
  • Convert argument to str in aiida.common.escaping.escape_for_bash [#3873]
  • Remove the return statement of RemoteData.getfile() [#3742]
  • Support for BandsData nodes without StructureData ancestors [#3817]

Deprecations

  • Deprecate --group-type option in favor of --type-string for verdi group list [#3926]

Documentation

  • Docs: link to documentation of other libraries via intersphinx mapping [#3876]
  • Docs: remove extra advanced_plotting from install instructions [#3860]
  • Docs: consistent use of "plugin" vs "plugin package" terminology [#3799]

Developers

  • Deduplicate code for tests of archive migration code [#3924]
  • CI: use GitHub Actions services for PostgreSQL and RabbitMQ [#3901]
  • Move aiida.manage.external.pgsu to external package pgsu [#3892]
  • Cleanup the top-level directory of the repository [#3738]
  • Remove unused orm.implementation.utils module [#3877]
  • Revise dependency management workflow [#3771]
  • Re-add support for Coverage reports through codecov.io [#3618]

v1.1.1

Changes

  • Emit a warning when input port specifies a node instance as default [#3466]
  • BaseRestartWorkChain: require process handlers to be instance methods [#3782]
  • BaseRestartWorkChain: add method to enable/disable process handlers [#3786]
  • Docker container: remove conda activation from configure-aiida.sh script [#3791]
  • Add fixtures to clear the database before or after tests [#3783]
  • verdi status: add the configuration directory path to the output [#3587]
  • QueryBuilder: add support for datetime.date objects in filters [#3796]

Bug fixes

  • Fix bugs in Node._store_from_cache and Node.repository.erase that could result in calculations not being reused [#3777]
  • Caching: fix configuration spec and validation [#3785]
  • Write migrated config to disk in Config.from_file [#3797]
  • Validate label string at code setup stage [#3793]
  • Reuse prepend_text and append_text in verdi computer/code duplicate [#3788]
  • Fix broken imports of urllib in various locations including verdi import [#3767]
  • Match headers with actual output for verdi data structure list [#3756]
  • Disable caching for the Data node subclass (this should not affect usual caching behavior) [#3807]

v1.1.0

Nota Bene: although this is a minor version release, the support for python 2 is dropped (#3566) following the reasoning outlined in the corresponding AEP001. Critical bug fixes for python 2 will be supported until July 1 2020 on the v1.0.* release series. With the addition of python 3.8 (#3719), this version is now compatible with all current python versions that are not end-of-life:

  • 3.5
  • 3.6
  • 3.7
  • 3.8

Features

  • Add the AiiDA Graph Explorer (AGE), a generic tool for traversing the provenance graph [#3686]
  • Add the BaseRestartWorkChain which makes it easier to write a simple work chain wrapper around another process with automated error handling [#3748]
  • Add provenance_exclude_list attribute to CalcInfo data structure, allowing to prevent calculation input files from being permanently stored in the repository [#3720]
  • Add the verdi node repo dump command [#3623]
  • Add more methods to control cache invalidation of completed process node [#3637]
  • Allow documentation to be build without installing and configuring AiiDA [#3669]
  • Add option to expand namespaces in sphinx directive [#3631]

Performance

  • Add node_type to list of immutable model fields, preventing repeated database hits [#3619]
  • Add cache for entry points in an entry point group [#3622]
  • Improve the performance when exporting many groups [#3681]

Changes

  • CalcJob: move presubmit call from CalcJob.run to Waiting.execute [#3666]
  • CalcJob: do not pause when exception thrown in the presubmit [#3699]
  • Move CalcJob spec validator to corresponding namespaces [#3702]
  • Move getting completed job accounting to retrieve transport task [#3639]
  • Move last_job_info from JSON-serialized string to dictionary [#3651]
  • Improve SqlAlchemy session handling for QueryBuilder [#3708]
  • Use built-in open instead of io.open, which is possible now that python 2 is no longer supported [#3615]
  • Add non-zero exit code for verdi daemon status [#3729]

Bug fixes

  • Deal with unreachable daemon worker in get_daemon_status [#3683]
  • Django backend: limit batch size for bulk_create operations [#3713]
  • Make sure that datetime conversions ignore None [#3628]
  • Allow empty key_filename in verdi computer configure ssh and reuse cooldown time when reconfiguring [#3636]
  • Update pyyaml to v5.1.2 to prevent arbitrary code execution [#3675]
  • QueryBuilder: fix validation bug and improve message for in operator [#3682]
  • Consider 'AIIDA_TEST_PROFILE' in 'get_test_backend_name' [#3685]
  • Ensure correct types for QueryBuilder().dict() with multiple projections [#3695]
  • Make local modules importable when running verdi run [#3700]
  • Fix bug in upload_calculation for CalcJobs with local codes [#3707]
  • Add imports from urllib to dbimporters [#3704]

Developers

  • Moved continuous integration from Travis to Github actions [#3571]
  • Replace custom unit test framework with pytest and move all tests to the tests top level directory [#3653][#3674][#3715]
  • Cleaned up direct dependencies and relaxed requirements where possible [#3597]
  • Set job poll interval to zero in localhost pytest fixture [#3605]
  • Make command line deprecation warnings visible with test profile [#3665]
  • Add docker image with minimal running AiiDA instance [#3722]

v1.0.1

Improvements

  • Improve the backup mechanism of the configuration file: unique backup written at each update [#3581]
  • Forward verdi code delete to verdi node delete [#3546]
  • Homogenize and improve output of verdi computer test [#3544]
  • Scheduler SLURM: support UNLIMITED and NOT_SET as values for requested walltimes [#3586]
  • Set default for the safe_interval option of verdi computer configure [#3590]
  • Create backup of configuration file before migrating [#3568]
  • Add python_requires to setup.json necessary for future dropping of python 2 [#3574]
  • Remove unused QB methods/functions [#3526]
  • Move pgtest argument of TemporaryProfileManager to constructor [#3486]
  • Add filename argument to SinglefileData constructor [#3517]
  • Mention machine in SSH connection exception message [#3536]
  • Docs: Expand on QB order_by information [#3548]
  • Replace deprecated pymatgen site.species_and_occu with site.species [#3480]
  • QueryBuilder: add deepcopy implementation and queryhelp property [#3524]

Bug fixes

  • Fix verdi calcjob gotocomputer when key_filename is missing [#3593]
  • Fix bug in database migrations where schema generation determination excepts for old databases [#3582]
  • Fix false positive for verdi database integrity detect-invalid-links [#3591]
  • Config migration: handle edge case where daemon key is missing from daemon_profiles [#3585]
  • Raise when unable to detect name of local timezone [#3576]
  • Fix bug for CalcJob dry runs with store_provenance=False [#3513]
  • Migrations for legacy and now illegal default link label _return, export version upped to 0.8 [#3561]
  • Fix REST API attributes_filter and extras_filter [#3556]
  • Fix bug in plugin Factory classes for python 3.7 [#3552]
  • Make PolishWorkChains checkpointable [#3532]
  • REST API: fix generator of full node namespace [#3516]

v1.0.0

Overview of changes

The following is a summary of the major changes and improvements from v0.12.* to v1.0.0.

  • Faster workflow engine: the new message-based engine powered by RabbitMQ supports tens of thousands of processes per hour and greatly speeds up workflow testing. You can now run one daemon per AiiDA profile.
  • Faster database queries: the switch to JSONB for node attributes and extras greatly improves query speed and reduces storage size by orders of magnitude.
  • Robust calculations: AiiDA now deals with network connection issues (automatic retries with backoff mechanism, connection pooling, ...) out of the box. Workflows and calculations are all Processes and can be "paused" and "played" anytime.
  • Better verdi commands: the move to the click framework brings homogenous command line options across all commands (loading nodes, ...). You can easily add new commands through plugins.
  • Easier workflow development: Input and output namespaces, reusing specs of sub-processes and less boilerplate code simplify writing WorkChains and CalcJobs, while also enabling powerful auto-documentation features.
  • Mature provenance model: Clear separation between data provenance (Calculations, Data) and logical provenance (Workflows). Old databases can be migrated to the new model automatically.
  • python3 compatible: AiiDA 1.0 is compatible with both python 2.7 and python 3.6 (and later). Python 2 support will be dropped in the coming months.

Detailed list of changes

Below a (non-exhaustive) list of changes by category. Changes between 1.0 alpha/beta releases are not included - for those see the changelog of the corresponding releases.

Engine and daemon

  • Implement the concept of an "exit status" for all calculations, allowing a programmatic definition of success or failure for all processes [#1189]
  • All calculations now go through the Process layer, homogenizing the state of work and job calculations [#1125]
  • Allow None as default for arguments of process functions [#2582]
  • Implement the new calcfunction decorator. [#2203]
  • Each profile now has its own daemon that can be run completely independently in parallel [#1217]
  • Polling based daemon has been replaced with a much faster event-based daemon [#1067]
  • Replaced Celery with Circus as the daemonizer of the daemon [#1213]
  • The daemon can now be stopped without loading the database, making it possible to stop it even if the database version does not match the code [#1231]
  • Implement exponential backoff retry mechanism for transport tasks [#1837]
  • Pause CalcJob when transport task falls through exponential backoff [#1903]
  • Separate CalcJob submit task in folder upload and scheduler submit [#1946]
  • Each daemon worker now respects an optional minimum scheduler polling interval [#1929]
  • Make the execmanager.retrieve_calculation idempotent'ish [#3142]
  • Make the execmanager.upload_calculation idempotent'ish [#3146]
  • Make the execmanager.submit_calculation idempotent'ish [#3188]
  • Implement a PluginVersionProvider for processes to automatically add versions of aiida-core and plugin to process nodes [#3131]

Processes

  • Implement the ProcessBuilder which simplifies the definition of Process inputs and the launching of a Process [#1116]
  • Namespaces added to the port containers of the ProcessSpec class [#1099]
  • Convention of leading underscores for non-storable inputs is replaced with a proper non_db attribute of the Port class [#1105]
  • Implement a Sphinx extension for the WorkChain class to automatically generate documentation from the workchain definition [#1155]
  • WorkChains can now expose the inputs and outputs of another WorkChain, which is great for writing modular workflows [#1170]
  • Add built-in support and API for exit codes in WorkChains [#1640], [#1704], [#1681]
  • Implement method for CalcJobNode to create a restart builder [#1962]
  • Add CalculationTools base and entry point aiida.tools.calculations [#2331]
  • Generalize Sphinx workchain extension to processes [#3314]
  • Collapsible namespace in sphinxext [#3441]
  • The retrieve_singlefile_list has been deprecated and is replaced by retrieve_temporary_list [#3041]
  • Automatically set CalcInfo.uuid in CalcJob.run [#2874]
  • Allow the usage of lambda functions for InputPort default values [#3465]

ORM

  • Implement the AuthInfo class, which allows custom configuration per configured computer [#1184]
  • Add efficient count method for aiida.orm.groups.Group [#2567]
  • Speed up creation of Nodes in the AiiDA ORM [#2214]
  • Enable use of tuple in QueryBuilder.append for all ORM classes [#1608], [#1607]
  • Refactor the ORM to have explicit front-end and back-end layers [#2190][#2210][#2225][#2227][#2481]
  • Add support for indexing and slicing in orm.Group.nodes iterator [#2371]
  • Add support for process classes to QueryBuilder.append [#2421]
  • Change type of uuids returned by the QueryBuilder to unicode [#2259]
  • The AttributeDict is now constructed recursively for nested dictionaries [#3005]
  • Ensure immutability of CalcJobNode hash before and after storing [#3130]
  • Fix bug in the RemoteData._clean method [#1847]
  • Fix bug in QueryBuilder.first() for multiple projections [#2824]
  • Fix bug in delete_nodes when passing pks of non-existing nodes [#2440]
  • Remove unserializable data from metadata in Log records [#2469]

Data

  • Fix bug in parse_formula for formulas with leading or trailing whitespace [#2186]
  • Refactor Orbital code and fix some bugs [#2737]
  • Fix bug in the store method of CifData which would raise an exception when called more than once [#1136]
  • Allow passing directory path in FolderData constructor [#3359]
  • Add element X to the elements list in order to support unknown species [#1613]
  • Various bug and consistency fixes for CifData and StructureData [#2374]
  • Changes to Data class attributes and TrajectoryData data storage [#2310][#2422]
  • Rename ParameterData to Dict [#2530]
  • Remove the FrozenDict data sub class [#2532]
  • Remove the Error data sub class [#2529]
  • Make Code a real sub class of Data [#2193]
  • Implement the has_atomic_sites and has_unknown_species properties for the CifData class [#1257]
  • Change default library used in _get_aiida_structure (converting CifData to StructureData) from ase to pymatgen [#1257]
  • Add converter for UpfData from UPF to JSON format [#3308]
  • Fix potential inefficiency in aiida.tools.data.cif converters [#3098]
  • Fix bug in KpointsData.reciprocal_cell() [#2779]
  • Improve robustness of parsing versions and element names from UPF files [#2296]

Verdi command line interface

  • Migrate verdi to the click infrastructure [#1795]
  • Add a default user to AiiDA configuration, eliminating the need to retype user information for every new profile [#2734]
  • Implement tab-completion for profile in the -p option of verdi [#2345]
  • Homogenize the interface of verdi quicksetup and verdi setup [#1797]
  • Add the option --version to verdi to display current version [#1811]
  • verdi computer configure can now read inputs from a yaml file through the --config option [#2951]

External database importers

  • Add importer class for the Materials Platform of Data Science API, which hosts the Pauling file data [#1238]
  • Add an importer class for the Materials Project API [#2097]

Database

  • Add an index to columns of DbLink for SqlAlchemy [#2561]
  • Create unique constraint and indexes on the db_dbgroup_dbnodes table for SqlAlchemy [#1680]
  • Performance improvement for adding nodes to group [#1677]
  • Make UUID columns unique in SqlAlchemy [#2323]
  • Allow PostgreSQL connections via unix sockets [#1721]
  • Drop the unused nodeversion and public columns from the node table [#2937]
  • Drop various unused columns from the user table [#2944]
  • Drop the unused transport_params column from the computer table [#2946]
  • Drop the DbCalcState table [#2198]
  • [Django]: migrate the node attribute and extra schema to use JSONB, greatly improving storage and querying efficiency [#3090]
  • [SqlAlchemy]: Improve speed of node attribute and extra deserialization [#3090]

Export and Import

  • Implement the exporting and importing of node extras [#2416]
  • Implement the exporting and importing of comments [#2413]
  • Implement the exporting and importing of logs [#2393]
  • Add export_parameters to the metadata.json in archive files [#3386]
  • Simplify the data format of export archives, greatly reducing file size [#3090]
  • verdi import automatically migrates archive files of old formats [#2820]

Miscellaneous

  • Refactor unit test managers and add basic fixtures for pytest [#3319]
  • REST API v4: updates to conform with aiida-core==1.0.0 [#3429]
  • Improve decorators using the wrapt library such that function signatures are properly maintained [#2991]
  • Allow empty enabled and disabled keys in caching configuration [#3330]
  • AiiDA now enforces UTF-8 encoding for text output in its files and databases. [#2107]

Backwards-incompatible changes (only a sub-set)

  • Remove aiida.tests and obsolete aiida.storage.tests.test_parsers entry point group [#2778]
  • Implement new link types [#2220]
  • Rename the type strings of Groups and change the attributes name and type to label and type_string [#2329]
  • Make various protected Node methods public [#2544]
  • Rename DbNode.type to DbNode.node_type [#2552]
  • Rename the ORM classes for Node sub classes JobCalculation, WorkCalculation, InlineCalculation and FunctionCalculation [#2184][#2189][#2192][#2195][#2201]
  • Do not allow the copy or deepcopy of Node, except for Data nodes [#1705]
  • Remove aiida.control and aiida.utils top-level modules; reorganize aiida.common, aiida.manage and aiida.tools [#2357]
  • Make the node repository API backend agnostic [#2506]
  • Redesign the Parser class [#2397]
  • [Django]: Remove support for datetime objects from node attributes and extras [#3090]
  • Enforce specific precision in clean_value for floats when computing a node's hash [#3108]
  • Move physical constants from aiida.common.constants to external qe-tools package [#3278]
  • Add type checks to all plugin factories [#3456]
  • Disallow pickle when storing numpy array in ArrayData [#3434]
  • Remove implementation of legacy workflows [#2379]
  • Implement CalcJob process class that replaces the deprecated JobCalculation [#2389]
  • Change the structure of the CalcInfo.local_copy_list [#2581]
  • QueryBuilder: Change 'ancestor_of'/'descendant_of' to 'with_descendants'/'with_ancestors' [#2278]
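
A minimal sketch of the renamed relationship keywords, assuming the post-1.0 import paths (the node classes and tag name are illustrative):

```python
from aiida.orm import QueryBuilder, CalcJobNode, Data

# Assumes a profile is already loaded (e.g. run inside `verdi shell`).
qb = QueryBuilder()
qb.append(CalcJobNode, tag='calc')
# Select Data nodes that have the tagged calculation among their ancestors,
# i.e. its descendants; previously this was spelled descendant_of='calc'.
qb.append(Data, with_ancestors='calc')
results = qb.all()
```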

v0.12.4

Improvements

  • Added new endpoint in the REST API to get the list of distinct node types [#2745]
  • Travis: port the deploy stage from the development branch [#2816]

Minor bug fixes

  • Corrected the graph export set expansion rules [#2632]

Miscellaneous

  • Backport streamlined quick install instructions from provenance_redesign [#2555]
  • Remove useless chainmap dependency [#2799]
  • Add aiida-core version to docs home page [#3058]
  • Docs: add note on increasing work_mem [#2952]

v0.12.3

Improvements

  • Fast addition of nodes to groups with skip_orm=True (see the sketch after this list) [#2471]
  • Add environment.yml for installing dependencies using conda; release of aiida-core on conda-forge channel [#2081]
  • REST API: io tree response now includes link type and node label [#2033] [#2511]
  • Backport postgres improvements for quicksetup [#2433]
  • Backport aiida.get_strict_version (for plugin development) [#2099]
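
A minimal sketch of the faster group insertion, assuming Group.add_nodes accepts the skip_orm flag in this release (import paths for the 0.12 series and the group name are assumptions):

```python
from aiida.orm.group import Group        # import path assumed for the 0.12 series
from aiida.orm.data.base import Int      # any stored nodes will do; Int is just an example

# Run inside `verdi shell` or another context with a loaded profile.
nodes = [Int(i).store() for i in range(1000)]

group, _ = Group.get_or_create(name='bulk_import')
# With skip_orm=True the membership rows are inserted directly at the database
# level, bypassing per-node ORM overhead, which is much faster for many nodes.
group.add_nodes(nodes, skip_orm=True)
```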

Minor bug fixes

  • Fix security vulnerability by upgrading paramiko to 2.4.2 [#2043]
  • Disable caching for inline calculations (broken since move to workfunction-based implementation) [#1872]
  • Let verdi help return exit status 0 [#2434]
  • Decode dict keys only if strings (backport) [#2436]
  • Remove broken verdi-plug entry point [#2356]
  • verdi node delete (without arguments) no longer tries to delete all nodes [#2545]
  • Fix plotting of BandsData objects [#2492]

Miscellaneous

  • REST API: add tests for random sorting list entries of same type [#2106]
  • Add various badges to README [#1969]
  • Minor documentation improvements [#1955]
  • Add license file to MANIFEST [#2339]
  • Add instructions when verdi import fails [#2420]

v0.12.2

Improvements

  • Support the hashing of uuid.UUID types by registering a hashing function [#1861]
  • Add documentation on plugin cutter [#1904]

Minor bug fixes

  • Make exported graphs consistent with the current node and link hierarchy definition [#1764]
  • Fix link import problem under SQLA [#1769]
  • Fix cache folder copying [#1746] [#1752]
  • Fix bug in mixins.py when copying node [#1743]
  • Fix pgtest failures (release-branch) on travis [#1736]
  • Fix plugin test runner: return the test result so that Travis fails when tests do not pass [#1676]

Miscellaneous

  • Remove pycrypto dependency, as it was found to have security flaws [#1754]
  • Set xsf as default format for structures visualization [#1756]
  • Delete unused utils/create_requirements.py file [#1702]

v0.12.1

Improvements

  • Always use a bash login shell to execute all remote SSH commands, overriding any system default shell [#1502]
  • Reduced the size of the distributed package by almost half by removing test fixtures and generating the data on the fly [#1645]
  • Removed the explicit dependency upper limit for scipy [#1492]
  • Resolved various dependency requirement conflicts [#1488]

Minor bug fixes

  • Fixed a bug in verdi node delete that would throw an exception for certain cases [#1564]
  • Fixed a bug in the cif endpoint of the REST API [#1490]

v0.12.0

Improvements

  • Hashing, caching and fast-forwarding [#652]
  • Calculation no longer stores full source file [#1082]
  • Delete nodes via verdi node delete [#1083]
  • Import structures using ASE [#1085]
  • StructureData - pymatgen - StructureData roundtrip works for arbitrary kind names [#1285] [#1306] [#1357]
  • Output format of archive file can now be defined for verdi export migrate [#1383]
  • Automatic reporting of code coverage by unit tests has been added [#1422]

Critical bug fixes

  • Add parser_name to the JobProcess options [#1118]
  • Node attribute reads were not always up to date across interpreters for SqlAlchemy [#1379]

Minor bug fixes

  • Cell vectors not printed correctly [#1087]
  • Fix read-the-docs issues [#1120] [#1143]
  • Fix structure/band visualization in REST API [#1167] [#1182]
  • Fix verdi work list test [#1286]
  • Fix _inline_to_standalone_script in TCODExporter [#1351]
  • Updated reentry to fix various small bugs related to plugin registering [#1440]

Miscellaneous

  • Bump qe-tools version [#1090]
  • Document link types [#1174]
  • Switch to trusty + postgres 9.5 on Travis [#1180]
  • Use raw SQL in sqlalchemy migration of Code [#1291]
  • Document querying of list attributes [#1326]
  • Document running aiida as a daemon service [#1445]
  • Document that Torque and LoadLever schedulers are now fully supported [#1447]
  • Cookbook: how to check the number of queued/running jobs in the scheduler [#1349]

v0.11.4

Improvements

  • PyCifRW upgraded to 4.2.1 [#1073]

Critical bug fixes

  • Persist and load parsed workchain inputs and do not recreate to avoid creating duplicates for default inputs [#1362]
  • Serialize WorkChain context before persisting [#1354]

v0.11.3

Improvements

  • Documentation: AiiDA now has an automatically generated and complete API documentation (using sphinx-apidoc) [#1330]
  • Add JSON schema for connection of REST API to Materials Cloud Explore interface [#1336]

Critical bug fixes

  • FINISHED_KEY and FAILED_KEY variables were not known to AbstractCalculation [#1314]

Minor bug fixes

  • Make 'REST' extra lowercase, such that one can do pip install aiida-core[rest] [#1328]
  • CifData /visualization endpoint was not returning data [#1328]

Deprecations

  • QueryTool (was deprecated in favor of QueryBuilder since v0.8.0) [#1330]

Miscellaneous

  • Add gource config for generating a video of development history [#1337]

v0.11.2

Critical bug fixes

  • Link types were not respected in Node.get_inputs for SqlAlchemy [#1271]

v0.11.1

Improvements

  • Support visualization of structures and cif files with VESTA [#1093]
  • Better fallback when node class is not available [#1185]
  • CifData now supports faster parsing and lazy loading [#1190]
  • REST endpoint for CifData, API reports full list of available endpoints [#1228]
  • Various smaller improvements [#1100] [#1182]

Critical bug fixes

  • Restore attribute immutability in nodes [#1111]
  • Fix daemonization issue that could cause aiida daemon to be killed [#1246]

Minor bug fixes

v0.11.0

Improvements

Core entities

  • Computer: the shebang line is now customizable [#940]
  • KpointsData: deprecate buggy legacy implementation of k-point generation in favor of Seekpath [#1015]
  • Dict: to_aiida_type used on a plain dictionary now automatically converts it to a Dict node (see the sketch after this list) [#947]
  • JobCalculation: parsers can now specify files that are retrieved locally for parsing, but only temporarily, as they are deleted after parsing is completed [#886] [#894]
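
A minimal sketch of the automatic conversion; the import path shown is the modern one and is an assumption for this release:

```python
from aiida.orm import to_aiida_type  # modern import path (assumption; older releases may differ)

# Assumes a profile is already loaded (e.g. run inside `verdi shell`).
# A plain Python dict is converted to a Dict node automatically.
node = to_aiida_type({'energy_cutoff': 30.0, 'smearing': 'gaussian'})
print(type(node).__name__)  # Dict
```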

Plugins

  • Plugin data hooks: plugins can now add custom commands to verdi data [#993]
  • Plugin fixtures: simple-to-use decorators for writing tests of plugins [#716] [#865]
  • Plugin development: no longer swallow ImportError exception during import of plugins [#1029]

Verdi

  • verdi shell: improve tab completion of imports [#1008]
  • verdi work list: add support for projections [#847]

Miscellaneous

  • Supervisor removal: dependency on unix-only supervisor package removed [#790]
  • REST API: add server info endpoint, structure endpoint can return different file formats [#878]
  • REST API: update endpoints for structure visualization, calculation (includes retrieved input & output list), add endpoints for UpfData and more [#977] [#991]
  • Tests using daemon run faster [#870]
  • Documentation: updated outdated workflow examples [#948]
  • Documentation: updated import/export [#994]
  • Documentation: plugin quickstart [#996]
  • Documentation: parser example [#1003]

Minor bug fixes

  • Fix bug with repository on external hard drive [#982]
  • Fix bug in configuration of pre-commit hooks [#863]
  • Fix and improve plugin loader tests [#1025]
  • Fix broken celery logging [#1033]

Deprecations

  • async from aiida.work.run has been deprecated because it can lead to race conditions and thereby unexpected behavior [#1040]

v0.10.1

Improvements

  • Improved exception handling for loading db tests [#968]
  • verdi work kill on workchains: skip calculation if it cannot be killed, rather than stopping [#980]
  • Remove unnecessary INFO messages of Alembic for SQLAlchemy backend [#1012]
  • Add filter to suppress unnecessary log messages during testing [#1014]

Critical bug fixes

  • Fix bug in verdi quicksetup on Ubuntu 16.04 and add regression tests to catch similar problems in the future [#976]
  • Fix bug in verdi data list commands for SQLAlchemy backend [#1007]

Minor bug fixes

  • Various bug fixes related to workflows for the SQLAlchemy backend [#952] [#960]

v0.10.0

Major changes

  • The DbPath table has been removed and replaced with a dynamic transitive closure because, among other reasons, nested workchains could lead to the DbPath table exploding in size

  • Code plugins have been removed from aiida-core and migrated to their own respective plugin repositories.

    Each can be installed with pip, e.g. pip install aiida-quantumespresso. Existing installations will require a migration (see the update instructions in the documentation). For a complete overview of available plugins you can visit the plugin registry.

Improvements

  • A new entry retrieve_temporary_list in CalcInfo allows files to be retrieved temporarily for parsing, without having to store them permanently in the repository (see the sketch after this list) [#903]
  • New verdi command: verdi work kill to kill running workchains [#821]
  • New verdi command: verdi data remote [ls,cat,show] to inspect the contents of RemoteData objects [#743]
  • New verdi command: verdi export migrate allows the migration of existing export archives to new formats [#781]
  • New verdi command: verdi profile delete [#606]
  • Implemented a new option -m for the verdi work report command to limit the number of nested levels to be printed [#888]
  • Added a running field to the output of verdi work list to give the current state of the workchains [#888]
  • Implemented faster query to obtain database statistics [#738]
  • Added testing for automatic SqlAlchemy database migrations through alembic [#834]
  • Exceptions that are triggered in steps of a WorkChain are now properly logged to the Node making them visible through verdi work report [#908]
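
A minimal sketch of how a calculation plugin might fill in the new CalcInfo entry (the file patterns are illustrative, not taken from aiida-core):

```python
from aiida.common.datastructures import CalcInfo

calcinfo = CalcInfo()
calcinfo.retrieve_list = ['aiida.out']          # kept permanently in the node repository
calcinfo.retrieve_temporary_list = ['*.wfc*']   # made available to the parser, then deleted
```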

Critical bug fixes

  • Export will now write the link types to the archive and import will properly recreate the link [#760]
  • Fix bug in workchain persistence that would lead to crashed workchains under certain conditions being resubmitted [#728]
  • Fix bug in the pickling of WorkChain instances containing an _if logical block in the outline [#904]

Minor bug fixes

  • The logger for subclasses of AbstractNode is now properly namespaced to aiida. such that it works in plugins outside of the aiida-core source tree [#897]
  • Fixed a problem with the states of the direct scheduler that was causing the daemon process to hang during submission [#879]
  • Various bug fixes related to the old workflows in combination with the SqlAlchemy backend [#889] [#898]
  • Fixed bug in the TCODExporter [#761]
  • verdi profile delete now respects the configured dbport setting [#713]
  • Restore correct help text for verdi --help [#704]
  • Fixed query in the ICSD importer element that caused certain structures to be erroneously skipped [#690]

Miscellaneous

v0.9.1

Critical bug fixes

  • Workchain steps will no longer be executed multiple times due to process pickles not being locked

Minor bug fixes

  • Fix arithmetic operations for basic numeric types
  • Fixed verdi calculation cleanworkdir after changes in QueryBuilder syntax
  • Fixed verdi calculation logshow exception when called for WorkCalculation nodes
  • Fixed verdi import for SQLAlchemy profiles
  • Fixed bug in reentry and update dependency requirement to v1.0.2
  • Made octal literal string compatible with python 3
  • Fixed broken import in the ASE plugin

Improvements

  • verdi calculation show now properly distinguishes between WorkCalculation and JobCalculation nodes
  • Improved error handling in verdi setup --non-interactive
  • Disable unnecessary console logging for tests

v0.9.0

Data export functionality

  • A number of new functionalities have been added to export band structures to various formats, including gnuplot, matplotlib (either as a Python script or directly as PNG or PDF, with or without LaTeX typesetting), JSON, and improved agr (xmgrace) output. Two-color bands for collinear magnetic systems are also supported, and export-format-specific parameters can now be specified.
  • Added the method get_export_formats() to query the available export formats for a given data subclass (see the sketch after this list)
  • Added label prettifiers to properly typeset high-symmetry k-point labels for different formats (simple/old format, seekpath, ...) into a number of plotting codes (xmgrace, gnuplot, latex, ...)
  • Improved the command-line export functionality (more options, the possibility to write directly to file, the possibility to pass custom options to the exporter), in part by removing its DbPath dependency
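
A minimal sketch of querying the available formats for a data subclass (the class, import path and example output are illustrative assumptions):

```python
from aiida.orm import BandsData  # modern import path (assumption for the 0.9 series)

# Returns the export formats registered for this Data subclass.
print(BandsData.get_export_formats())
# e.g. ['agr', 'dat_blocks', 'gnuplot', 'json', 'mpl_pdf', 'mpl_png', ...]
```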

Workchains

  • Crucial bug fix: workchains can now be run through the daemon, i.e. by using aiida.work.submit
  • Enhancement: added an abort and abort_nowait method to WorkChain, which allow aborting the workchain at the earliest possible moment
  • Enhancement: added the report method to WorkChain, which allows a workchain developer to log messages to the database
  • Enhancement: added command verdi work report which for a given pk returns the messages logged for a WorkChain through the report method
  • Enhancement: workchain input ports with a valid default no longer require explicitly setting required=False; it is overridden automatically

New plugin system

  • A new plugin system has been implemented, allowing AiiDA entry points to be loaded, and working in parallel with the old system (still experimental, though: command line entry points are not fully implemented yet)
  • Support for the plugin registry

Code refactoring

  • Refactoring of Node to move as much as possible of the caching code into the abstract class
  • Refactoring of Data nodes to have the export code in the topmost class, and to make it more general also for formats exporting more than one file
  • Refactoring of a number of Data subclasses to support the new export API
  • Refactoring of BandsData to have export code not specific to xmgrace or a given format, and to make it more general

Documentation

  • General improvements to documentation
  • Added documentation to upgrade AiiDA from v0.8.0 to v0.9.0
  • Added documentation of new plugin system and tutorial
  • Added more in-depth documentation on how to export data nodes to various formats
  • Added explanation on how to export band structures and available formats
  • Added documentation on how to run tests in developer's guide
  • Documented Latex requirements
  • Updated WorkChain documentation for WaitingEquationOfState example
  • Updated AiiDA installation documentation for installing virtual environment
  • Updated documentation to use Jupyter

Enhancements

  • Sped up the Travis build process by caching pip files between runs
  • A node can now be loaded by passing just the start of its UUID (see the sketch after this list)
  • Handled invalid verdi command line arguments and added corresponding help texts
  • Upgraded Paramiko to 2.1.2 and avoided creating an empty file when the remote connection fails
  • The verdi calculation kill command is now available for the SGE plugin
  • Updated Plum from 0.7.8 to 0.7.9 to correctly create workchain inputs that have a default value evaluating to false
  • QueryBuilder is now imported by default for all verdi commands
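
A minimal sketch of loading a node from a UUID prefix; the import path is the modern one and the UUID fragment is illustrative:

```python
from aiida.orm import load_node  # modern import path (assumption for the 0.9 series)

# Assumes a profile is already loaded (e.g. run inside `verdi shell`).
# Any unambiguous leading substring of the UUID is enough to load the node.
node = load_node('2b9e1f1a')
```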

Bug Fixes

  • Bug fixes in QE input parser
  • Code.get() method accepts the pk in integer or string format whereas Code.get_from_string() method accepts pk only in string format
  • verdi code show command now shows the description of the code
  • Bug fix to check if computer is properly configured before submitting the calculation

Miscellaneous

  • Replaced the dependency on the old unmaintained pyspglib with the new spglib
  • Accept BaseTypes as attributes/extras and convert them automatically to their value. In this way it is now possible, for instance, to pass an Int, Float, Str, ... as the value of a dictionary entry and store everything into a Dict (see the sketch after this list).
  • Switched from pkg_resources to reentry to allow much faster loading of modules when possible, which in turn gives good speed for bash completion
  • Removed obsolete code for Sqlite
  • Removed mayavi2 package from dependencies
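
A minimal sketch of storing base types inside a dictionary node, using the modern class names as an assumption for this release:

```python
from aiida.orm import Dict, Int, Float, Str  # modern class names (assumption for this release)

# Assumes a profile is already loaded (e.g. run inside `verdi shell`).
# Base-type nodes are unwrapped to their plain Python value when stored.
node = Dict(dict={'nsteps': Int(100), 'tolerance': Float(1e-6), 'mode': Str('scf')})
print(node.get_dict())  # {'nsteps': 100, 'tolerance': 1e-06, 'mode': 'scf'}
```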

v0.8.1

Exporters

  • Upgraded the TCODExporter to produce CIF files, conforming to the newest (as of 2017-04-26) version of cif_tcod.dic.

General

  • Added dependency on six to properly re-raise exceptions

v0.8.0

Installation and setup

  • Simplified installation procedure by adopting standard python package installation method through setuptools and pip
  • Verdi install replaced by verdi setup
  • New verdi command quicksetup to simplify the setup procedure
  • Significantly updated and improved the installation documentation

General

  • Significantly increased test coverage and implemented for both backends
  • Activated continuous integration through Travis CI
  • Application-wide logging is now abstracted and implemented for all backends
  • Added a REST API layer with hook through verdi cli: verdi restapi
  • Improved QueryBuilder
    • Composition model instead of inheritance, removing the requirement of determining the implementation on import
    • Added keyword with_dbpath that makes QueryBuilder switch between using the DbPath table and not using it.
    • Updated and improved documentation
  • The QueryTool as well as the class Node.query() method are now deprecated in favor of the QueryBuilder
  • Migration of verdi cli to use the QueryBuilder in order to support both database backends
  • Added option --project to verdi calculation list to specify which attributes to print

Documentation

  • Documentation is restructured to improve navigability
  • Added pseudopotential tutorial

Database

  • Dropped support for MySQL and SQLite to fully support efficient features in Postgres like JSONB fields
  • Database efficiency improvements with orders of magnitude speedup for large databases [added indices for daemon queries and node UUID queries]
  • Replace deprecated commit_on_success with atomic for Django transactions
  • Change of how SQLAlchemy internally uses the session and the engine to work also with forks (e.g. in celery)

Workflows

  • Finalized the naming for the new workflow system from workflows2 to work
    • FragmentedWorkFunction is replaced by WorkChain
    • InlineCalculation is replaced by Workfunction
    • ProcessCalculation is replaced by WorkCalculation
  • Old style Workflows can still be called and run from a new style WorkChain
  • Major improvements to the WorkChain and Workfunction implementation
  • Improvements to WorkChain
    • Implemented a return statement for WorkChain specification
    • Logging to the database implemented through WorkChain.report() for debugging
  • Improved the kill command for old-style workflows to avoid steps remaining in the running state

Plugins

  • Added finer granularity for parsing PW timers in output
  • New Quantum ESPRESSO and scheduler plugins contributed from EPFL
    • ASE/GPAW plugins: Andrea Cepellotti (EPFL and Berkeley)
    • Quantum ESPRESSO DOS, Projwfc: Daniel Marchand (EPFL and McGill)
    • Quantum ESPRESSO phonon, matdyn, q2r, force constants plugins: Giovanni Pizzi, Nicolas Mounet (EPFL); Andrea Cepellotti (EPFL and Berkeley)
    • Quantum ESPRESSO cp.x plugin: Giovanni Pizzi (EPFL)
    • Quantum ESPRESSO neb.x plugin: Marco Gibertini (EPFL)
    • LSF scheduler: Nicolas Mounet (EPFL)
  • Implemented functionality to export and visualize molecular dynamics trajectories (using e.g. matplotlib, mayavi)
  • Improved the TCODExporter (some fixes to adapt to changes of external libraries, added some additional TCOD CIF tags, various bugfixes)

Various fixes and improvements

  • Fix for the direct scheduler on Mac OS X
  • Fix for the import of computers with name collisions
  • Generated backup scripts are now made profile specific and saved as start_backup_<profile>.py
  • Fix for the vary_rounds warning

v0.7.1

Functionalities

  • Implemented support for Kerberos authentication in the ssh transport plugin.
  • Added _get_submit_script_footer to scheduler base class.
  • Improvements of the SLURM scheduler plugin.
  • Fully functional parsers for Quantum ESPRESSO CP and PW.
  • Better parsing of atomic species from PW output.
  • Array classes for projection & xy, and changes in kpoints class.
  • Added code-specific tools for Quantum ESPRESSO.
  • verdi code list now shows local codes too.
  • verdi export can now export non user-defined groups (from their pk).

Fixes

  • Fixed bugs in (old) workflow manager and daemon.
  • Improvements of the efficiency of the (old) workflow manager.
  • Fixed JobCalculation text prepend with multiple codes.

v0.7.0

This release introduces a lot of significant changes and enhancements.

We worked on our new backend and now AiiDA can be installed using SQLAlchemy too. Many of the verdi commands and functionalities have been tested and are working with this backend. The full JSON support provided by SQLAlchemy and the latest versions of PostgreSQL enables a significant speed increase in attribute-related queries. The SQLAlchemy backend is a beta option, since a few remaining functionalities and commands still need to be implemented or improved for this backend. Scripts are provided for the transition of databases from the Django backend to the SQLAlchemy backend.

In this release we have included a new querying tool called QueryBuilder. It is a powerful tool allowing users to write complex graph queries to explore the AiiDA graph database. It provides various features like selection of entity properties, filtering of results, and combination of entities on specific properties, as well as various ways to obtain the final result. It also provides users with an abstract way to query their data without forcing them to write backend-dependent queries.

Last but not least, we have included a new workflow engine (in beta version), which is available through the verdi workflow2 command. The new workflows are easier to write (it is as close to writing plain Python as possible), short-running tasks mix seamlessly with long-running (remote) tasks, and users are encouraged to write reusable workflows. Moreover, debugging of workflows has been made easier and is possible both in-IDE and through logging.

List of changes:

  • Installation procedure works with SQLAlchemy backend too (SQLAlchemy option is still in beta).
  • Most of the verdi commands work with SQLAlchemy backend.
  • Transition script from Django schema of version 0.7.0 to SQLAlchemy schema of version 0.7.0.
  • AiiDA daemon redesigned and working with both backends (Django & SQLAlchemy).
  • Introduced a new workflow engine that allows better debugging and makes workflows easier to write. It is available under the verdi workflows2 command. Examples are also added.
  • Old workflows are still supported and available under the "verdi workflow" command.
  • Introduced a new querying tool called QueryBuilder. It allows users to easily write complex graph queries that are executed on the AiiDA graph database. Extensive documentation has also been added.
  • Unified the behaviour of verdi commands in both backends.
  • Upgraded plum to version 0.4.2 (needed for workflows2).
  • Implemented the validator and input helper for Quantum ESPRESSO pw.x.
  • Improved the documentation for the pw (and cp) input plugins (for all the flags in the Settings node).
  • Fixed a wrong behavior in the QE pw/cp plugins when checking the parser options and checking whether there were further unknown flags in the Settings node. However, this does not yet completely solve the problem (see issue #219).
  • Added elements with Z=104-112, 114 and 116, in aiida.common.constants.
  • Added the method set_kpoints_mesh_from_density to the KpointsData class (see the sketch after this list).
  • Improved incremental backup documentation.
  • Added backup related tests.
  • Added an option to test_pw.py to run also in serial.
  • SSH transport, to connect to remote computers via SSH/SFTP.
  • Support for the SGE and SLURM schedulers.
  • Support for Quantum ESPRESSO Car-Parrinello calculations.
  • Support for data nodes to store electronic bands, phonon dispersion and generally arrays defined over the Brillouin zone.
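
A minimal sketch of the new KpointsData method, using modern import paths as an assumption (the cell, species and spacing are illustrative):

```python
from aiida import load_profile
from aiida.orm import KpointsData, StructureData  # modern import paths (assumption for this release)

load_profile()

# A simple cubic cell, just to have a structure whose cell can be reused.
structure = StructureData(cell=[[4.0, 0.0, 0.0], [0.0, 4.0, 0.0], [0.0, 0.0, 4.0]])
structure.append_atom(position=(0.0, 0.0, 0.0), symbols='Si')

kpoints = KpointsData()
kpoints.set_cell_from_structure(structure)
kpoints.set_kpoints_mesh_from_density(0.2)   # target spacing between k-points, in 1/angstrom
print(kpoints.get_kpoints_mesh())            # e.g. ([8, 8, 8], [0.0, 0.0, 0.0])
```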

v0.6.0

We performed a lot of changes in order to introduce, in one of our following releases, a second object-relational mapper (we will refer to it as a back-end) for the management of the DBMSs used, and more specifically of PostgreSQL. SQLAlchemy and the latest version of PostgreSQL allow AiiDA to store JSON documents directly in the database and also to query them. Moreover, JSON query optimization is left to the database, including the use of JSON-specific indexes. There was major code restructuring to accommodate the new back-end, resulting in the abstraction of many classes of the orm package of AiiDA.

Even though most of the needed restructuring and code additions have been finished, a bit more work is needed. Therefore, even in this version, Django is the only back-end available to the end user.

However, users have to update their AiiDA configuration files by executing the migration script that can be found at YOUR_AIIDA_DIR/aiida/common/additions/migration.py, as the Linux user that installed AiiDA on the system (e.g. python YOUR_AIIDA_DIR/aiida/common/additions/migration.py).

List of changes:

  • Added back-end selection; the SQLAlchemy option is disabled for the moment.
  • Migration scripts for the configuration files of AiiDA (SQLAlchemy support).
  • Enriched link description in the database (to enrich the provenance model).
  • Corrections for numpy array and cell. List will be used with cell.
  • Fixed backend import; verdi commands load the needed backend as late as possible.
  • Abstraction of the basic AiiDA orm classes (like node, computer, data etc). This is needed to support different backends (e.g. Django and SQLAlchemy).
  • Fixes on the structure import from QE-input files.
  • SQLAlchemy and Django benchmarks.
  • UltraJSON support.
  • requirements.txt now also includes SQLAlchemy and its dependencies.
  • Recursive way of loading JSON for SQLAlchemy.
  • Improved way of accessing calculations and workflows attached to a workflow step.
  • Added methods to programmatically create new codes and computers.

v0.5.0

General

  • Final paper published, ref: G. Pizzi, A. Cepellotti, R. Sabatini, N. Marzari, and B. Kozinsky, AiiDA: automated interactive infrastructure and database for computational science, Comp. Mat. Sci 111, 218-230 (2016)
  • Core, concrete requirements are kept in requirements.txt and optional ones are moved to optional_requirements.txt
  • Schema change to v1.0.2: got rid of calc_states.UNDETERMINED

Import/export, backup and code interaction

  • [non-back-compatible] Multiple codes can now be executed in the same submission script. The plugin interface has changed and requires adaptation of the code plugins.
  • Added import support for XYZ files
  • Added support for van der Waals table in QE input
  • Restart QE calculations without using the scratch directory, by using a copy of the parent calculation
  • Added a database importer for the NNIN/C Pseudopotential Virtual Vault
  • Implemented conversion of pymatgen Molecule lists to AiiDA's TrajectoryData
  • Added a converter from pymatgen Molecule to AiiDA StructureData
  • Queries are now much faster when exporting
  • Added an option to export a zip file
  • Added backup scripts for efficient incremental backup of large AiiDA repositories

API

  • Added the possibility to add any kind of Django query in Group.query
  • Added TCOD (Theoretical Crystallography Open Database) importer and exporter
  • Added option to sort by a field in the query tool
  • Implemented selection of data nodes and calculations by group
  • Added NWChem plugin
  • Change default behaviour of symbolic link copy in the transport plugins: "put"/"get" methods -> symbolic links are followed before copy; "copy" methods -> symbolic links are not followed (copied "as is").

Schedulers

  • Explicit Torque support (some slightly different flags)
  • Improved PBSPro scheduler
  • Added new num_cores_per_machine and num_cores_per_mpiproc fields for pbs and torque schedulers (giving full support for MPI+OpenMP hybrid codes)
  • Direct scheduler added, allowing calculations to be run without batch system (i.e. directly call executable)

verdi

  • Support for profiles added: it allows user to switch between database configurations using the verdi profile command
  • Added verdi data structure import --file file.xyz for importing XYZ
  • Added a verdi data upf exportfamily command (to export an upf pseudopotential family into a folder)
  • Added new functionalities to the verdi group command (show list of nodes, add and remove nodes from the command line)
  • Allowing verdi export command to take group PKs
  • Added ASE as a possible format for visualizing structures from command line
  • Added possibility to export trajectory data in xsf format
  • Added possibility to show trajectory data with xcrysden
  • Added filters on group name in verdi group list
  • Added possibility to load custom modules in the verdi shell (additional property verdishell.modules created; can be set with verdi devel setproperty verdishell.modules)
  • Added verdi data array show command, using json_date serialization to display the contents of ArrayData
  • Added verdi data trajectory deposit command line command
  • Added command options --computer and --code to verdi data * deposit
  • Added a command line option --all-users for verdi data * list to list objects owned by all users