
add amber TI parser and improve the subsampling code #32

Closed. Wants to merge 14 commits.
282 changes: 282 additions & 0 deletions src/alchemlyb/parsing/amber.py
@@ -0,0 +1,282 @@
"""Parsers for extracting alchemical data from amber output files.
Most of the file parsing part are inheriting from alchemical-analysis
Change the final format to pandas to be consistent with the alchemlyb format
"""

import os
import re
import pandas as pd
import numpy as np
import logging
Member:
empty line between standard library imports and package imports ... but this is nit-picking on my part ;-)

Member:
logging would come after re and before numpy because it is standard lib... according to PEP 8.

Collaborator Author:
Got it, pushed a fix.


logger = logging.getLogger("alchemlyb.parsers.Amber")

Member (@orbeckst, Nov 1, 2017):
Use logging instead of print:

import logging

logger = logging.getLogger("alchemlyb.parsers.Amber")

See more comments below.

Note that in order to see the messages, a root logger has to be created elsewhere:

import logging

def create_alchemlyb_logger(logfile="alchemlyb.log", name="alchemlyb"):
    """Create a logger that outputs to screen and to `logfile`"""

    logger = logging.getLogger(name)
    logger.setLevel(logging.DEBUG)

    # handler that writes to logfile
    logfile_handler = logging.FileHandler(logfile)
    logfile_formatter = logging.Formatter('%(asctime)s %(name)-12s %(levelname)-8s %(message)s')
    logfile_handler.setFormatter(logfile_formatter)
    logger.addHandler(logfile_handler)

    # define a Handler which writes INFO messages or higher to the sys.stderr
    console_handler = logging.StreamHandler()
    console_handler.setLevel(logging.INFO)
    # set a format which is simpler for console use
    formatter = logging.Formatter('%(name)-12s: %(levelname)-8s %(message)s')
    console_handler.setFormatter(formatter)
    logger.addHandler(console_handler)

    return logger

rootlogger = create_alchemlyb_logger()

We might add this convenience function elsewhere in the library. I opened issue #34.

Note that you only have to create the top level logger (here called rootlogger) once. Loggers are known globally in the interpreter, so you can "attach" to the root logger from anywhere else just by naming the new logger "alchemlyb.OTHER.STUFF".

P.S.: I wrote the code above for MDAnalysis (in MDAnalysis.lib.log) and I am placing this code snippet into the public domain. (This is necessary because MDAnalysis is under GPL v2 so we cannot just take code from there.)

Collaborator Author:
Yes, will switch to logging instead of printing.

Just want to make sure that I understand the logic correctly, so what I need to do is to add

import logging

logger = logging.getLogger("alchemlyb.parsers.Amber")

at the beginning of the code and start logging like

logging.info("Write some logging info here")

Is that correct?

Member:

Yes, exactly.

(Just be aware that until a logger named "alchemlyb" has been created, no logging will be visible, but that's expected and desired behavior. With #34 I will add code to the library to start logging automatically.)
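The attach-by-name mechanism described above can be sketched in a few lines (a standalone illustration, not code from the PR; the in-memory handler stands in for the file/console handlers shown earlier):

```python
import logging

# create the top-level "alchemlyb" logger once, with an in-memory handler
root = logging.getLogger("alchemlyb")
root.setLevel(logging.DEBUG)

class ListHandler(logging.Handler):
    """Collect formatted records in a list (stand-in for file/console handlers)."""
    def __init__(self):
        super(ListHandler, self).__init__()
        self.records = []

    def emit(self, record):
        self.records.append(self.format(record))

handler = ListHandler()
root.addHandler(handler)

# any module "attaches" simply by naming its logger under "alchemlyb"
logger = logging.getLogger("alchemlyb.parsers.Amber")
logger.info("Write some logging info here")

print(handler.records)   # -> ['Write some logging info here']
```

Because logger names form a dotted hierarchy, the record emitted on "alchemlyb.parsers.Amber" propagates up to the "alchemlyb" logger's handlers without any explicit wiring.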

def convert_to_pandas(file_datum):
    """Convert the data structure from numpy to pandas format."""
    data_dic = {}
    data_dic["dHdl"] = []
    data_dic["lambdas"] = []
    data_dic["time"] = []
    for frame_index, frame_dhdl in enumerate(file_datum.gradients):
        data_dic["dHdl"].append(frame_dhdl)
        data_dic["lambdas"].append(file_datum.clambda)
        # here we need to convert dt from ns to ps
        frame_time = file_datum.t0 + (frame_index + 1) * file_datum.dt * 1000
        data_dic["time"].append(frame_time)
    df = pd.DataFrame(data_dic["dHdl"], columns=["dHdl"],
                      index=pd.Float64Index(data_dic["time"], name='time'))
    df["lambdas"] = data_dic["lambdas"][0]
    df = df.reset_index().set_index(['time'] + ['lambdas'])
    return df
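For illustration, the (time, lambdas) layout that convert_to_pandas() produces can be reproduced standalone (hypothetical gradient values, not from the PR's dataset; pd.Index with a float dtype is used here in place of the older Float64Index):

```python
import pandas as pd

# hypothetical per-frame gradients for one lambda window
gradients = [12.1, 11.8, 12.4]
clambda, t0, dt = 0.5, 0.0, 0.001   # dt in ns

# time axis in ps: t0 + (frame_index + 1) * dt * 1000, as above
times = [t0 + (i + 1) * dt * 1000 for i in range(len(gradients))]

df = pd.DataFrame({"dHdl": gradients},
                  index=pd.Index(times, name="time", dtype=float))
df["lambdas"] = clambda
df = df.reset_index().set_index(["time", "lambdas"])

print(df.index.names)   # ['time', 'lambdas']
print(df.shape)         # (3, 1)
```

The resulting frame has a single dHdl column indexed by a two-level (time, lambdas) MultiIndex, which is what the alchemlyb test below asserts.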

DVDL_COMPS = ['BOND', 'ANGLE', 'DIHED', '1-4 NB', '1-4 EEL', 'VDWAALS',
              'EELEC', 'RESTRAINT']
_FP_RE = r'[+-]?(\d+(\.\d*)?|\.\d+)([eE][+-]?\d+)?'
_MAGIC_CMPR = {
    '\x1f\x8b\x08': ('gzip', 'GzipFile'),  # last byte is compression method
    '\x42\x5a\x68': ('bz2', 'BZ2File')
}

Member:
not needed anymore since the switch to anyopen. Please remove.

Collaborator Author:
removed

def any_none(sequence):
    """Check if any element of a sequence is None."""
    for element in sequence:
        if element is None:
            return True
    return False

Member:
This is looking very good. The only way that you can sensibly improve the coverage is by adding a test for the function

parsing.amber.any_none(sequence)

Basically, make sure that line 42 is hit. Any of the other remaining checks are fairly standard defensive programming and I am not even sure how to test them properly.

Collaborator Author:
Got a test function added for this line.
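A test along the lines the reviewer suggests might look like this (a sketch with the function inlined as a stand-in for parsing.amber.any_none; the test actually added in the PR may differ):

```python
def any_none(sequence):
    """Stand-in for alchemlyb.parsing.amber.any_none."""
    for element in sequence:
        if element is None:
            return True
    return False


def test_any_none():
    # hit the early-return branch flagged by the coverage report
    assert any_none([1.0, None, 3.0]) is True
    # and the fall-through branch
    assert any_none([1.0, 2.0, 3.0]) is False
    assert any_none([]) is False


test_any_none()
```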

def _pre_gen(it, first):
    """A generator that yields `first` first, if it exists."""
    if first:
        yield first
    while it:
        yield it.next()

Member:
For Python 3 this should be

next(it)

(I think)

Collaborator Author:
Yes, changed it to next(it)


class SectionParser(object):
    """
    A simple parser to extract data values from sections.
    """

    def __init__(self, filename):
        """Opens a file according to its file type."""
        self.filename = filename
        with open(filename, 'rb') as f:
            magic = f.read(3)  # NOTE: works because all magic headers are 3 bytes
        try:
            method = _MAGIC_CMPR[magic]
        except KeyError:
            open_it = open
        else:
            open_it = getattr(__import__(method[0]), method[1])
        try:
            self.fileh = open_it(self.filename, 'rb')
            self.filesize = os.stat(self.filename).st_size
        except Exception as ex:
            logging.exception("ERROR: cannot open file %s" % filename)
        self.lineno = 0

Member:
This cannot be used for compressed files (see other comments). I think this line (the self.filesize assignment) can be deleted because it is useless.

Member:
Well, thinking about this again: try it out, maybe it works and I am wrong.

Collaborator Author:
Yes, I think you are right, the filesize is not working here for zipped files; I removed the filesize-related detection in the code.

    def skip_lines(self, nlines):
        """Skip a given number of lines."""
        lineno = 0
        for line in self:
            lineno += 1
            if lineno > nlines:
                return line
        return None

Member:
PEP8 formatting: leave a space between methods.

    def skip_after(self, pattern):
        """Skip until after a line that matches a regex pattern."""
        for line in self:
            match = re.search(pattern, line)
            if match:
                break
        return self.fileh.tell() != self.filesize

Member:
This is going to break when we use anyopen because we never uncompress the file to disk, and thus the file size of the compressed file is not related to the position in the uncompressed stream fileh.

(More specifically: for a compressed file, self.filesize < self.fileh.tell() for some positions, so you might get a random occurrence where this is False in the middle of the file. Conversely, you won't get True at the real end of the file.)

I think this is supposed to check for EOF (end of file). This would need to be implemented differently, depending on how skip_after() is used. Perhaps it can be rewritten along the lines of https://stackoverflow.com/a/24738688/334357 ?

Member:
Well, thinking about this again: try it out, maybe it works and I am wrong.

Collaborator Author:
Yes, I checked: self.filesize < self.fileh.tell() for the zipped file, and the two are not equal at the end of the file, so I changed skip_after() to not use this detection but to explicitly return True if the pattern was found while iterating over the whole file.
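The rewrite the author describes (return True only when the pattern is actually found, with no file-size bookkeeping) can be sketched over any iterable of lines; this is an illustration, not the PR's final code:

```python
import re

def skip_after(lines, pattern):
    """Advance `lines` past the first line matching `pattern`.

    Returns True if the pattern was found, False if the iterator was
    exhausted first; no tell()/filesize comparison, so it also works
    on transparently decompressed streams.
    """
    for line in lines:
        if re.search(pattern, line):
            return True
    return False


content = iter(["header\n", " 2. CONTROL DATA FOR THE RUN\n", "body\n"])
found = skip_after(content, r"CONTROL DATA")
# the matching line is consumed; iteration resumes at "body\n"
```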


    def extract_section(self, start, end, fields, limit=None, extra='',
                        debug=False):
        """
        Extract data values (int, float) in fields from a section
        marked with start and end regexes. Do not read further than
        the limit regex.
        """
        inside = False
        lines = []
        for line in _pre_gen(self, extra):
            if limit and re.search(limit, line):
                break
            if re.search(start, line):
                inside = True
            if inside:
                if re.search(end, line):
                    break
                lines.append(line.rstrip('\n'))
        line = ''.join(lines)
        result = []
        for field in fields:
            match = re.search(r' %s\s+=\s+(\*+|%s|\d+)'
                              % (field, _FP_RE), line)
            if match:
                value = match.group(1)
                # FIXME: assumes fields are only integers or floats
                if '*' in value:  # Fortran format overflow
                    result.append(float('Inf'))
                # NOTE: check if this is a sufficient test for int
                elif '.' not in value and re.search(r'\d+', value):
                    result.append(int(value))
                else:
                    result.append(float(value))
            else:  # section may be incomplete
                result.append(None)
        return result
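The field-extraction regex can be exercised standalone on a typical mdout-style energy line (a sketch; the sample line and the helper name extract_fields are illustrative, not taken from the PR or its dataset):

```python
import re

_FP_RE = r'[+-]?(\d+(\.\d*)?|\.\d+)([eE][+-]?\d+)?'

def extract_fields(line, fields):
    """Pull named numeric fields out of a joined section, as extract_section does."""
    result = []
    for field in fields:
        match = re.search(r' %s\s+=\s+(\*+|%s|\d+)' % (field, _FP_RE), line)
        if match:
            value = match.group(1)
            if '*' in value:          # Fortran format overflow, e.g. '********'
                result.append(float('Inf'))
            elif '.' not in value:    # crude integer test
                result.append(int(value))
            else:
                result.append(float(value))
        else:                         # field missing: section may be incomplete
            result.append(None)
    return result


line = " NSTEP =     1000   TIME(PS) =       2.000  DV/DL  =        12.3456"
print(extract_fields(line, ['NSTEP', 'DV/DL']))   # -> [1000, 12.3456]
```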

    def __iter__(self):
        return self

    def next(self):
        """Read the next line of the filehandle and check for EOF."""
        self.lineno += 1
        curr_pos = self.fileh.tell()
        if curr_pos == self.filesize:
            raise StopIteration
        # NOTE: can't mix next() with seek()
        return self.fileh.readline()

Member:
This will break with compression (see comment above).

Not even sure why this is needed. Can't we just say

def next(self):
    self.lineno += 1
    return next(self.fileh)

This should raise StopIteration at the end, and it should return the line.

Collaborator Author:
Yes, that's a better solution. Sorry, I didn't think about that part much when migrating the code from alchemical-analysis. It looks like this test successfully caught the untested part of the original code!
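The simplified iterator the reviewer suggests, delegating EOF handling to the file handle itself, could be sketched like this (a minimal stand-alone class, with __next__ added so the same code works on Python 3; illustration only):

```python
import io

class LineIterator(object):
    """Minimal sketch of the suggested next(): no tell()/filesize EOF check."""
    def __init__(self, fileh):
        self.fileh = fileh
        self.lineno = 0

    def __iter__(self):
        return self

    def __next__(self):
        self.lineno += 1
        return next(self.fileh)   # raises StopIteration at EOF

    next = __next__   # Python 2 spelling


it = LineIterator(io.StringIO(u"one\ntwo\n"))
print([line.strip() for line in it])   # -> ['one', 'two']
```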

    def close(self):
        """Close the filehandle."""
        self.fileh.close()

    def __enter__(self):
        return self

    def __exit__(self, typ, value, traceback):
        self.close()

class FEData(object):
    """A simple struct container to collect data from individual files."""

    __slots__ = ['clambda', 't0', 'dt', 'T', 'gradients',
                 'component_gradients']

    def __init__(self):
        self.clambda = -1.0
        self.t0 = -1.0
        self.dt = -1.0
        self.T = -1.0
        self.gradients = []
        self.component_gradients = []

def file_validation(outfile):
    """Validate the energy output file."""
    invalid = False
    with SectionParser(outfile) as secp:
        line = secp.skip_lines(5)
        if not line:
            logging.warning(' WARNING: file does not contain any useful data, '
                            'ignoring file')
            invalid = True
        if not secp.skip_after('^ 2. CONTROL DATA FOR THE RUN'):
            logging.warning(' WARNING: no CONTROL DATA found, ignoring file')
            invalid = True
        ntpr, = secp.extract_section('^Nature and format of output:', '^$',
                                     ['ntpr'])
        nstlim, dt = secp.extract_section('Molecular dynamics:', '^$',
                                          ['nstlim', 'dt'])
        T, = secp.extract_section('temperature regulation:', '^$',
                                  ['temp0'])
        if not T:
            logging.error('ERROR: Non-constant temperature MD not '
                          'currently supported')
            invalid = True
        clambda, = secp.extract_section('^Free energy options:', '^$',
                                        ['clambda'], '^---')
        if clambda is None:
            logging.warning(' WARNING: no free energy section found, '
                            'ignoring file')
            invalid = True

        if not secp.skip_after('^ 3. ATOMIC '):
            logging.warning(' WARNING: no ATOMIC section found, ignoring file\n')
            invalid = True

        t0, = secp.extract_section('^ begin time', '^$', ['coords'])
        if not secp.skip_after('^ 4. RESULTS'):
            logging.warning(' WARNING: no RESULTS section found, ignoring file\n')
            invalid = True
    if invalid:
        return False
    file_datum = FEData()
    file_datum.clambda = clambda
    file_datum.t0 = t0
    file_datum.dt = dt
    file_datum.T = T
    return file_datum

def extract_dHdl(outfile):
    """Return gradients `dH/dl` from Amebr TI outputfile

    Parameters
    ----------
    outfile : str
        Path to Amber .out file to extract data from.

    Returns
    -------
    dH/dl : Series
        dH/dl as a function of time for this lambda window.
    """

Member:
typo: Amber

Collaborator Author:
Fixed
    file_datum = file_validation(outfile)
    if not file_datum:
        return None
    finished = False
    comps = []
    with SectionParser(outfile) as secp:
        line = secp.skip_lines(5)
        nensec = 0
        nenav = 0
        old_nstep = -1
        old_comp_nstep = -1
        high_E_cnt = 0
        in_comps = False
        for line in secp:
            if 'DV/DL, AVERAGES OVER' in line:
                in_comps = True
            if line.startswith(' NSTEP'):
                if in_comps:
                    # CHECK the result
                    result = secp.extract_section('^ NSTEP', '^ ---',
                                                  ['NSTEP'] + DVDL_COMPS,
                                                  extra=line)
                    if result[0] != old_comp_nstep and not any_none(result):
                        comps.append([float(E) for E in result[1:]])
                        nenav += 1
                        old_comp_nstep = result[0]
                    in_comps = False
                else:
                    nstep, dvdl = secp.extract_section('^ NSTEP', '^ ---',
                                                       ['NSTEP', 'DV/DL'],
                                                       extra=line)
                    if nstep != old_nstep and dvdl is not None \
                       and nstep is not None:
                        file_datum.gradients.append(dvdl)
                        nensec += 1
                        old_nstep = nstep
            if line == ' 5. TIMINGS\n':
                finished = True
                break
    if not finished:
        logging.warning(' WARNING: prematurely terminated run')
    if not nensec:
        logging.warning(' WARNING: File %s does not contain any DV/DL data\n' %
                        outfile)
    logging.info('%i data points, %i DV/DL averages' % (nensec, nenav))
    # at this step the info for a given Amber out file is stored in the
    # FEData object
    file_datum.component_gradients.extend(comps)
    # convert file_datum to the pandas format to make it identical to the
    # alchemlyb output format
    df = convert_to_pandas(file_datum)
    return df

# currently just check the code with a simple Amber TI output file
# likely to switch to the alchemtest framework with more testing cases
if __name__ == "__main__":
    dataset = "./amber_dataset/ti-0.00.out"
    df = extract_dHdl(dataset)
    print "Check the df", df

Member (@orbeckst, Nov 3, 2017):
remove the whole main section (also, the print uses Python 2.7 syntax, but we run under Python 2 and Python 3; this makes Travis fail at the moment).

Collaborator Author:
Got it, the main section is removed now.
21 changes: 21 additions & 0 deletions src/alchemlyb/tests/parsing/test_amber.py
@@ -0,0 +1,21 @@
"""Amber parser tests.
Copy link
Member

Choose a reason for hiding this comment

The reason will be displayed to describe this comment to others. Learn more.

We also like to have a test with at least one of the estimators. Can you add an Amber test to test_fep_estimators.py?


"""

from alchemlyb.parsing.amber import extract_dHdl
from alchemtest.amber import load_simplesolvated


def test_dHdl():
Copy link
Member

Choose a reason for hiding this comment

The reason will be displayed to describe this comment to others. Learn more.

good start

"""Test that dHdl has the correct form when extracted from files.

"""
dataset = load_simplesolvated()

for leg in dataset['data']:
for filename in dataset['data'][leg]:
dHdl = extract_dHdl(filename,)

assert dHdl.index.names == ['time', 'lambdas']
assert dHdl.shape == (500, 1)

44 changes: 44 additions & 0 deletions src/alchemlyb/tests/test_ti_estimators_amber.py
@@ -0,0 +1,44 @@
"""Tests for all TI-based estimators in ``alchemlyb``.

"""
import pytest

import pandas as pd

from alchemlyb.parsing import amber
from alchemlyb.estimators import TI
import alchemtest.amber


def amber_simplesolvated_charge_dHdl():
dataset = alchemtest.amber.load_simplesolvated()

dHdl = pd.concat([amber.extract_dHdl(filename)
for filename in dataset['data']['charge']])

return dHdl

def amber_simplesolvated_vdw_dHdl():
dataset = alchemtest.amber.load_simplesolvated()

dHdl = pd.concat([amber.extract_dHdl(filename)
for filename in dataset['data']['vdw']])

return dHdl


class TIestimatorMixin:

@pytest.mark.parametrize('X_delta_f', ((amber_simplesolvated_charge_dHdl(), -60.114),
(amber_simplesolvated_vdw_dHdl(), 3.824)))
def test_get_delta_f(self, X_delta_f):
est = self.cls().fit(X_delta_f[0])
delta_f = est.delta_f_.iloc[0, -1]
assert X_delta_f[1] == pytest.approx(delta_f, rel=1e-3)

class TestTI(TIestimatorMixin):
"""Tests for TI.

"""
cls = TI