add amber TI parser and improve the subsampling code #32
@@ -0,0 +1,264 @@
"""Parsers for extracting alchemical data from Amber output files.

Most of the file-parsing code is inherited from alchemical-analysis.
The final format is changed to pandas to be consistent with the
alchemlyb format.
"""

import os
import re
import logging

import pandas as pd
import numpy as np

from util import anyopen
Review comment: Needs to be `from .util import anyopen`.
Reply: Oops, yes, fixed this.
logger = logging.getLogger("alchemlyb.parsers.Amber")


def convert_to_pandas(file_datum):
    """Convert the data structure from numpy to pandas format."""
    data_dic = {}
    data_dic["dHdl"] = []
    data_dic["lambdas"] = []
    data_dic["time"] = []
    for frame_index, frame_dhdl in enumerate(file_datum.gradients):
        data_dic["dHdl"].append(frame_dhdl)
        data_dic["lambdas"].append(file_datum.clambda)
        # here we need to convert dt from ns to ps units
        frame_time = file_datum.t0 + (frame_index + 1) * file_datum.dt * 1000
        data_dic["time"].append(frame_time)
    df = pd.DataFrame(data_dic["dHdl"], columns=["dHdl"],
                      index=pd.Float64Index(data_dic["time"], name='time'))
    df["lambdas"] = data_dic["lambdas"][0]
    df = df.reset_index().set_index(['time'] + ['lambdas'])
    return df
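The frame this function builds can be sketched with made-up numbers (the gradients, `clambda`, `t0`, and `dt` below are illustrative, not from the PR; a plain float `Index` is used here in place of `pd.Float64Index`, which behaves the same for this purpose):

```python
import pandas as pd

# Made-up stand-ins for the FEData fields used by convert_to_pandas():
gradients = [9.8, 10.1, 10.0]        # dH/dl per frame
clambda, t0, dt = 0.5, 0.0, 0.001    # lambda, start time (ps), dt (ns)

times = [t0 + (i + 1) * dt * 1000 for i in range(len(gradients))]  # ns -> ps
df = pd.DataFrame({"dHdl": gradients},
                  index=pd.Index(times, dtype=float, name="time"))
df["lambdas"] = clambda
df = df.reset_index().set_index(["time", "lambdas"])
print(list(df.index.names))  # ['time', 'lambdas']
```

The result is one `dHdl` column indexed by `(time, lambdas)`, which is the shape the tests below assert on.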
DVDL_COMPS = ['BOND', 'ANGLE', 'DIHED', '1-4 NB', '1-4 EEL', 'VDWAALS',
              'EELEC', 'RESTRAINT']
_FP_RE = r'[+-]?(\d+(\.\d*)?|\.\d+)([eE][+-]?\d+)?'
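A quick sanity check of `_FP_RE`: it is meant to cover the integer, decimal, and exponent forms that floating-point values take in Amber output.

```python
import re

_FP_RE = r'[+-]?(\d+(\.\d*)?|\.\d+)([eE][+-]?\d+)?'

# Integers, decimals, and exponent notation all match; plain text does not:
matches = {text: bool(re.fullmatch(_FP_RE, text))
           for text in ['-12.5', '3.2E-05', '42', '.75', 'abc']}
print(matches)
```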
def any_none(sequence):
    """Check if any element of a sequence is None."""
    for element in sequence:
        if element is None:
            return True
Review comment: This is looking very good. The only way that you can sensibly improve the coverage is by adding a test for the function parsing.amber.any_none(sequence); basically, make sure that line 42 is hit. The other remaining checks are fairly standard defensive programming and I am not even sure how to test them properly.
Reply: Got a test function added for this line.
    return False
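A test along the lines the reviewer requested could look roughly like this (a sketch, not necessarily the test that was actually added to the PR):

```python
def any_none(sequence):
    """Check if any element of a sequence is None."""
    for element in sequence:
        if element is None:
            return True
    return False

# Hitting both branches, including the early return the reviewer mentioned:
assert any_none([1, 2, None, 4]) is True
assert any_none([1, 2, 3]) is False
print('any_none behaves as expected')
```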


def _pre_gen(it, first):
    """A generator that yields `first` first, if it exists."""
    if first:
        yield first
    while it:
        yield next(it)
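`_pre_gen` re-injects an already-read line ahead of the rest of a stream. A small demonstration (note that on modern Python, letting `next(it)` raise `StopIteration` inside the generator surfaces as a `RuntimeError` under PEP 479, so this sketch only takes as many items as the stream holds):

```python
from itertools import islice

def _pre_gen(it, first):
    """A generator that yields `first` first, if it exists."""
    if first:
        yield first
    while it:
        yield next(it)

lines = iter(['line 1\n', 'line 2\n'])
# Re-inject a previously consumed header line before the remaining stream:
result = list(islice(_pre_gen(lines, 'header\n'), 3))
print(result)  # ['header\n', 'line 1\n', 'line 2\n']
```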
class SectionParser(object):
    """A simple parser to extract data values from sections."""

    def __init__(self, filename):
        """Open a file according to its file type."""
        self.filename = filename
        try:
            self.fileh = anyopen(self.filename, 'r')
        except Exception:
            logging.exception("ERROR: cannot open file %s" % filename)
        self.lineno = 0

    def skip_lines(self, nlines):
Review comment: PEP8 formatting: leave a space between methods.
        """Skip a given number of lines."""
        lineno = 0
        for line in self:
            lineno += 1
            if lineno > nlines:
                return line
        return None

    def skip_after(self, pattern):
Review comment: PEP8 formatting: leave a space between methods.
        """Skip until after a line that matches a regex pattern."""
        found_pattern = False
        for line in self:
            match = re.search(pattern, line)
            if match:
                found_pattern = True
                break
        return found_pattern

    def extract_section(self, start, end, fields, limit=None, extra='',
Review comment: PEP8 formatting: leave a space between methods.
                        debug=False):
        """
        Extract data values (int, float) in fields from a section
        marked with start and end regexes. Do not read further than
        the limit regex.
        """
        inside = False
        lines = []
        for line in _pre_gen(self, extra):
            if limit and re.search(limit, line):
                break
            if re.search(start, line):
                inside = True
            if inside:
                if re.search(end, line):
                    break
                lines.append(line.rstrip('\n'))
        line = ''.join(lines)
        result = []
        for field in fields:
            match = re.search(r' %s\s+=\s+(\*+|%s|\d+)'
                              % (field, _FP_RE), line)
            if match:
                value = match.group(1)
                # FIXME: assumes fields are only integers or floats
                if '*' in value:  # Fortran format overflow
                    result.append(float('Inf'))
                # NOTE: check if this is a sufficient test for int
                elif '.' not in value and re.search(r'\d+', value):
                    result.append(int(value))
                else:
                    result.append(float(value))
            else:  # section may be incomplete
                result.append(None)
        return result

    def __iter__(self):
Review comment: PEP8 formatting: leave a space between methods.
        return self

    def next(self):
Review comment: PEP8 formatting: leave a space between methods.
        """Read the next line of the file handle and check for EOF."""
        self.lineno += 1
        return next(self.fileh)

    # make compatible with Python 3.6
    __next__ = next

    def close(self):
Review comment: PEP8 formatting: leave a space between methods.
        """Close the file handle."""
        self.fileh.close()

    def __enter__(self):
Review comment: PEP8 formatting: leave a space between methods.
        return self

    def __exit__(self, typ, value, traceback):
Review comment: PEP8 formatting: leave a space between methods.
        self.close()
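The field-extraction step inside `extract_section` can be exercised in isolation. The joined section line below is synthetic, with field names modeled on the ones `file_validation` looks for later:

```python
import re

_FP_RE = r'[+-]?(\d+(\.\d*)?|\.\d+)([eE][+-]?\d+)?'

# A section already collapsed into one string, as extract_section() does:
line = ' nstlim = 1000 dt = 0.00200 temp0 = 300.0'

result = []
for field in ['nstlim', 'dt', 'temp0']:
    match = re.search(r' %s\s+=\s+(\*+|%s|\d+)' % (field, _FP_RE), line)
    value = match.group(1)
    # same int/float heuristic as the parser above
    result.append(int(value) if '.' not in value else float(value))
print(result)  # [1000, 0.002, 300.0]
```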
class FEData(object):
    """A simple struct container to collect data from individual files."""

    __slots__ = ['clambda', 't0', 'dt', 'T', 'gradients',
                 'component_gradients']

    def __init__(self):
        self.clambda = -1.0
        self.t0 = -1.0
        self.dt = -1.0
        self.T = -1.0
        self.gradients = []
        self.component_gradients = []
def file_validation(outfile):
    """Validate the energy output file."""
    invalid = False
    with SectionParser(outfile) as secp:
        line = secp.skip_lines(5)
        if not line:
            logging.warning(' WARNING: file does not contain any useful data, '
                            'ignoring file')
            invalid = True
        if not secp.skip_after('^ 2. CONTROL DATA FOR THE RUN'):
            logging.warning(' WARNING: no CONTROL DATA found, ignoring file')
            invalid = True
        ntpr, = secp.extract_section('^Nature and format of output:', '^$',
                                     ['ntpr'])
        nstlim, dt = secp.extract_section('Molecular dynamics:', '^$',
                                          ['nstlim', 'dt'])
        T, = secp.extract_section('temperature regulation:', '^$',
                                  ['temp0'])
        if not T:
            logging.error('ERROR: Non-constant temperature MD not '
                          'currently supported')
            invalid = True
        clambda, = secp.extract_section('^Free energy options:', '^$',
                                        ['clambda'], '^---')
        if clambda is None:
            logging.warning(' WARNING: no free energy section found, '
                            'ignoring file')
            invalid = True

        if not secp.skip_after('^ 3. ATOMIC '):
            logging.warning(' WARNING: no ATOMIC section found, '
                            'ignoring file\n')
            invalid = True

        t0, = secp.extract_section('^ begin time', '^$', ['coords'])
        if not secp.skip_after('^ 4. RESULTS'):
            logging.warning(' WARNING: no RESULTS section found, '
                            'ignoring file\n')
            invalid = True
    if invalid:
        return False
    file_datum = FEData()
    file_datum.clambda = clambda
    file_datum.t0 = t0
    file_datum.dt = dt
    file_datum.T = T
    return file_datum
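The sequential `skip_after` checks can be mimicked on an in-memory file. The headings in the mock mdout below are reduced stand-ins for real Amber output sections, not verbatim Amber formatting:

```python
import io
import re

# Reduced stand-in for an Amber mdout file; real headings are longer.
mdout = io.StringIO("""\
header line 1
header line 2
header line 3
header line 4
header line 5
 2. CONTROL DATA FOR THE RUN
 3. ATOMIC COORDINATES AND VELOCITIES
 4. RESULTS
""")

def skip_after(fileh, pattern):
    """Advance fileh until a line matches pattern; report success."""
    for line in fileh:
        if re.search(pattern, line):
            return True
    return False

# The checks must succeed in order, because the file is consumed as we go:
ok = all(skip_after(mdout, p) for p in
         ('^ 2. CONTROL DATA FOR THE RUN',
          '^ 3. ATOMIC ',
          '^ 4. RESULTS'))
print(ok)  # True
```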
def extract_dHdl(outfile):
    """Return gradients `dH/dl` from an Amber TI output file.

    Parameters
    ----------
    outfile : str
        Path to Amber .out file to extract data from.

    Returns
    -------
    dH/dl : Series
        dH/dl as a function of time for this lambda window.
    """
    file_datum = file_validation(outfile)
    if not file_datum:
        return None
    finished = False
    comps = []
    with SectionParser(outfile) as secp:
        line = secp.skip_lines(5)
        nensec = 0
        nenav = 0
        old_nstep = -1
        old_comp_nstep = -1
        high_E_cnt = 0
        in_comps = False
        for line in secp:
            if 'DV/DL, AVERAGES OVER' in line:
                in_comps = True
            if line.startswith(' NSTEP'):
                if in_comps:
                    # CHECK the result
                    result = secp.extract_section('^ NSTEP', '^ ---',
                                                  ['NSTEP'] + DVDL_COMPS,
                                                  extra=line)
                    if result[0] != old_comp_nstep and not any_none(result):
                        comps.append([float(E) for E in result[1:]])
                        nenav += 1
                        old_comp_nstep = result[0]
                        in_comps = False
                else:
                    nstep, dvdl = secp.extract_section('^ NSTEP', '^ ---',
                                                       ['NSTEP', 'DV/DL'],
                                                       extra=line)
                    if nstep != old_nstep and dvdl is not None \
                       and nstep is not None:
                        file_datum.gradients.append(dvdl)
                        nensec += 1
                        old_nstep = nstep
            if line == ' 5. TIMINGS\n':
                finished = True
                break
    if not finished:
        logging.warning(' WARNING: prematurely terminated run')
    if not nensec:
        logging.warning(' WARNING: File %s does not contain any DV/DL data\n'
                        % outfile)
    logging.info('%i data points, %i DV/DL averages' % (nensec, nenav))
    # at this step we have the info for a given Amber out file stored in the
    # FEData object
    file_datum.component_gradients.extend(comps)
    # convert file_datum to the pandas format to make it identical to the
    # alchemlyb output format
    df = convert_to_pandas(file_datum)
    return df
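Downstream code concatenates one such frame per lambda window. A sketch with two hypothetical windows (values invented) shows the resulting MultiIndex shape:

```python
import pandas as pd

# Hypothetical dHdl frames for two lambda windows, shaped like the
# extract_dHdl() output (MultiIndex levels: time, lambdas):
def make_window(clambda, values):
    times = [float(i + 1) for i in range(len(values))]
    idx = pd.MultiIndex.from_arrays([times, [clambda] * len(values)],
                                    names=['time', 'lambdas'])
    return pd.DataFrame({'dHdl': values}, index=idx)

dHdl = pd.concat([make_window(0.0, [9.9, 10.1]),
                  make_window(1.0, [20.2, 19.8])])
print(dHdl.shape)  # (4, 1)
```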
@@ -0,0 +1,31 @@
"""Amber parser tests. | ||
Review comment: We also like to have a test with at least one of the estimators. Can you add an Amber test to
"""
from alchemlyb.parsing.amber import extract_dHdl
from alchemlyb.parsing.amber import file_validation
from alchemtest.amber import load_simplesolvated
from alchemtest.amber import load_invalidfiles


def test_dHdl():
Review comment: Good start.
    """Test that dHdl has the correct form when extracted from files."""
    dataset = load_simplesolvated()

    for leg in dataset['data']:
        for filename in dataset['data'][leg]:
            dHdl = extract_dHdl(filename)

            assert dHdl.index.names == ['time', 'lambdas']
            assert dHdl.shape == (500, 1)


def test_invalidfiles():
    """Test that file_validation() returns False if the file is invalid."""
    invalid_files = load_invalidfiles()

    for invalid_file_list in invalid_files['data']:
        for invalid_file in invalid_file_list:
            assert file_validation(invalid_file) == False
@@ -0,0 +1,44 @@
"""Tests for all TI-based estimators in ``alchemlyb``. | ||
|
||
""" | ||
import pytest | ||
|
||
import pandas as pd | ||
|
||
from alchemlyb.parsing import amber | ||
from alchemlyb.estimators import TI | ||
import alchemtest.amber | ||
|
||
|
||
def amber_simplesolvated_charge_dHdl(): | ||
dataset = alchemtest.amber.load_simplesolvated() | ||
|
||
dHdl = pd.concat([amber.extract_dHdl(filename) | ||
for filename in dataset['data']['charge']]) | ||
|
||
return dHdl | ||
|
||
def amber_simplesolvated_vdw_dHdl(): | ||
dataset = alchemtest.amber.load_simplesolvated() | ||
|
||
dHdl = pd.concat([amber.extract_dHdl(filename) | ||
for filename in dataset['data']['vdw']]) | ||
|
||
return dHdl | ||
|
||
|
||
class TIestimatorMixin: | ||
|
||
@pytest.mark.parametrize('X_delta_f', ((amber_simplesolvated_charge_dHdl(), -60.114), | ||
(amber_simplesolvated_vdw_dHdl(), 3.824))) | ||
def test_get_delta_f(self, X_delta_f): | ||
est = self.cls().fit(X_delta_f[0]) | ||
delta_f = est.delta_f_.iloc[0, -1] | ||
assert X_delta_f[1] == pytest.approx(delta_f, rel=1e-3) | ||
|
||
class TestTI(TIestimatorMixin): | ||
"""Tests for TI. | ||
|
||
""" | ||
cls = TI | ||
|
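For reference, the quantity these tests check is conceptually a trapezoid integration of the mean gradient over lambda. A hand-rolled sketch on synthetic numbers (the real TI estimator also tracks statistical error and proper indexing):

```python
# TI in essence: trapezoid rule over (lambda, mean dH/dl) points.
# Synthetic numbers only, not values from the simplesolvated dataset.
lambdas = [0.0, 0.5, 1.0]
mean_dhdl = [10.0, 6.0, 2.0]

delta_f = sum((l1 - l0) * (g0 + g1) / 2
              for l0, l1, g0, g1 in zip(lambdas, lambdas[1:],
                                        mean_dhdl, mean_dhdl[1:]))
print(delta_f)  # 6.0
```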
Review comment: Use logging instead of print; see more comments below. Note that in order to see the messages, a root logger has to be created elsewhere. We might add this convenience function elsewhere in the library; I opened issue #34. Note that you only have to create the top-level logger (here called rootlogger) once. Loggers are known globally in the interpreter, so you can "attach" to the root logger from anywhere else just by naming the new logger "alchemlyb.OTHER.STUFF".
P.S.: I wrote the code above for MDAnalysis (in MDAnalysis.lib.log) and I am placing this code snippet into the public domain. (This is necessary because MDAnalysis is under GPL v2, so we cannot just take code from there.)
Reply: Yes, will switch to logging instead of printing. Just want to make sure that I understand the logic correctly: what I need to do is to add the setup at the beginning of the code and then start logging with calls like logging.info("Write some logging info here"), is that correct?
Reply: Yes, exactly. (Just be aware that until a logger named "alchemlyb" has been created, no logging will be visible, but that's expected and desired behavior. With #34 I will add code to the library to start logging automatically.)
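The root-logger snippet the reviewer refers to was not preserved in this page. A minimal sketch of the kind of setup meant, with the handler, format, and level choices here being illustrative assumptions rather than the original MDAnalysis.lib.log code:

```python
import logging

# Illustrative root-logger setup; the original snippet was not captured
# here, so the format string and level are assumptions.
rootlogger = logging.getLogger('alchemlyb')
handler = logging.StreamHandler()
handler.setFormatter(logging.Formatter('%(name)s %(levelname)s %(message)s'))
rootlogger.addHandler(handler)
rootlogger.setLevel(logging.INFO)

# Child loggers "attach" to the root logger purely by name:
logger = logging.getLogger('alchemlyb.parsers.Amber')
logger.info('this message propagates to the alchemlyb root logger')
```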