Commit 2c391d3

Merge pull request #18 from thcasey3/development

Development - v0.0.4

thcasey3 authored Apr 12, 2021
2 parents 1376ea8 + 856234d commit 2c391d3
Showing 77 changed files with 1,578 additions and 423 deletions.
18 changes: 18 additions & 0 deletions docs/_downloads/2757beafdaf75580867d3ecfccdd780c/process_EPR.ipynb
Original file line number Diff line number Diff line change
@@ -18,6 +18,24 @@
"\n# EPR data processing\n\nAn example user-defined function for processing EPR data with the DNPLab package.\n"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
    "For the function below, the call would look something like,\n\n"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"collapsed": false
},
"outputs": [],
"source": [
"\"\"\"\nlrobject = lr.start(\n parent_directory,\n skip=[\".DSC\", \".YGF\", \".par\"], # otherwise duplicates\n classifiers=[\"max_loc\", \"frequency\"],\n function=process_EPR.proc_epr,\n function_args={},\n)\n\nlrobject.drive()\n\"\"\"\n# parent_directory contains Bruker EPR data. Add patterns, skip, date searching, etc.\n# according to the lrengine docs. The function_args are empty in this case. Since DTA\n# and spc files come with companion DSC, YGF, or par files and DNPLab uses any of these,\n# skip these files to avoid duplicates."
]
},
{
"cell_type": "markdown",
"metadata": {},
18 changes: 18 additions & 0 deletions docs/_downloads/2e3e0a7ce4d05099a6b5f2c13a4a3e5c/han_lab.ipynb
@@ -18,6 +18,24 @@
"\n# Han Lab ODNP data processing\n\nAn example user-defined function for processing Han Lab ODNP data with the DNPLab package.\n"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
    "For the function below, the call would look something like,\n\n"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"collapsed": false
},
"outputs": [],
"source": [
"\"\"\"\nlrobject = lr.start(\n parent_directory,\n classifiers=[\"tcorr\", \"ksigma\"],\n function=han_lab.calc_odnp,\n function_args=hyd_dict,\n)\n\nlrobject.drive()\n\"\"\"\n# parent_directory contains folders of han_lab data collected using \"rb_dnp1\" at the CNSI\n# facility. Add patterns, skip, date searching, etc. according to the lrengine docs. The\n# \"hyd_dict\" is the dictionary of input constants for dnpHydration, according to DNPLab\n# docs."
]
},
{
"cell_type": "markdown",
"metadata": {},
18 changes: 18 additions & 0 deletions docs/_downloads/311a90bd7c939019bc69fe7b29940a17/han_lab.py
@@ -8,6 +8,24 @@
"""
# %%

# %% [markdown]
# For the function below, the call would look something like,
"""
lrobject = lr.start(
    parent_directory,
    classifiers=["tcorr", "ksigma"],
    function=han_lab.calc_odnp,
    function_args=hyd_dict,
)

lrobject.drive()
"""
# parent_directory contains folders of han_lab data collected using "rb_dnp1" at the CNSI
# facility. Add patterns, skip, date searching, etc. according to the lrengine docs. The
# "hyd_dict" is the dictionary of input constants for dnpHydration, according to DNPLab
# docs.
# %%

# %% [markdown]
# Import DNPLab and any other packages that may be needed for the functions,
import dnplab as dnp
19 changes: 19 additions & 0 deletions docs/_downloads/dba8d885c281c4b25e14be40ebaac905/process_EPR.py
@@ -8,6 +8,25 @@
"""
# %%

# %% [markdown]
# For the function below, the call would look something like,
"""
lrobject = lr.start(
    parent_directory,
    skip=[".DSC", ".YGF", ".par"],  # otherwise duplicates
    classifiers=["max_loc", "frequency"],
    function=process_EPR.proc_epr,
    function_args={},
)

lrobject.drive()
"""
# parent_directory contains Bruker EPR data. Add patterns, skip, date searching, etc.
# according to the lrengine docs. The function_args are empty in this case. Since DTA
# and spc files come with companion DSC, YGF, or par files and DNPLab uses any of these,
# skip these files to avoid duplicates.
# %%

# %% [markdown]
# Import DNPLab and any other packages that may be needed for your function,
import dnplab as dnp
41 changes: 32 additions & 9 deletions docs/_sources/auto_examples/han_lab.rst.txt
@@ -25,9 +25,32 @@ An example user-defined function for processing Han Lab ODNP data with the DNPLa

.. GENERATED FROM PYTHON SOURCE LINES 12-13
For the function below, the call would look something like,

.. GENERATED FROM PYTHON SOURCE LINES 13-27
.. code-block:: default

    """
    lrobject = lr.start(
        parent_directory,
        classifiers=["tcorr", "ksigma"],
        function=han_lab.calc_odnp,
        function_args=hyd_dict,
    )

    lrobject.drive()
    """
    # parent_directory contains folders of han_lab data collected using "rb_dnp1" at the CNSI
    # facility. Add patterns, skip, date searching, etc. according to the lrengine docs. The
    # "hyd_dict" is the dictionary of input constants for dnpHydration, according to DNPLab
    # docs.
.. GENERATED FROM PYTHON SOURCE LINES 30-31
Import DNPLab and any other packages that may be needed for the functions,

.. GENERATED FROM PYTHON SOURCE LINES 13-18
.. GENERATED FROM PYTHON SOURCE LINES 31-36
.. code-block:: default
@@ -37,11 +60,11 @@ Import DNPLab and any other packages that may be needed for the functions,
import copy
.. GENERATED FROM PYTHON SOURCE LINES 22-23
.. GENERATED FROM PYTHON SOURCE LINES 40-41
Function from hydrationGUI of DNPLab for optimizing center of integration window,

.. GENERATED FROM PYTHON SOURCE LINES 23-44
.. GENERATED FROM PYTHON SOURCE LINES 41-62
.. code-block:: default
@@ -67,11 +90,11 @@ Function from hydrationGUI of DNPLab for optimizing center of integration window
.. GENERATED FROM PYTHON SOURCE LINES 47-48
.. GENERATED FROM PYTHON SOURCE LINES 65-66
Function from hydrationGUI of DNPLab for optimizing phase,

.. GENERATED FROM PYTHON SOURCE LINES 48-108
.. GENERATED FROM PYTHON SOURCE LINES 66-126
.. code-block:: default
@@ -136,11 +159,11 @@ Function from hydrationGUI of DNPLab for optimizing phase,
.. GENERATED FROM PYTHON SOURCE LINES 111-112
.. GENERATED FROM PYTHON SOURCE LINES 129-130
Function from hydrationGUI of DNPLab for optimizing integration window width,

.. GENERATED FROM PYTHON SOURCE LINES 112-158
.. GENERATED FROM PYTHON SOURCE LINES 130-176
.. code-block:: default
@@ -191,11 +214,11 @@ Function from hydrationGUI of DNPLab for optimizing integration window width,
.. GENERATED FROM PYTHON SOURCE LINES 161-162
.. GENERATED FROM PYTHON SOURCE LINES 179-180
Auto-process function from hydrationGUI. The function returns zeros where errors are encountered.

.. GENERATED FROM PYTHON SOURCE LINES 162-289
.. GENERATED FROM PYTHON SOURCE LINES 180-307
.. code-block:: default
30 changes: 27 additions & 3 deletions docs/_sources/auto_examples/process_EPR.rst.txt
@@ -25,21 +25,45 @@ An example user-defined function for processing EPR data with the DNPLab package

.. GENERATED FROM PYTHON SOURCE LINES 12-13
For the function below, the call would look something like,

.. GENERATED FROM PYTHON SOURCE LINES 13-28
.. code-block:: default

    """
    lrobject = lr.start(
        parent_directory,
        skip=[".DSC", ".YGF", ".par"],  # otherwise duplicates
        classifiers=["max_loc", "frequency"],
        function=process_EPR.proc_epr,
        function_args={},
    )

    lrobject.drive()
    """
    # parent_directory contains Bruker EPR data. Add patterns, skip, date searching, etc.
    # according to the lrengine docs. The function_args are empty in this case. Since DTA
    # and spc files come with companion DSC, YGF, or par files and DNPLab uses any of these,
    # skip these files to avoid duplicates.
.. GENERATED FROM PYTHON SOURCE LINES 31-32
Import DNPLab and any other packages that may be needed for your function,

.. GENERATED FROM PYTHON SOURCE LINES 13-16
.. GENERATED FROM PYTHON SOURCE LINES 32-35
.. code-block:: default

    import dnplab as dnp
    import numpy as np
.. GENERATED FROM PYTHON SOURCE LINES 19-20
.. GENERATED FROM PYTHON SOURCE LINES 38-39
The function accepts a path to an EPR spectrum file and returns the field value at the spectrum maximum, along with the frequency. The function returns zeros where errors are encountered.

.. GENERATED FROM PYTHON SOURCE LINES 20-35
.. GENERATED FROM PYTHON SOURCE LINES 39-54
.. code-block:: default
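The behavior described above (return the field at the spectrum maximum plus the frequency, or zeros on error) can be sketched in plain Python with numpy. This is a hypothetical stand-in, not DNPLab's actual loader: the `np.loadtxt` call, the two-column field/intensity file layout, and the `"frequency"` entry in `args_dict` are all assumptions for illustration.

```python
import numpy as np


def proc_epr_sketch(path, args_dict):
    """Hypothetical sketch: return [field at spectrum maximum, frequency].

    Assumes a two-column text file (field, intensity); returns zeros on
    any error, mirroring the behavior described in the docs.
    """
    try:
        data = np.loadtxt(path)
        field, intensity = data[:, 0], data[:, 1]
        max_loc = float(field[np.argmax(intensity)])
        frequency = float(args_dict.get("frequency", 0))  # assumed argument
        return [max_loc, frequency]
    except Exception:
        return [0, 0]
```

A file that fails to load simply yields `[0, 0]`, so a mixed directory can still be processed end to end.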
4 changes: 3 additions & 1 deletion docs/_sources/create.rst.txt
@@ -1,5 +1,5 @@
=====================
Create a start object
Create a Start Object
=====================

Create an object that contains a DataFrame with, at minimum, one column containing the names of the files or folders in the supplied directory,
@@ -97,3 +97,5 @@ the call would look like,
)
and two new columns would be added called 'output1' and 'output2' with the values corresponding to the function outputs. Make sure to have the function accept a path and a single dictionary that contains any additional parameters needed. Also make sure the function returns the outputs in a list that is equal in length to the given list of classifiers. Use the above example function as a template.

If the path is to a **.csv** file that was saved using the **save()** method, the same frame that was created and saved will be re-created in the **start** object (assuming the **.csv** was not modified). However, the other attributes of the **start** object that was saved will be missing, and will need to be defined manually.
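Since the **start** object's frame is a DataFrame, the round-trip described above can be sketched with pandas alone. This illustrates the save/reload behavior, not lrengine's internal code; the column names are invented:

```python
import io

import pandas as pd

# Build a minimal frame like the one start() creates, write it to CSV,
# and read it back: the tabular data survives the round trip.
frame = pd.DataFrame({"name": ["run1", "run2"], "par1": [1, 2]})
buffer = io.StringIO()
frame.to_csv(buffer, index=False)
buffer.seek(0)
reloaded = pd.read_csv(buffer)
```

Only the frame comes back; any other attributes of the saved object must be set again by hand, as noted above.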
2 changes: 1 addition & 1 deletion docs/_sources/define_patterns.rst.txt
@@ -1,5 +1,5 @@
=================
Defining patterns
Defining Patterns
=================

You may define patterns to classify by. If a single pattern or list of patterns is given, the columns will be named according to the patterns and a bool will be supplied indicating whether the pattern was found. This example adds the column 'sample1' and puts **True** where found, **False** where not found,
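Conceptually, each pattern column is a substring test against the file or folder names. A minimal pandas sketch of the behavior described (not lrengine's implementation; the names are invented):

```python
import pandas as pd

names = ["sample1_run1", "sample2_run1", "blank"]
frame = pd.DataFrame({"name": names})
# One bool column per pattern: True where the pattern appears in the name.
frame["sample1"] = frame["name"].str.contains("sample1")
```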
10 changes: 9 additions & 1 deletion docs/_sources/drive.rst.txt
@@ -1,5 +1,5 @@
======================
Call a custom function
Call a Custom Function
======================

You can even use a custom function that operates on each element of the parent directory to add the outputs as classifiers. Do this by adding the names of the classifier columns, defining the function call, and adding any needed arguments in the form of a dictionary. For example, if the function is:
@@ -37,3 +37,11 @@ Call the **drive()** method
lrobject.drive()
and two new columns would be added called "output1" and "output2" with the values corresponding to the function outputs. Make sure to have the function accept a path and a single dictionary that contains any additional parameters needed. Also make sure the function returns the outputs in a list that is equal in length to the given list of classifiers. Use the above example function as a template.

If the custom function errors, for example if it tries to operate on a file/folder in the path that is not compatible, it will return the string "null" for the classifier. This can be useful for avoiding tedious reorganizing of directories. Simply run **lrobject.drive()**, collect a frame full of successful runs or "null", then use something like,

.. code-block:: python

    lrobject.frame = lrobject.frame[lrobject.frame['par1'] != 'null']

to reduce the frame to only compatible files/folders.
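A hypothetical user function matching the required signature: it accepts a path plus one dict of extra parameters and returns a list the same length as the classifiers (here, two). The function name, the `"scale"` parameter, and the outputs are invented for illustration:

```python
import os


def my_function(path, args_dict):
    """Return [output1, output2] for one file/folder under parent_directory."""
    try:
        size = os.path.getsize(path)
        return [size * args_dict.get("scale", 1), os.path.basename(path)]
    except OSError:
        # drive() records "null" for incompatible elements; returning it
        # ourselves keeps the frame consistent with that convention.
        return ["null", "null"]
```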
1 change: 1 addition & 0 deletions docs/_sources/index.rst.txt
@@ -52,6 +52,7 @@ Features
reduce_names
find_dates
reduce_dates
narrow_dates
drive
sea
map_directory
2 changes: 1 addition & 1 deletion docs/_sources/map_directory.rst.txt
@@ -1,5 +1,5 @@
===============
Map a directory
Map a Directory
===============

Use the **map_directory()** method to add **.directory_map** to the **start** object. This is a dictionary with keys that are the directories and values that are lists of filenames found in the directories,
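The shape of **.directory_map** can be sketched with a plain `os.walk` pass over the parent directory — a conceptual equivalent for illustration, not the library's implementation:

```python
import os


def build_directory_map(parent_directory):
    """Return {directory: [filenames]} for every directory under parent_directory."""
    return {root: sorted(files) for root, _dirs, files in os.walk(parent_directory)}
```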
70 changes: 70 additions & 0 deletions docs/_sources/narrow_dates.rst.txt
@@ -0,0 +1,70 @@
=====================
Narrow the Date Range
=====================

You may also reduce the frame to a specific date or range of dates using the **on_date()** or **in_range()** methods. For example, to keep only the elements of a frame having the date Aug. 2nd 1985,

.. code-block:: python

    lrobject.on_date(keep="1985-08-02")

You may also give a list. For example, keep any element with either the date above or Feb. 8th 1988,

.. code-block:: python

    lrobject.on_date(keep=["1985-08-02", "1988-02-08"])

To do the exact opposite and remove specific dates, use the keyword **remove=** instead,

.. code-block:: python

    lrobject.on_date(remove="1985-08-02")

Or,

.. code-block:: python

    lrobject.on_date(remove=["1985-08-02", "1988-02-08"])

To specify a range or ranges rather than specific dates, use **in_range()** instead. Ranges must be two-element lists. For example, to keep all elements having dates between Aug. 1st and Sept. 1st 1985,

.. code-block:: python

    lrobject.in_range(keep=["1985-08-01", "1985-09-01"])

As with **on_date()**, you may also give a list of ranges (a list of lists). For example, keep only elements either in the range above or in the month of Dec. 1985,

.. code-block:: python

    lrobject.in_range(keep=[["1985-08-01", "1985-09-01"], ["1985-12-01", "1985-12-31"]])

Again, you may do the exact opposite with the keyword arg **remove=**,

.. code-block:: python

    lrobject.in_range(remove=["1985-08-01", "1985-09-01"])

Or,

.. code-block:: python

    lrobject.in_range(remove=[["1985-08-01", "1985-09-01"], ["1985-12-01", "1985-12-31"]])

There is also an option to remove or keep any elements with 0 for date using the keyword arg **strip_zeros=**. The default is **True** for both **on_date()** and **in_range()**. For example,

.. code-block:: python

    lrobject.on_date(keep="1985-08-02")  # removes date=0 elements
    lrobject.on_date(keep="1985-08-02", strip_zeros=False)  # keeps date=0 elements

And,

.. code-block:: python

    lrobject.in_range(keep=["1985-08-01", "1985-09-01"])  # removes date=0
    lrobject.in_range(keep=["1985-08-01", "1985-09-01"],
                      strip_zeros=False)  # keeps date=0
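Under the hood, this kind of filtering amounts to a boolean mask on the frame's date column. A pandas sketch of what **in_range(keep=...)** does conceptually (not the library's code; the frame contents are invented):

```python
import pandas as pd

frame = pd.DataFrame({
    "name": ["run1", "run2", "run3"],
    "date": pd.to_datetime(["1985-08-02", "1985-12-15", "1988-02-08"]),
})
# Keep only rows whose date falls inside the two-element range.
low, high = pd.Timestamp("1985-08-01"), pd.Timestamp("1985-09-01")
kept = frame[(frame["date"] >= low) & (frame["date"] <= high)]
```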
2 changes: 1 addition & 1 deletion docs/_sources/planned.rst.txt
@@ -4,7 +4,7 @@ Planned Features

* Built-in interaction with common machine learning packages like scikit-learn and TensorFlow
* More visualization options with seaborn
* Built-in interaction with SQL packages like sqlite
* Built-in interaction with SQL packages
* Built-in interaction with PySpark

