Commit
Deployment fixes (#3423)
* Update CHANGELOG.md

* general fixes after deployment

* some updates

* some minor changes
antgonza authored Jul 5, 2024
1 parent 0387611 commit 7416314
Showing 5 changed files with 42 additions and 32 deletions.
19 changes: 19 additions & 0 deletions CHANGELOG.md
@@ -1,5 +1,24 @@
# Qiita changelog

Version 2024.07
---------------

Deployed on July 15th, 2024

* On June 14th, 2024 we modified the SPP to use ["fastp & minimap2 against GRCh38.p14 + Phi X 174 + T2T-CHM13v2.0, then Movi against GRCh38.p14, T2T-CHM13v2.0 + Human Pangenome Reference Consortium release 2023"](https://github.com/cguccione/human_host_filtration) to filter human-reads.
* Full refactor of the [DB patching system](https://github.com/qiita-spots/qiita/blob/master/CONTRIBUTING.md#patch-91sql) to make sure that a new production deployment has a fully empty database.
* Fully removed Qiimp from Qiita.
* Users can now add `ORCID`, `ResearchGate` and/or `GoogleScholar` information to their profile and the creation (registration) timestamp is kept in the database. Thank you @jlab.
* Admins can now track and purge non-confirmed users from the database via the GUI (`/admin/purge_users/`). Thank you @jlab.
* Added `qiita.slurm_resource_allocations` to store general job resource usage, which can be populated by `qiita_db.util.update_resource_allocation_table`.
* Added `qiita_db.util.resource_allocation_plot` to generate different models to allocate resources for a given software command based on previous jobs, thank you @Gossty!
* The stats page map can now be centered via the configuration file; additionally, the Help and Admin emails are also defined via the configuration file, thank you @jlab!
* ``Sequel IIe``, ``Revio``, and ``Onso`` are now valid instruments for the ``PacBio_SMRT`` platform.
* Added `current_human_filtering` to the prep-information and `human_reads_filter_method` to the artifact to track which method was used to filter human reads from the raw artifact, and whether it is up to date with current best practices.
* Added `reprocess_job_id` to the prep-information so we can track whether a preparation has been reprocessed with another job.
* Other general fixes, like [#3385](https://github.com/qiita-spots/qiita/pull/3385), [#3397](https://github.com/qiita-spots/qiita/pull/3397), [#3399](https://github.com/qiita-spots/qiita/pull/3399), [#3400](https://github.com/qiita-spots/qiita/pull/3400), [#3409](https://github.com/qiita-spots/qiita/pull/3409), [#3410](https://github.com/qiita-spots/qiita/pull/3410).
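Two of the entries above concern time-based bookkeeping: the registration timestamp now kept in the database, and the admin GUI for purging non-confirmed users. As a minimal, self-contained sketch of that selection logic — the `users` records and the `cutoff_days` parameter are hypothetical illustrations, not Qiita's actual schema or API:

```python
from datetime import datetime, timedelta

# Hypothetical user records: (email, confirmed?, registration timestamp).
# Qiita's real schema differs; this only illustrates the purge criterion.
users = [
    ("a@example.com", False, datetime(2024, 1, 1)),
    ("b@example.com", True, datetime(2024, 1, 1)),
    ("c@example.com", False, datetime(2024, 6, 30)),
]

def purgeable(users, now, cutoff_days=30):
    """Return emails of users who never confirmed and registered before the cutoff."""
    cutoff = now - timedelta(days=cutoff_days)
    return [email for email, confirmed, created in users
            if not confirmed and created < cutoff]

print(purgeable(users, now=datetime(2024, 7, 5)))
# only a@example.com: unconfirmed and registered more than 30 days ago
```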


Version 2024.02
---------------

3 changes: 3 additions & 0 deletions qiita_db/support_files/patches/test_db_sql/91.sql
@@ -0,0 +1,3 @@
-- Just an empty SQL to allow the changes implemented in
-- https://github.com/qiita-spots/qiita/pull/3403 to take effect
SELECT 1;
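For reference, patches in this system are plain SQL files applied in numeric order; 91.sql is intentionally a no-op so the Python-side changes in the linked PR take effect. A non-empty patch — table and column names here are invented for illustration, not from Qiita's schema — might look like:

```sql
-- 92.sql (hypothetical): add a column as part of a schema change
ALTER TABLE qiita.example_table
    ADD COLUMN IF NOT EXISTS example_notes VARCHAR DEFAULT NULL;
```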
41 changes: 17 additions & 24 deletions qiita_db/util.py
@@ -49,13 +49,13 @@
 from bcrypt import hashpw, gensalt
 from functools import partial
 from os.path import join, basename, isdir, exists, getsize
-from os import walk, remove, listdir, rename, stat
+from os import walk, remove, listdir, rename, stat, makedirs
 from glob import glob
 from shutil import move, rmtree, copy as shutil_copy
 from openpyxl import load_workbook
 from tempfile import mkstemp
 from csv import writer as csv_writer
-from datetime import datetime
+from datetime import datetime, timedelta
 from time import time as now
 from itertools import chain
 from contextlib import contextmanager
@@ -64,18 +64,15 @@
 import hashlib
 from smtplib import SMTP, SMTP_SSL, SMTPException
 
-from os import makedirs
-from errno import EEXIST
 from qiita_core.exceptions import IncompetentQiitaDeveloperError
 from qiita_core.qiita_settings import qiita_config
 from subprocess import check_output
 import qiita_db as qdb
 
 
 from email.mime.multipart import MIMEMultipart
 from email.mime.text import MIMEText
 
-from datetime import timedelta
 import matplotlib.pyplot as plt
 import numpy as np
 import pandas as pd
@@ -2730,19 +2727,19 @@ def update_resource_allocation_table(weeks=1, test=None):
 
-    dates = ['', '']
-
+    slurm_external_id = 0
+    start_date = datetime.strptime('2023-04-28', '%Y-%m-%d')
     with qdb.sql_connection.TRN:
         sql = sql_timestamp
         qdb.sql_connection.TRN.add(sql)
         res = qdb.sql_connection.TRN.execute_fetchindex()
-        slurm_external_id, timestamp = res[0]
-        if slurm_external_id is None:
-            slurm_external_id = 0
-        if timestamp is None:
-            dates[0] = datetime.strptime('2023-04-28', '%Y-%m-%d')
-        else:
-            dates[0] = timestamp
-        date1 = dates[0] + timedelta(weeks)
-        dates[1] = date1.strftime('%Y-%m-%d')
+        if res:
+            sei, sd = res[0]
+            if sei is not None:
+                slurm_external_id = sei
+            if sd is not None:
+                start_date = sd
+    dates = [start_date, start_date + timedelta(weeks=weeks)]
 
     sql_command = """
         SELECT
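One subtle fix in the hunk above: `timedelta(weeks)` passes `weeks` as the first positional argument of `datetime.timedelta`, which is `days`, so the old code advanced the window by N days rather than N weeks; `timedelta(weeks=weeks)` is what was intended. A quick self-contained check:

```python
from datetime import datetime, timedelta

start = datetime(2023, 4, 28)

# Positional: timedelta(2) means 2 *days* -- the pre-fix behavior.
assert start + timedelta(2) == datetime(2023, 4, 30)

# Keyword: timedelta(weeks=2) means 14 days -- the intended window.
assert start + timedelta(weeks=2) == datetime(2023, 5, 12)
```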
@@ -2769,27 +2766,23 @@ def update_resource_allocation_table(weeks=1, test=None):
     """
     df = pd.DataFrame()
     with qdb.sql_connection.TRN:
-        sql = sql_command
-        qdb.sql_connection.TRN.add(sql, sql_args=[slurm_external_id])
+        qdb.sql_connection.TRN.add(sql_command, sql_args=[slurm_external_id])
         res = qdb.sql_connection.TRN.execute_fetchindex()
-        columns = ["processing_job_id", 'external_id']
-        df = pd.DataFrame(res, columns=columns)
+        df = pd.DataFrame(res, columns=["processing_job_id", 'external_id'])
         df['external_id'] = df['external_id'].astype(int)
 
     data = []
     sacct = [
         'sacct', '-p',
-        '--format=JobID,ElapsedRaw,MaxRSS,Submit,Start,End,CPUTimeRAW,'
-        'ReqMem,AllocCPUs,AveVMSize', '--starttime', dates[0], '--endtime',
-        dates[1], '--user', 'qiita', '--state', 'CD']
+        '--format=JobID,ElapsedRaw,MaxRSS,Submit,Start,End,CPUTimeRAW,'
+        'ReqMem,AllocCPUs,AveVMSize', '--starttime',
+        dates[0].strftime('%Y-%m-%d'), '--endtime',
+        dates[1].strftime('%Y-%m-%d'), '--user', 'qiita', '--state', 'CD']
 
     if test is not None:
         slurm_data = test
     else:
-        try:
-            rvals = StringIO(check_output(sacct)).decode('ascii')
-        except TypeError as e:
-            raise e
+        rvals = check_output(sacct).decode('ascii')
         slurm_data = pd.read_csv(StringIO(rvals), sep='|')
 
     # In slurm, each JobID is represented by 3 rows in the dataframe:
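The removed `try`/`except` was masking a real bug: the old code called `.decode('ascii')` on a `StringIO` object, which has no such method (hence the `TypeError`). The fix decodes the bytes returned by `check_output` first, then wraps the resulting text for `pandas.read_csv`. A self-contained sketch of that pipeline, using inline sample bytes instead of a real `sacct` call:

```python
from io import StringIO
import pandas as pd

# Stand-in for check_output(sacct): `sacct -p` emits pipe-delimited text as bytes.
raw = b"JobID|ElapsedRaw|MaxRSS|\n12345|3600|2048K|\n12345.batch|3600|2048K|\n"

rvals = raw.decode('ascii')               # bytes -> str (the fixed order of operations)
slurm_data = pd.read_csv(StringIO(rvals), sep='|')

print(slurm_data['ElapsedRaw'].tolist())  # [3600, 3600]
```

Note that `sacct -p` ends each record with a trailing `|`, so `read_csv` adds one unnamed empty column; the named columns parse as expected.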
6 changes: 3 additions & 3 deletions qiita_pet/handlers/user_handlers.py
@@ -187,11 +187,11 @@ def validator_rgate_id(form, field):
         "Receive Processing Job Emails?")
 
     social_orcid = StringField(
-        "ORCID", [validator_orcid_id], description="0000-0002-0975-9019")
+        "ORCID", [validator_orcid_id], description="")
     social_googlescholar = StringField(
-        "Google Scholar", [validator_gscholar_id], description="_e3QL94AAAAJ")
+        "Google Scholar", [validator_gscholar_id], description="")
     social_researchgate = StringField(
-        "ResearchGate", [validator_rgate_id], description="Rob-Knight")
+        "ResearchGate", [validator_rgate_id], description="")
 
 
 class UserProfileHandler(BaseHandler):
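The hunk above only clears the fields' example descriptions; the validators themselves (`validator_orcid_id` and friends) are defined elsewhere in the module. For illustration only — this is a generic ORCID format check, not necessarily Qiita's implementation — an ORCID iD is four hyphen-separated groups of four characters, all digits except that the final checksum character may be `X`:

```python
import re

# Generic ORCID iD format check (illustrative, not Qiita's actual validator):
# four hyphen-separated groups of four digits; the last character may be 'X'.
ORCID_RE = re.compile(r'\d{4}-\d{4}-\d{4}-\d{3}[\dX]')

def is_valid_orcid(value: str) -> bool:
    return ORCID_RE.fullmatch(value) is not None

print(is_valid_orcid('0000-0002-0975-9019'))  # True
print(is_valid_orcid('not-an-orcid'))         # False
```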
@@ -76,11 +76,6 @@ Editing a Study
 
 .. _prepare-information-files:
 
-Creating and Working With Sample information
---------------------------------------------
-
-* The :doc:`../qiimp` tab located at the top of the page is a useful tool for creating Qiita and EBI compatible metadata spreadsheets.
-
 Example files
 ~~~~~~~~~~~~~
 
