Trouble running Integration Tests within conda environment and Trouble Building Documentation #1329
Re integration tests - did you set up a postgres database and configure the config as per the instructions? It looks a lot like you don't have a working database. But it also looks like I need to have a good look at the conda environment and those instructions, as they are still targeting Ubuntu 20.04. |
It seems to be something to do with the way conda repackages postgres. You can get around it by editing the config. The documentation definitely needs updating. |
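One way to see the mismatch being described here is to check which directory actually contains the server's unix socket. This is only a diagnostic sketch, assuming the usual Ubuntu and conda defaults; `find_pg_sockets` is a hypothetical helper, not part of datacube:

```python
from pathlib import Path

# A conda-built libpq typically looks for the server socket under /tmp,
# while Ubuntu's postgres packages put it under /var/run/postgresql.
# These directory names are the common defaults, not guaranteed values.
def find_pg_sockets(dirs=("/var/run/postgresql", "/tmp")):
    """Report which candidate directories contain the default
    postgres socket file (.s.PGSQL.5432)."""
    return {d: (Path(d) / ".s.PGSQL.5432").exists() for d in dirs}

print(find_pg_sockets())
```

If the socket shows up under `/var/run/postgresql` but not `/tmp`, a conda-installed client will fail exactly as reported below.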
Hi, after changing that, postgres is working and I can log in directly from the command line, but this is not happening when I activate the conda environment. I can log in to postgres if I issue psql with an explicit hostname, but the user password is asked for on the command line to perform the login. I restarted the whole installation of datacube-core from scratch, this time on a new Ubuntu 20.04 virtual machine (not the latest Ubuntu LTS), following all the steps. No hope, but the errors have changed.
So, where do I need to find libLerc? It's not listed among Ubuntu 20.04 packages. I'm sorry for the noise, but datacube-core is not installed and set up correctly even on a fresh Ubuntu 20.04 LTS box. |
Hi Paul,
thank you for your answer, I'll check the proposed solution. In the meantime I gave up on the Ubuntu 22.04 box and deleted it. I created a fresh new Ubuntu 20.04 box; after the initial setup I checked out the stable branch from GitHub, not the development one, and then followed the instructions on GitHub and NOT on readthedocs. Doing this, the tests run, but with a lot of errors (more than 100). I did not expect so many errors, but at least they run.
Thank you again.
On Mon, Nov 7, 2022 at 5:29 AM Paul Haesler wrote:
1. Ignore `Module 'rasterio.crs' has no 'CRS' member, but source is unavailable?` - it is just a warning from the code checker.
2. Yes, postgres in a conda install is weird. If you specify a hostname it seems to work OK.
3. I have seen the libLerc thing - it appears to be an issue with the conda package for lerc. The following will fix it (if you have set up your conda environment per the instructions):
```
cd ~/anaconda3/envs/odc_env/lib
ln -s libLerc.so libLerc.so.4
```
4. You may have better luck installing via pip into a virtualenv on Ubuntu 22.04.
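For anyone scripting their environment setup, the symlink workaround above can be sketched in Python as well. This is a sketch, not project code: `ensure_liblerc_alias` is a hypothetical helper, and the conda lib path varies by install:

```python
from pathlib import Path

def ensure_liblerc_alias(libdir):
    """Create the libLerc.so.4 -> libLerc.so alias if it is missing,
    mirroring `ln -s libLerc.so libLerc.so.4` in the env's lib dir."""
    libdir = Path(libdir)
    target = libdir / "libLerc.so"
    alias = libdir / "libLerc.so.4"
    if target.exists() and not alias.exists():
        # relative symlink, just like the ln -s invocation above
        alias.symlink_to(target.name)
    return alias.exists()
```

Pointing it at, e.g., `~/anaconda3/envs/odc_env/lib` (an example path) reproduces the manual fix idempotently.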
@lucapaganotti conda-based environments assume that postgres is using the /tmp/ folder for unix socket connections to the database. On Ubuntu, postgres uses the /var/run/postgresql/ folder instead. When installing from pip, the db libraries are compiled and so have the Ubuntu defaults; when using conda, system libraries are not used, so you need to change that default via config:

db_hostname=/var/run/postgresql

If using an environment variable as config:

export DATACUBE_DB_URL='postgresql:///datacube?host=/var/run/postgresql'

Another option is to set the PGHOST environment variable; this should be used for default configs that leave the hostname as default:

export PGHOST=/var/run/postgresql

⬆️ @SpacemanPaul @pindge @caitlinadams this should really be in the docs. Conda environments are very common, and connecting via localhost is probably a bit slower, but it also requires setting up passwords for db users and has some other security implications. |
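The behaviour described here follows libpq's convention that a host value beginning with "/" names a unix-socket directory rather than a TCP hostname. A minimal sketch of that rule; `dsn_from_config` is a hypothetical helper, not part of datacube:

```python
# Illustrative only: shows how libpq-style settings interpret the
# hostname. A value starting with "/" is a unix-socket directory.
def dsn_from_config(hostname, database):
    if hostname.startswith("/"):
        # socket connection: the directory rides in the ?host= query arg
        return f"postgresql:///{database}?host={hostname}"
    # otherwise a plain TCP hostname
    return f"postgresql://{hostname}/{database}"

print(dsn_from_config("/var/run/postgresql", "datacube"))
# -> postgresql:///datacube?host=/var/run/postgresql
print(dsn_from_config("localhost", "datacube"))
# -> postgresql://localhost/datacube
```

The first output is exactly the DATACUBE_DB_URL form shown above, which is why setting db_hostname to the socket directory works.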
@Kirill888 see @omad's extensive comments on #1258 |
Hi Paul,
I've set up the postgresql connection via .datacube.conf and .datacube_integration.conf in my home folder. Anyway, I'm getting errors during the check_code.sh tests that require a local socket connection to postgres. I'll quote only the last one:
```
ERROR integration_tests/index/test_search_legacy.py::test_cli_missing_info[experimental-UTC] - sqlalchemy.exc.OperationalError: (psycopg2.OperationalError) connection to server on socket "/tmp/.s.PGSQL.5432" failed: No such file or directory
```
Why is this code trying to connect to postgresql via a local socket if I've defined host, database, username and password in the config file? Is there another place where the postgresql connection has to be defined?
The symbolic link to libLerc.so in the specific environment is working, thanks.
To tell the truth, I started with a 22.04 LTS Ubuntu image, but I had the same problems and gave up on it, falling back to 20.04.
Thanks again for your help.
Hi Kirill,
I've tried to change the db_hostname value as you suggested in ~/.datacube_integration.conf, but I get the same connection errors; the code is trying to use a local socket, for example:
```
ERROR integration_tests/index/test_search_legacy.py::test_cli_missing_info[experimental-UTC] - sqlalchemy.exc.OperationalError: (psycopg2.OperationalError) connection to server on socket "/tmp/.s.PGSQL.5432" failed: No such file or directory
```
The postgresql connection is defined the same way in ~/.datacube_integration.conf and ~/.datacube.conf as:
```
# .datacube_integration.conf
[datacube]
db_hostname: localhost
db_database: agdcintegration
db_username: myuser
db_password: mypassword
index_driver: default
```
I'm sorry, I did not change the experimental section of the config file; it was left as it was (I didn't find anything that told me to do so, and I thought only the datacube section was meaningful ... my mistake ...). Also, looking at the error trace it seems that the "experimental" tests are failing, so I replicated the same setup for the experimental section:
```
...
[experimental]
db_hostname: localhost
db_database: agdcintegration
db_username: buck
db_password: password
index_driver: postgis
...
```
and now, running the tests again, I'm still having some failures, but not as many as before and, above all, no errors. They are:
```
...
tests/api/test_grid_workflow.py ..F
...
tests/api/test_virtual.py .............F....
...
integration_tests/test_end_to_end.py FF
...
integration_tests/test_full_ingestion.py ..F.
...
```
with this summary:
```
...
----------------------------------------------------------------
TOTAL  13598  1104  92%
=========================== slowest 5 durations ===========================
66.34s call  integration_tests/test_config_tool.py::test_add_example_dataset_types[datacube-US/Pacific]
22.24s setup integration_tests/index/test_search_legacy.py::test_search_returning[US/Pacific-datacube]
18.71s setup integration_tests/index/test_search_eo3.py::test_search_returning_eo3[datacube-US/Pacific]
18.25s setup integration_tests/test_full_ingestion.py::test_process_all_ingest_jobs[US/Pacific-datacube]
17.87s setup integration_tests/index/test_search_legacy.py::test_search_returning_rows[datacube-US/Pacific]
========================= short test summary info =========================
SKIPPED [2] integration_tests/test_3d.py:26: could not import 'dcio_example.xarray_3d': No module named 'dcio_example'
SKIPPED [1] ../../../anaconda3/envs/odc/lib/python3.8/site-packages/_pytest/doctest.py:455: all tests skipped by +SKIP option
XFAIL tests/test_geometry.py::test_lonalt_bounds_more_than_180 - Bounds computation for large geometries in safe mode is broken
XFAIL tests/test_utils_docs.py::test_merge_with_nan - Merging dictionaries with content of NaN doesn't work currently
FAILED tests/api/test_grid_workflow.py::test_gridworkflow_with_time_depth - AssertionError
FAILED tests/api/test_virtual.py::test_aggregate - ValueError: time already exists as coordinate or variable name.
FAILED integration_tests/test_end_to_end.py::test_end_to_end[US/Pacific-datacube] - AssertionError
FAILED integration_tests/test_end_to_end.py::test_end_to_end[UTC-datacube] - AssertionError
FAILED integration_tests/test_full_ingestion.py::test_process_all_ingest_jobs[US/Pacific-datacube] - Failed: Timeout >20.0s
===== 5 failed, 888 passed, 3 skipped, 2 xfailed, 16 warnings in 2768.95s (0:46:08) =====
```
I don't know if it is OK to go on. I have:
- one module, 'dcio_example', missing
- computation over large geometries broken in safe mode (what does this mean?)
- 3 AssertionErrors

The datacube itself seems to be running OK:
```
# .datacube.conf
db_hostname=localhost
db_database=datacube
db_username=myuser
db_password=mypassword
index_driver=default
```
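As a quick sanity check, config fragments like the ones in this thread can be parsed with Python's standard configparser. This is only a sketch; the values are the examples from this thread, and note that configparser requires a section header (the .datacube.conf extract above omits it):

```python
import configparser

# Example fragment assembled from the values quoted in this thread.
sample = """\
[datacube]
db_hostname=localhost
db_database=datacube
db_username=myuser
db_password=mypassword
index_driver=default
"""

cfg = configparser.ConfigParser()
cfg.read_string(sample)

# Confirm the section and keys parsed as expected before running tests.
assert cfg["datacube"]["db_hostname"] == "localhost"
print(dict(cfg["datacube"]))
```

Running a check like this catches silent typos (a missing section header, a misspelled key) before they surface as confusing connection errors.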
I've prepared a product and a dataset metadata file, and datacube ingested them.
I would like to have more information about how to write metadata files expressed in EO3. I wrote my own based on some examples, but I would like detailed info, if possible, about what can be done with EO3.
I would then like to view my data with datacube-ows.
Thanks for all the help you gave me.
Have a nice day.
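Until the EO3 docs improve, an illustrative skeleton of an EO3 dataset document, written as a Python dict, might look like the following. The field names follow published EO3 examples but should be treated as an assumption, and the product name, grid values, and timestamp are placeholders; validate real documents with the tooling in the opendatacube/eodatasets repository:

```python
import json
import uuid

# Assumed EO3 skeleton; every concrete value here is a placeholder.
eo3_doc = {
    "$schema": "https://schemas.opendatacube.org/dataset",
    "id": str(uuid.uuid4()),            # each dataset needs a unique id
    "product": {"name": "my_product"},  # hypothetical product name
    "crs": "epsg:32632",
    "grids": {
        "default": {
            "shape": [1024, 1024],      # rows, cols
            # affine transform: pixel size, rotation, and origin
            "transform": [30.0, 0.0, 600000.0,
                          0.0, -30.0, 5000000.0,
                          0.0, 0.0, 1.0],
        }
    },
    # one entry per band, with paths relative to the document
    "measurements": {"red": {"path": "red.tif"}},
    "properties": {"datetime": "2022-11-01T10:00:00Z"},
    "lineage": {},
}

print(sorted(eo3_doc))
print(json.dumps(eo3_doc["grids"], indent=2))
```

The key structural idea is that grids describe the pixel geometry once, and each measurement refers to a grid (the default one here) instead of repeating it.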
Hi Luca, I think you are safe to proceed from here. It's going to get increasingly difficult to debug the remaining issues. The instructions you are following are in desperate need of a full rewrite, and we are in the early stages of overhauling our whole testing framework to make it easier to run the tests. EO3 is also not as well documented as it could be. You may want to have a look at the opendatacube/eodatasets repository, which includes a tool for testing the validity of EO3 documents. If you are sourcing data from a provider that uses a STAC API, there are tools in the opendatacube ecosystem for that as well. |
I recently attempted to (and am still attempting to) follow the setup instructions in a Parallels-based VM running 22.04.1 LTS (Jammy Jellyfish).
The reason for this is that ...
It appears that the version ... Here is the system-wide file (extract):
|
So ignoring the
This is solved trivially, but becomes a doco issue, perhaps?
But then I find I need
|
Once
|
According to this ancient, closed issue, the version being used is less than 10.XXXX:
This seems to be a show stopper. |
I did some more poking and found that there are multiple versions of
(also getting annoyed as
So given that I am running in a Hmm. |
If I create the
However, the prior issue has been resolved:
Adding in the missing:
|
After adding in
Just to confirm that
Enough for now. |
OK. One more go. I created a
The results are somewhat better, but not pretty:
And yes, according to
|
Any update on this? |
If the DB hostname in the datacube_integration.conf file is set to localhost, the integration tests fail with a message that implies a database connectivity issue. This is because on Ubuntu, postgres uses the /var/run/postgresql folder. The fix was mentioned in this issue: opendatacube#1329
This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions. |
Hi Paul, I'm sorry, I have had to delay my odc activities for too long. I should be able to restart them this Thursday if no other interruptions arise. I'm very sorry for this delay of mine; I'll get back onto odc soon.
Have a good day, and thanks for your help so far.
Hi @lucapaganotti @permezel @sathwikreddy56
Expected behaviour

No tests failing.

Actual behaviour

At the beginning:
....
At the end:
...
After make html I get:
...

Steps to reproduce the behaviour

Follow the setup instructions on https://datacube-core.readthedocs.io/en/latest/installation/setup/ubuntu.html

Environment information

datacube --version: ...
What deployment/environment are you using? A fresh Ubuntu box at the latest LTS version.

Thank you