Unable to handle errors from WDFunctionsEngine.execute_sparql_query #189
Comments
Hello @andrewtavis, do you have a code example? I think you can try/catch the exception raised by raise_for_status().
Hello @LeMyst, thanks for the reply :) Here's the part of Scribe-iOS/Data/update_data.py where I'm using it:

```python
with open(query_path) as file:  # query_path leads to a .sparql file
    query_lines = file.readlines()

query = wdi_core.WDFunctionsEngine.execute_sparql_query("".join(query_lines))
query_results = query["results"]["bindings"]  # being a dict if working
```

Maybe a try/catch on the part where I'm running the query? I'm just not sure how to go about dealing with a lack of returned results from execute_sparql_query.

Thanks again :)
I guess that I could try/catch the exception.
One thing missing in your code is a check of whether the query actually returned anything:

```python
from requests.exceptions import HTTPError  # the exception raised by raise_for_status()

# Inside the loop over the .sparql files, with query_name identifying the current one:
with open(query_path) as file:  # query_path leads to a .sparql file
    query_lines = file.readlines()

try:
    query = wdi_core.WDFunctionsEngine.execute_sparql_query("".join(query_lines))
except HTTPError as err:
    print(f'HTTPError with {query_name}: {err}')
    continue

if query:
    query_results = query["results"]["bindings"]  # being a dict if working
    ...
else:
    print(f'Nothing returned by the SPARQL server for {query_name}')
```

Can you give me the HTTP error code you get from the raise_for_status()? I think WikidataIntegrator misses some frequent codes and should handle more than just 503. I hope I correctly understood your issue.
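For illustration only, here is a minimal sketch of the kind of status-code handling being suggested; it is not the actual wdi_core implementation, and the set of retryable codes, the function name, and the ConnectionError signal are all assumptions:

```python
import requests

# Codes where an automatic retry makes sense (usually transient, server-side issues).
RETRYABLE_CODES = {429, 502, 503, 504}

def execute_query(endpoint_url, query):
    """Hypothetical helper: retry only transient failures, raise everything else."""
    response = requests.post(endpoint_url, data={"query": query, "format": "json"})

    if response.status_code in RETRYABLE_CODES:
        # Signal the surrounding backoff/retry logic to try again.
        raise ConnectionError(f"Transient SPARQL error {response.status_code}")

    # Anything else that is an error (400 for a malformed query, 500 for a query
    # timeout, ...) is raised as requests.HTTPError so the caller can catch it
    # and move on to the next query.
    response.raise_for_status()
    return response.json()
```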
Thanks for your further help! The issue was that the SPARQL itself was malformed (apologies for not checking more rigorously), but in this case the resulting error couldn't be caught.

Would it maybe be an improvement if WikidataIntegrator could handle these kinds of issues? Not sure what your thoughts are on this.
Ok, I can reproduce your issue. To explain: because the error code 500 (query timeout by the server) is not caught by execute_sparql_query(), the method automatically falls back to backoff/wdi_backoff because of the raise_for_status(). That's why you can't catch it. By default, wdi_backoff does an infinite number of retries; you can change that by adding these lines at the top of your script to change the default behaviour:

```python
from wikidataintegrator.wdi_config import config as wdi_config

wdi_config['BACKOFF_MAX_TRIES'] = 2
```
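Putting the two fixes together (a finite BACKOFF_MAX_TRIES plus a try/except around the call), a minimal end-to-end sketch might look like the following; the queries/ directory and the loop structure are assumptions based on the snippets above:

```python
from pathlib import Path

from requests.exceptions import HTTPError
from wikidataintegrator import wdi_core
from wikidataintegrator.wdi_config import config as wdi_config

# Limit the automatic retries so a failing query eventually raises
# instead of looping forever inside wdi_backoff.
wdi_config['BACKOFF_MAX_TRIES'] = 2

for query_path in Path("queries").glob("*.sparql"):
    try:
        result = wdi_core.WDFunctionsEngine.execute_sparql_query(query_path.read_text())
    except HTTPError as err:
        print(f"HTTPError with {query_path.name}: {err}")
        continue  # skip this query and move on to the next one

    if result:
        query_results = result["results"]["bindings"]
        # ... process the bindings for this query ...
    else:
        print(f"Nothing returned by the SPARQL server for {query_path.name}")
```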
Wonderful :) I just changed BACKOFF_MAX_TRIES as suggested. For documentation, the import needed for HTTPError is from requests.exceptions import HTTPError. Thanks so much for your help!
Hello :)

I'm trying to use WikidataIntegrator to update a series of Wikidata-generated JSONs that I have in the app Scribe - Language Keyboards. Scribe has tons of .sparql files that until now have needed to be run independently, and I'm working on an issue where all of them will be run by a single Python file.

Looking at wdi_core.py, I don't see how I'm able to pass over unsuccessful queries and move on to the next ones. response.raise_for_status() in execute_sparql_query at times raises an error, and in these cases there doesn't seem to be a returned value via results = response.json() that I could then use as a trigger to skip the operation.

If returning None or some other response would be acceptable to you all in these cases, I'd be happy to write a PR to do this. Any other suggestions would also be very appreciated :)

Thanks for your time!
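In the meantime, the behaviour proposed here can be approximated on the caller's side with a small wrapper that turns a failed query into None. This is just a sketch (the wrapper name is made up), and it only works if BACKOFF_MAX_TRIES is set to a finite value, as discussed in the comments above:

```python
from requests.exceptions import HTTPError
from wikidataintegrator import wdi_core
from wikidataintegrator.wdi_config import config as wdi_config

wdi_config['BACKOFF_MAX_TRIES'] = 2  # needed so the HTTPError actually surfaces

def try_sparql_query(query):
    """Hypothetical wrapper: return the parsed results, or None if the request fails."""
    try:
        return wdi_core.WDFunctionsEngine.execute_sparql_query(query)
    except HTTPError:
        return None

# A None result then acts as the trigger to skip the operation.
result = try_sparql_query("SELECT ?item WHERE { ?item wdt:P31 wd:Q5 } LIMIT 1")
if result is None:
    print("Query failed, skipping")
```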