
Spark: failed rows should not be limited to max 100 total results. #2143

Merged (6 commits) on Aug 1, 2024

Conversation

jzalucki (Contributor) commented:
Add UT before merge.

@jzalucki jzalucki requested a review from m1n0 July 31, 2024 16:20
@jzalucki jzalucki self-assigned this Jul 31, 2024
@jzalucki jzalucki marked this pull request as ready for review August 1, 2024 09:11
…m:sodadata/soda-core into CLOUD-8218/spark_df_failed_rows_100_limit
sonarqubecloud bot commented Aug 1, 2024

@jzalucki jzalucki merged commit 0563049 into main Aug 1, 2024
15 checks passed
@jzalucki jzalucki deleted the CLOUD-8218/spark_df_failed_rows_100_limit branch August 1, 2024 09:56
tombaeyens pushed a commit that referenced this pull request Aug 16, 2024
Spark: failed rows should not be limited to max 100 total results. (#2143)

* Spark: failed rows should not be limited to max 100 total results.

* UT covering over 100 failed rows.

* [pre-commit.ci] auto fixes from pre-commit.com hooks

for more information, see https://pre-commit.ci

* Fix sqlserver UT.

---------

Co-authored-by: pre-commit-ci[bot] <66853113+pre-commit-ci[bot]@users.noreply.github.com>
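The commit messages above describe the gist of the fix: a failed-rows check should report the total number of failing rows, not a figure capped at 100. A minimal Python sketch of that idea follows. This is an illustration only, not the actual soda-core implementation; the function name `evaluate_failed_rows`, the `SAMPLE_LIMIT` constant, and the plain-list stand-in for a Spark DataFrame are all assumptions made for the example.

```python
# Hypothetical sketch (not the soda-core code): keep the *sample* of failed
# rows capped, but compute the reported total over ALL failing rows.

SAMPLE_LIMIT = 100  # assumed sample-size cap, for illustration only


def evaluate_failed_rows(rows, predicate, sample_limit=SAMPLE_LIMIT):
    """Return (total_failed_count, sample_of_failed_rows).

    The bug pattern this PR title describes: applying the sample limit
    before counting, which caps the reported total at 100 even when
    more rows actually fail.
    """
    failed = [row for row in rows if predicate(row)]
    total_failed = len(failed)          # counted over all failed rows
    sample = failed[:sample_limit]      # only the stored sample is limited
    return total_failed, sample


# 300 synthetic rows; every even "value" counts as a failure here.
rows = [{"id": i, "value": i % 2} for i in range(300)]
total, sample = evaluate_failed_rows(rows, lambda r: r["value"] == 0)
# total is 150 (all failures counted), while the sample holds only 100 rows.
```

Under the pre-fix behavior, `total` would have been reported as at most 100; the unit test mentioned in the commits ("UT covering over 100 failed rows") presumably guards exactly this boundary.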