[SPARK-55968][SQL] Do not treat vectorized reader capacity overflow a…#54805

Open
azmatsiddique wants to merge 2 commits into apache:master from azmatsiddique:SPARK-55968

Conversation

@azmatsiddique

What changes were proposed in this pull request?

This PR modifies DataSourceUtils.shouldIgnoreCorruptFileException so it no longer silently swallows the RuntimeException thrown when WritableColumnVector hits an integer overflow while reserving column capacity.

Why are the changes needed?

Currently, when spark.sql.files.ignoreCorruptFiles is enabled, Spark will catch any RuntimeException while reading data files and treat the file as corrupted.

In particular, the vectorized Parquet / ORC readers can fail with java.lang.RuntimeException: Cannot reserve additional contiguous bytes when a column vector's capacity overflows the Int range. Because this overflow is surfaced as a RuntimeException, shouldIgnoreCorruptFileException catches and ignores it. The affected files are then skipped entirely and their data silently dropped, with no explicit task failure, instead of warning the user that the vectorized batch size is too large.

This change ensures that this specific capacity exception explicitly propagates to fail the task, allowing users to apply the recommended workarounds (reducing batch size, disabling vectorized reader, etc.).
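For context, the matching logic this PR targets can be sketched as a simplified, standalone version (the real method lives in DataSourceUtils; the message string is the one quoted above) to show why the overflow exception is currently swallowed:

```scala
import java.io.IOException

// Simplified sketch of the pre-PR behavior: any RuntimeException,
// IOException, or InternalError is treated as a corrupt-file signal.
object CorruptFileMatchSketch {
  def shouldIgnoreCorruptFileException(e: Throwable): Boolean = e match {
    case _: RuntimeException | _: IOException | _: InternalError => true
    case _ => false
  }

  def main(args: Array[String]): Unit = {
    val overflow = new RuntimeException(
      "Cannot reserve additional contiguous bytes in the vectorized reader")
    // Pre-PR, this returns true, so the reader skips the file silently
    // when spark.sql.files.ignoreCorruptFiles is enabled.
    println(shouldIgnoreCorruptFileException(overflow))
  }
}
```

The PR's change amounts to carving this one message out of the RuntimeException arm so it propagates and fails the task.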

Does this PR introduce any user-facing change?

Yes.
Previously, if the vectorized reader encountered an integer overflow while reading a file and spark.sql.files.ignoreCorruptFiles was enabled, the file would be silently skipped and its data lost.
With this change, the job fails explicitly with the RuntimeException: Cannot reserve additional contiguous bytes... error, prompting the user to tune their reader settings instead of silently losing data.
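Once the failure surfaces, the workarounds mentioned above can be applied through configuration. A hedged sketch (config keys from the Spark SQL configuration reference; the batch-size value is illustrative, and `spark` is an active SparkSession):

```scala
// Reduce the vectorized batch size so column buffers stay within Int range
spark.conf.set("spark.sql.parquet.columnarReaderBatchSize", 1024)
spark.conf.set("spark.sql.orc.columnarReaderBatchSize", 1024)

// Or disable the vectorized reader and fall back to the row-based path
spark.conf.set("spark.sql.parquet.enableVectorizedReader", "false")
spark.conf.set("spark.sql.orc.enableVectorizedReader", "false")
```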

How was this patch tested?

  • Manually verified via a local build: build/sbt "sql/testOnly org.apache.spark.sql.execution.datasources.parquet.ParquetQuerySuite".
  • Verified via existing OrcQuerySuite and ParquetQuerySuite tests involving ignoreCorruptFiles.

Was this patch authored or co-authored using generative AI tooling?

No.

@sunchao (Member) left a comment
Thanks @azmatsiddique for the PR! This suffices to prevent the error described in the JIRA, but I don't know whether there are other types of exceptions that could be silently ignored as well. cc @cloud-fan @viirya @dbtsai


  def shouldIgnoreCorruptFileException(e: Throwable): Boolean = e match {
-   case _: RuntimeException | _: IOException | _: InternalError => true
+   case _: RuntimeException | _: IOException | _: InternalError =>
Member

We can make this more concise via:

def shouldIgnoreCorruptFileException(e: Throwable): Boolean = e match {
  case _: RuntimeException | _: IOException | _: InternalError =>
    val m = e.getMessage
    m == null || !m.contains("Cannot reserve additional contiguous bytes in the vectorized reader")
  case _ => false
}
