Document use of self_destruct with toArrowTable
ianmcook committed May 7, 2024
1 parent 792723e commit fd76fa3
Showing 1 changed file with 9 additions and 6 deletions.
15 changes: 9 additions & 6 deletions python/docs/source/user_guide/sql/arrow_pandas.rst
@@ -435,9 +435,12 @@ be verified by the user.
Setting Arrow ``self_destruct`` for memory savings
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

-Since Spark 3.2, the Spark configuration ``spark.sql.execution.arrow.pyspark.selfDestruct.enabled`` can be used to enable PyArrow's ``self_destruct`` feature, which can save memory when creating a Pandas DataFrame via ``toPandas`` by freeing Arrow-allocated memory while building the Pandas DataFrame.
-This option is experimental, and some operations may fail on the resulting Pandas DataFrame due to immutable backing arrays.
-Typically, you would see the error ``ValueError: buffer source array is read-only``.
-Newer versions of Pandas may fix these errors by improving support for such cases.
-You can work around this error by copying the column(s) beforehand.
-Additionally, this conversion may be slower because it is single-threaded.
+Since Spark 3.2, the Spark configuration ``spark.sql.execution.arrow.pyspark.selfDestruct.enabled``
+can be used to enable PyArrow's ``self_destruct`` feature, which can save memory when creating a
+Pandas DataFrame via ``toPandas`` by freeing Arrow-allocated memory while building the Pandas
+DataFrame. This option can also save memory when creating a PyArrow Table via ``toArrowTable``.
+This option is experimental. When used with ``toPandas``, some operations may fail on the resulting
+Pandas DataFrame due to immutable backing arrays. Typically, you would see the error
+``ValueError: buffer source array is read-only``. Newer versions of Pandas may fix these errors by
+improving support for such cases. You can work around this error by copying the column(s)
+beforehand. Additionally, this conversion may be slower because it is single-threaded.

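For context, a minimal sketch of how the documented option might be used, assuming a Spark version that supports ``toArrowTable`` as named in this change; the session name, example DataFrame, and column names are illustrative only.

.. code-block:: python

    # Sketch: enabling self_destruct for Arrow-based conversions.
    # Config keys are those named in the documentation above; verify that
    # toArrowTable() is available in your Spark version.
    from pyspark.sql import SparkSession

    spark = (
        SparkSession.builder
        .appName("self-destruct-example")  # illustrative app name
        # Arrow-based conversion must be enabled for toPandas to use Arrow.
        .config("spark.sql.execution.arrow.pyspark.enabled", "true")
        # Free Arrow-allocated memory while the Pandas DataFrame is built.
        .config("spark.sql.execution.arrow.pyspark.selfDestruct.enabled", "true")
        .getOrCreate()
    )

    df = spark.range(1_000_000).selectExpr("id", "id * 2 AS doubled")

    pdf = df.toPandas()        # Pandas DataFrame; backing arrays may be read-only
    table = df.toArrowTable()  # PyArrow Table, per the documentation change above

    # Workaround for "ValueError: buffer source array is read-only":
    # copy the affected column(s) before mutating them.
    pdf["doubled"] = pdf["doubled"].copy()
    pdf["doubled"] += 1

The copy restores a mutable backing array for that column at the cost of the memory savings for it, which matches the workaround described in the documentation text.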