As the docs explain, the binary spreadsheet formats (xls, xlsb) store all numbers as floats, so by default pandas converts a float to an integer when no information is lost (1.0 --> 1). "You can pass `convert_float=False` to disable this behavior, which may give a slight performance improvement." I tested this on four file types for a spreadsheet of ~440,000 cells, and recorded the best times out of 10 repetitions:
| File type | `convert_float=True` (s) | `convert_float=False` (s) | Speed-up |
|-----------|--------------------------|---------------------------|----------|
| xls       | 1.081                    | 1.036                     | 4.2%     |
| xlsb      | 3.413                    | 3.357                     | 1.6%     |
| ods       | 27.798                   | 27.770                    | 0.1%     |
| xlsx      | 5.182                    | 5.189                     | -0.1%    |
`convert_float` was probably added for the benefit of .xls files, but the benefit is minor. The .xlsx files even show a slight penalty because openpyxl already converts floats to int where possible, so with `convert_float=False` pandas converts them back to float.
Since .xlsx files are now the most common spreadsheet format (citation: google search), and `convert_float` only exists for performance, is it time to remove `convert_float`? The spreadsheet engines would keep the behaviour of `convert_float=True` and the argument would be deprecated. This change would simplify all four engines, and if anybody really needs their ints as floats, they can always specify a dtype. Note: this possible deprecation came up in #8212 (comment) before dtype was finalized in read_excel.
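For reference, the behaviour the engines would keep (that of `convert_float=True`) amounts to a lossless float-to-int conversion per numeric cell. A minimal sketch of the semantics, not pandas' actual implementation:

```python
def convert_float_cell(value):
    # Sketch of convert_float=True semantics: a float that can be
    # represented as an int without losing information becomes an int;
    # everything else is returned unchanged.
    if isinstance(value, float) and value.is_integer():
        return int(value)
    return value

print(convert_float_cell(1.0))   # 1
print(convert_float_cell(1.25))  # 1.25
```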
I can work on this if the community likes the idea.
I have a .xlsx file that contains a column of mostly strings but with the occasional number. I want those numbers to be read as shown in Excel (integers as integers, floats as floats). Before this deprecation, read_excel read those numbers as integers when appropriate, but now they are always read as floats. Specifying `dtype={col: str}` doesn't help; the numbers still get parsed as floats. So, for example, a cell showing 121 in Excel is read as 121.0.
My current solution is to manually iterate through that column later and figure those cases out. Is there a better alternative here?
Hi @italo-turing, I think you've already found the best alternative if you're constrained to .xlsx as input. If you're free to change your input file to .csv, then the mixed-type (object) column will load with both ints and floats.
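If it helps, the manual pass over that column can be kept to a one-liner. A sketch assuming the mixed column was read with object dtype; the column name `col` and the sample values are made up:

```python
import pandas as pd

# A mixed object column as read_excel now returns it:
# whole numbers come back as floats (121 -> 121.0).
df = pd.DataFrame({"col": ["note", 121.0, 3.5, "x"]}, dtype=object)

# Convert back any float that is really a whole number,
# leaving strings and genuine floats untouched.
df["col"] = df["col"].map(
    lambda v: int(v) if isinstance(v, float) and v.is_integer() else v
)
print(df["col"].tolist())  # ['note', 121, 3.5, 'x']
```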