BUG: non-reproducible FloatingPointError: overflow encountered in multiply in the sequence read_csv followed by to_datetime, with pandas version 2.2.2
#58419
Comments
I also see this FloatingPointError: overflow encountered in multiply error from time to time, and so far it's always with pd.to_datetime where errors='coerce' or errors='ignore'. I have used code like the sketch below and it always works, even when encountering the random FloatingPointError (confirmed by the prints); basically, a retry works.
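A minimal retry sketch in that spirit, assuming a Series of epoch floats named ser and unit="s" (both assumptions, not the commenter's exact code):

import pandas as pd

for attempt in range(3):
    try:
        result = pd.to_datetime(ser, unit="s", errors="coerce")
        break
    except FloatingPointError:
        # The failure is nondeterministic; the same input usually succeeds on retry.
        print(f"attempt {attempt}: FloatingPointError, retrying")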
You may want to try using Python pickle to preserve Python objects. I find pandas's csv and excel read/write to be unreliable. As an example: take two columns of str, one with phone numbers whose area codes all start with 0, the other all empty strings. pandas to/read csv and excel will save them to file without problem, but on reading back you get two columns of timestamps and/or other non-str objects.
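A quick sketch of the round-trip difference being described (file names arbitrary): read_csv re-infers dtypes on the way back in, while pickle preserves them.

import pandas as pd

df = pd.DataFrame({"phone": ["0123456789", "0987654321"], "note": ["", ""]})

df.to_csv("tmp.csv", index=False)
back = pd.read_csv("tmp.csv")      # phone re-parsed as int64 (leading zeros lost), note becomes NaN

df.to_pickle("tmp.pkl")
back = pd.read_pickle("tmp.pkl")   # both columns come back as str, values intact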
I'm seeing the exact same issue as the OP. No csv read/write in my case, just a pd.to_datetime call that randomly fails, but if re-run on the exact same data it then succeeds...
I also have this issue occurring when reading in a parquet file to a DataFrame.
I also have the exact same behavior when using to_datetime.
I converted the floats to integers and it got rid of the error, as far as I can tell; maybe that works for someone else's use case.
I have the same issue occurring when reading from a csv file and then calling to_datetime.
Would this result in a loss of accuracy or precision?
Any updates on this? I've still got Pandas pinned to 2.1.4 because of this bug.
Facing the same issue while converting from long to timestamp. Two solutions that worked for me:

# 1) Convert the float column to int before to_datetime (the error only hit float input).
# Be careful with NA values: first fill NA values, then convert to int.
df["utc_timestamp"] = pd.to_datetime(df["utc_long"].astype(int), unit="s", utc=True)

# 2) Convert row by row, logging values that fail.
utc_timestamp_list = []
for utc in df["utc_long"]:
    try:
        utc_timestamp_list.append(pd.to_datetime(utc, unit="s", utc=True))
    except Exception as e:
        print(f"Error converting utc_long {utc}: {e}")
df["utc_timestamp"] = utc_timestamp_list
It looks like the error only occurs for inputs longer than 127 elements: a 127-element input works fine, while a longer one leads to the FloatingPointError.
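For illustration, a minimal sketch of the length threshold described here, assuming float input and unit="s" (values arbitrary; since the failure is nondeterministic, the longer call may not fail on every run):

import numpy as np
import pandas as pd

pd.to_datetime(np.full(127, 1.5), unit="s")   # 127 elements: reported to work fine
pd.to_datetime(np.full(128, 1.5), unit="s")   # 128 elements: may raise FloatingPointError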
@mroeschke and @jbrockmendel, you two were listed as authors on #56037; could you take a look? This seems to be a nondeterministic FloatingPointError introduced by that change (which appears in the 2.2.0 what's new). If I reorder function calls, or do arbitrary other things, sometimes the error disappears. It seems like memory corruption, and I'm not familiar enough with pandas internals to debug further.

Basically, if I call pd.to_datetime(ser, utc=True, unit=unit) twice, it pretty reliably fails, but only if I have some specific function calls before it (into my own code). Batching the series into 127-sized chunks seems to work around the issue somehow, but I'm not super confident in it. Retrying doesn't help. Could either of you take a closer look at the patch you landed?

edit: Btw, here's the workaround I'm using:
import numpy as np
import pandas as pd

try:
    ser = pd.to_datetime(ser, utc=True, unit=unit)
except FloatingPointError:
    # Pandas 2.2 has a non-deterministic error with large arrays, so
    # chunk into 127-sized parts to work around it.
    # https://github.com/pandas-dev/pandas/issues/58419
    match ser:
        case pd.Series():
            parts = [
                pd.to_datetime(ser.iloc[i : i + 127], utc=True, unit=unit)
                for i in range(0, len(ser), 127)
            ]
            ser = pd.concat(parts)
        case pd.Index() | np.ndarray():
            parts = [
                pd.to_datetime(ser[i : i + 127], utc=True, unit=unit)
                for i in range(0, len(ser), 127)
            ]
            ser = parts[0].append(parts[1:])
        case _:
            raise
take
A simpler example (fails about 80% of the time on my machine):
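A hedged sketch of what such a repro might look like, assembled from details elsewhere in this thread (float input, more than 127 elements); the failure rate will vary by machine:

import numpy as np
import pandas as pd

ser = pd.Series(np.random.rand(1000))               # float seconds, length well over 127
for _ in range(10):
    pd.to_datetime(ser, unit="s", errors="coerce")  # intermittently raises FloatingPointError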
Thanks for the analysis @Hnasar. The bug is indeed due to accessing uninitialized memory containing garbage values. I've submitted a PR with a fix.
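To illustrate the failure mode being described (not pandas's actual internals): numpy's np.empty returns uninitialized memory, and under an errstate that raises on overflow, a leftover huge value can turn a multiply into exactly this FloatingPointError.

import numpy as np

buf = np.empty(128)          # uninitialized: contents are whatever bytes were in memory
buf[:] = 1e300               # simulate a huge garbage value that survived
with np.errstate(over="raise"):
    buf * 1e100              # FloatingPointError: overflow encountered in multiply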
Pandas version checks
I have checked that this issue has not already been reported.
I have confirmed this bug exists on the latest version of pandas.
I have confirmed this bug exists on the main branch of pandas.
Reproducible Example
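The example, as described, repeatedly runs read_csv followed by to_datetime until the error appears. A sketch along those lines, where the column name "ts" and the to_datetime arguments are assumptions:

import pandas as pd

for i in range(100):
    df = pd.read_csv("data.csv")                  # the 200-row file attached to this issue
    pd.to_datetime(df["ts"], unit="s", utc=True)  # intermittently raises FloatingPointError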
Issue Description
I sometimes get the following error with pandas 2.2.2 (I don't have this error with pandas 2.1.4): FloatingPointError: overflow encountered in multiply
The error is not repeatable, hence the loop. I tried to reduce the input file as much as possible while keeping the raised error, which is why I provide a csv file with 200 rows, attached to this issue. I don't know if the issue is due to read_csv (I got the same problem with read_parquet) or due to to_datetime. If the read_csv is outside the loop and I make a deepcopy at the beginning of each loop, I don't have the problem, so my hunch is that this is linked to the reading process (read_csv in the example).

Expected Behavior
I expect the loop body to behave the same on every iteration: either work every time or fail every time.
Installed Versions
INSTALLED VERSIONS
commit : d9cdd2e
python : 3.11.8.final.0
python-bits : 64
OS : Linux
OS-release : 5.15.0-105-generic
Version : #115~20.04.1-Ubuntu SMP Mon Apr 15 17:33:04 UTC 2024
machine : x86_64
processor : x86_64
byteorder : little
LC_ALL : None
LANG : en_US.UTF-8
LOCALE : en_US.UTF-8
pandas : 2.2.2
numpy : 1.26.4
pytz : 2024.1
dateutil : 2.9.0.post0
setuptools : 65.5.0
pip : 24.0
Cython : None
pytest : None
hypothesis : None
sphinx : None
blosc : None
feather : None
xlsxwriter : None
lxml.etree : None
html5lib : None
pymysql : None
psycopg2 : None
jinja2 : None
IPython : None
pandas_datareader : None
adbc-driver-postgresql: None
adbc-driver-sqlite : None
bs4 : None
bottleneck : None
dataframe-api-compat : None
fastparquet : None
fsspec : None
gcsfs : None
matplotlib : None
numba : None
numexpr : None
odfpy : None
openpyxl : None
pandas_gbq : None
pyarrow : 16.0.0
pyreadstat : None
python-calamine : None
pyxlsb : None
s3fs : None
scipy : None
sqlalchemy : None
tables : None
tabulate : None
xarray : None
xlrd : None
zstandard : None
tzdata : 2024.1
qtpy : None
pyqt5 : None
data.csv