
I get ValueError: cannot convert float NaN to integer for the following:

df = pandas.read_csv('zoom11.csv')
df[['x']] = df[['x']].astype(int)
  • "x" is a column in the CSV file. I cannot spot any float NaN in the file, and I don't understand the error or why I am getting it.
  • When I read the column as string, it has values like -1, 0, 1, ... 2000; they all look like perfectly nice ints to me.
  • When I read the column as float, it loads fine and shows values such as -1.0, 0.0, etc. Still there are no NaNs anywhere.
  • I tried error_bad_lines=False and the dtype parameter in read_csv, to no avail; loading still aborts with the same exception.
  • The file is not small (10M+ rows), so I cannot inspect it manually. When I extract a small header portion there is no error, but it happens with the full file. So it is something in the file, but I cannot detect what.
  • Logically the CSV should not have missing values, but even if there is some garbage I would be happy to skip those rows, or at least identify them. However, I see no way to scan through the file and report conversion errors.
  • Update: using the hints from the comments and answers below, I got my data clean with this:

    # x contained NaN
    df = df[~df['x'].isnull()]
    # Y contained some other garbage, so null check was not enough
    df = df[df['y'].str.isnumeric()]
    # final conversion now worked
    df[['x']] = df[['x']].astype(int)
    df[['y']] = df[['y']].astype(int)
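The two checks in the update can be generalized: coerce a copy of the column with pd.to_numeric and report whatever became null, which flags every unconvertible row regardless of what the garbage is. A sketch with made-up sample data standing in for the CSV:

```python
import pandas as pd

# stand-in for the real CSV: one 'NaN' string and one non-numeric token
df = pd.DataFrame({'x': ['1', '2', 'NaN', '4'],
                   'y': ['10', 'junk', '30', '40']})

for col in ['x', 'y']:
    coerced = pd.to_numeric(df[col], errors='coerce')
    bad = df[coerced.isnull()]
    if not bad.empty:                       # report the unconvertible rows
        print(f"rows not convertible in {col!r}:")
        print(bad)
    df[col] = coerced

# drop the flagged rows, then the int cast succeeds
df = df.dropna(subset=['x', 'y']).astype({'x': int, 'y': int})
```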
    thanks @jezrael, now df[df['x'].isnull()] did identify a row with "NaN" and I could remove it! Now with another similar field: this seems to have some other garbage which is not int. Is there a generic way to find rows which are not convertible to a given datatype, so I can identify and drop them all?
    – JaakL, Nov 16, 2017 at 15:38

    Use pd.to_numeric with errors='coerce' instead of astype(int), then fillna with whatever you want.
    – Bharath M Shetty, Nov 16, 2017 at 15:40

    In v0.24, pandas introduces Nullable Integer Types which support integer columns with NaNs. See this answer for more information.
    – cs95, Apr 16, 2019 at 9:49
    

    First, to clean out all non-numeric values, use to_numeric with errors='coerce', which replaces them with NaN:

    df['x'] = pd.to_numeric(df['x'], errors='coerce')
    

    Then remove all rows with NaN in column x using dropna:

    df = df.dropna(subset=['x'])
    

    Finally, convert the values to ints:

    df['x'] = df['x'].astype(int)
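The three steps can also be chained into one expression; a condensed sketch with toy data:

```python
import pandas as pd

df = pd.DataFrame({'x': ['1', 'oops', '3']})
df = (df.assign(x=pd.to_numeric(df['x'], errors='coerce'))  # garbage -> NaN
        .dropna(subset=['x'])                               # drop those rows
        .astype({'x': int}))                                # safe cast now
print(df['x'].tolist())  # [1, 3]
```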
    thanks, this was ok. I updated my question with my lines. The final thing I do not understand is why I get False for negative numbers: '-1'.isnumeric()? Not an issue for my data, which had x and y >= 0, but it is still a general question, as I do not see it in the official docs.
    – JaakL, Nov 16, 2017 at 16:03

    You're probably seeing that because isnumeric checks the characters of the string, and '-' is not a numeric character.
    – Ben, Jun 21, 2018 at 18:09
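To illustrate the comment: str.isnumeric only checks individual characters, so the minus sign fails it, while a regular expression via str.match can accept signed integers (the pattern here is just one illustrative choice):

```python
import pandas as pd

s = pd.Series(['-1', '0', '2000', 'x9'])
print(s.str.isnumeric().tolist())       # [False, True, True, False]
print(s.str.match(r'-?\d+$').tolist())  # [True, True, True, False]
```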
    

    From v0.24, you actually can keep NaNs in an integer column. Pandas introduces Nullable Integer Data Types which allow integers to coexist with NaNs.

    Given a series of whole float numbers with missing data,

    s = pd.Series([1.0, 2.0, np.nan, 4.0])
    0    1.0
    1    2.0
    2    NaN
    3    4.0
    dtype: float64
    s.dtype
    # dtype('float64')
    

    You can convert it to a nullable int type (choose from one of Int16, Int32, or Int64) with,

    s2 = s.astype('Int32') # note the 'I' is uppercase
    0      1
    1      2
    2    NaN
    3      4
    dtype: Int32
    s2.dtype
    # Int32Dtype()
    

    Your column needs to have whole numbers for the cast to happen. Anything else will raise a TypeError:

    s = pd.Series([1.1, 2.0, np.nan, 4.0])
    s.astype('Int32')
    # TypeError: cannot safely cast non-equivalent float64 to int32
    I get an error saying TypeError: object cannot be converted to an IntegerDtype. Do you have any idea what this means?
    – Ken, May 6, 2021 at 4:35
    

    Also, even on the latest versions of pandas, if the column is of object dtype you have to convert it to float first, something like:

    df['column_name'].astype(float).astype("Int32")
    

    NB: You have to go through float first and then to the nullable Int32; a direct cast from an object column raises the TypeError shown in the comment above.

    Whether you need Int32 or Int64 depends on your values; be aware you may lose precision if your numbers are too big for the chosen width.
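Put together, the object-to-nullable-int round trip looks like this sketch (using the builtin float rather than the np.float alias, which is deprecated in recent NumPy):

```python
import pandas as pd

s = pd.Series(['1', '2', None, '4'], dtype=object)
# cast through float first; the missing value survives as <NA>
out = s.astype(float).astype('Int32')
print(out.dtype)  # Int32
```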

    I know this has been answered, but I wanted to provide an alternate solution for anyone in the future:

    You can use .loc to subset the dataframe to only the values that are notnull(), select the 'x' column from that subset, and apply(int) to it.

    If column x is float:

    df.loc[df['x'].notnull(), 'x'] = df.loc[df['x'].notnull(), 'x'].apply(int)
    the left part does what it should, but in the df it stays formatted as float. (Python 3.6, pandas 0.22)
    – InLaw, Aug 16, 2018 at 7:35
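The behaviour in the comment is expected: a column holds a single dtype, so ints assigned into a float64 column are upcast back to float. Casting the whole column, e.g. to a nullable integer dtype, is what actually changes it; a sketch:

```python
import numpy as np
import pandas as pd

df = pd.DataFrame({'x': [1.0, np.nan, 3.0]})
# element-wise assignment cannot change the column's dtype
df.loc[df['x'].notnull(), 'x'] = df.loc[df['x'].notnull(), 'x'].apply(int)
print(df['x'].dtype)  # float64

# casting the whole column to a nullable integer dtype does
df['x'] = df['x'].astype('Int64')
print(df['x'].dtype)  # Int64
```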
            
