Pandas - Drop_duplicates Not Working As Expected

Following an answer from here, I am trying to remove rows from one dataframe that are present in another dataframe.

It works well for this input:

csv1:

sale_date,price,latitude,longitude
Wed May 21 00:00:00 EDT 2008,141000,38.423251,-121.444489
Wed May 21 00:00:00 EDT 2008,146250,38.48742

csv2:

sale_date,price,latitude,longitude
Wed May 21 00:00:00 EDT 2008,146250,38.48742

Code:

>>> a = pd.read_csv('../test.csv', escapechar='\\')
>>> a
                      sale_date   price   latitude   longitude
0  Wed May 21 00:00:00 EDT 2008  141000  38.423251 -121.444489
1  Wed May 21 00:00:00 EDT 2008  146250  38.487420         NaN

>>> b = pd.read_csv('../test1.csv', escapechar='\\')
>>> b
                      sale_date   price  latitude  longitude
0  Wed May 21 00:00:00 EDT 2008  146250  38.48742        NaN

>>> pd.concat([a,b]).drop_duplicates(keep=False)
                      sale_date   price   latitude   longitude
0  Wed May 21 00:00:00 EDT 2008  141000  38.423251 -121.444489

This works as expected. But as soon as the first csv contains additional rows, it does not.


Scenario 2 with extra row in csv1

csv1:

sale_date,price,latitude,longitude
Wed May 21 00:00:00 EDT 2008,141000,38.423251,-121.444489
Wed May 21 00:00:00 EDT 2008,146250,38.48742
Wed May 21 00:00:00 EDT 2008,147308,38.658246a,-121.375469a

csv2:

sale_date,price,latitude,longitude
Wed May 21 00:00:00 EDT 2008,146250,38.48742

Code:

>>> a = pd.read_csv('../test.csv', escapechar='\\')
>>> a
                      sale_date   price    latitude     longitude
0  Wed May 21 00:00:00 EDT 2008  141000   38.423251   -121.444489
1  Wed May 21 00:00:00 EDT 2008  146250    38.48742           NaN
2  Wed May 21 00:00:00 EDT 2008  147308  38.658246a  -121.375469a

>>> b = pd.read_csv('../test1.csv', escapechar='\\')
>>> b
                      sale_date   price  latitude  longitude
0  Wed May 21 00:00:00 EDT 2008  146250  38.48742        NaN

>>> pd.concat([a,b]).drop_duplicates(keep=False)
                      sale_date   price    latitude     longitude
0  Wed May 21 00:00:00 EDT 2008  141000   38.423251   -121.444489
1  Wed May 21 00:00:00 EDT 2008  146250    38.48742           NaN
2  Wed May 21 00:00:00 EDT 2008  147308  38.658246a  -121.375469a
0  Wed May 21 00:00:00 EDT 2008  146250     38.4874           NaN

Notice that it also changed the latitude value of the second duplicate in the merged result from 38.48742 to 38.4874.

Am I missing something here, or does pandas have a bug?

Answer

As @ayhan commented, the problem is that in DataFrame a there are strings mixed in with the numeric values in the latitude and longitude columns, so those whole columns are cast to strings (object dtype).

In the other DataFrame, b, those columns are cast to floats by default.
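A minimal way to see the mismatch, using inline CSV text via StringIO as a stand-in for the original test.csv and test1.csv files:

```python
from io import StringIO

import pandas as pd

# Inline stand-ins for test.csv and test1.csv (same rows as above)
csv1 = """sale_date,price,latitude,longitude
Wed May 21 00:00:00 EDT 2008,141000,38.423251,-121.444489
Wed May 21 00:00:00 EDT 2008,146250,38.48742
Wed May 21 00:00:00 EDT 2008,147308,38.658246a,-121.375469a
"""
csv2 = """sale_date,price,latitude,longitude
Wed May 21 00:00:00 EDT 2008,146250,38.48742
"""

a = pd.read_csv(StringIO(csv1))
b = pd.read_csv(StringIO(csv2))

# The trailing 'a' in the last row of csv1 forces object dtype on a's columns
print(a['latitude'].dtype)  # object
print(b['latitude'].dtype)  # float64

# '38.48742' (str) never equals 38.48742 (float), so nothing is dropped
print(len(pd.concat([a, b]).drop_duplicates(keep=False)))  # 4
```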

One possible solution is to use the dtype parameter when reading the b DataFrame:

b = pd.read_csv('../test1.csv', escapechar='\\', dtype={'latitude':str, 'longitude':str})

df = pd.concat([a,b]).drop_duplicates(keep=False)
print(df)
                      sale_date   price    latitude     longitude
0  Wed May 21 00:00:00 EDT 2008  141000   38.423251   -121.444489
2  Wed May 21 00:00:00 EDT 2008  147308  38.658246a  -121.375469a
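A self-contained check of this fix, again with inline CSV text standing in for the files:

```python
from io import StringIO

import pandas as pd

csv1 = """sale_date,price,latitude,longitude
Wed May 21 00:00:00 EDT 2008,141000,38.423251,-121.444489
Wed May 21 00:00:00 EDT 2008,146250,38.48742
Wed May 21 00:00:00 EDT 2008,147308,38.658246a,-121.375469a
"""
csv2 = """sale_date,price,latitude,longitude
Wed May 21 00:00:00 EDT 2008,146250,38.48742
"""

a = pd.read_csv(StringIO(csv1))
# Force b's coordinate columns to str so they match a's object columns
b = pd.read_csv(StringIO(csv2), dtype={'latitude': str, 'longitude': str})

# Now '38.48742' == '38.48742', so both copies of the shared row are dropped
df = pd.concat([a, b]).drop_duplicates(keep=False)
print(df['price'].tolist())  # [141000, 147308]
```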

Or convert the columns in a with to_numeric:

a['latitude'] = pd.to_numeric(a['latitude'], errors='ignore')
a['longitude'] = pd.to_numeric(a['longitude'], errors='ignore')
df = pd.concat([a,b]).drop_duplicates(keep=False)
print(df)
                      sale_date   price    latitude     longitude
0  Wed May 21 00:00:00 EDT 2008  141000   38.423251   -121.444489
2  Wed May 21 00:00:00 EDT 2008  147308  38.658246a  -121.375469a
source: stackoverflow.com