I have a dataframe that looks something like this:
[(48500, 53500)]
[(47500, 52500)]
[(45500, 50500)]
[(40700, 45700)]
[(37500, 42500)]
[(37500, 42500)]
[(35000, 40000)]
[(32500, 37500)]
[(32500, 37500)]
[(32500, 37500)]
[(32500, 37500)]
[(32500, 37500)]
[(32500, 37500)]
[(31500, 36500)]
[(31500, 36500)]
[(30419, 35419)]
[(27500, 32500)]
[(27500, 32500)]
[(27500, 32500)]
[(27000, 32000)]
[(26500, 31500)]
[(25000, 30000)]
[(24000, 29000)]
[(23500, 28500)]
[(23420, 28420)]
[(23250, 28250)]
[(20000, 25000)]
[(17500, 22500)]
[(17000, 22000)]
What is the best way to add the two numbers in each row together and divide by two?
What I came up with:
for parent in newDf["Salary"]:   # each cell is a list like [(48500, 53500)]
    for child in parent:         # child is the (low, high) tuple
        print((int(child[0]) + int(child[1])) / 2.0)
I am certain there is a lambda function or a pandas one-liner that could make this task simpler.
CodePudding user response:
Sorry, I should have posted the whole data frame.
import numpy as np
import pandas as pd

data = {"Salary": [(48500, 53500), (47500, 52500), (17000, 22000)]}
df = pd.DataFrame(data)

# Average the two numbers in each tuple
df["Salary"] = df["Salary"].apply(lambda x: np.mean(x))
Output:
Salary
0 51000.0
1 50000.0
2 19500.0
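If the per-row apply becomes a bottleneck on a large frame, one alternative (a sketch, assuming the column holds two-element tuples as above) is to expand the tuples into their own columns and average them with vectorized arithmetic:

import pandas as pd

df = pd.DataFrame({"Salary": [(48500, 53500), (47500, 52500), (17000, 22000)]})

# Split each (low, high) tuple into two numeric columns
bounds = pd.DataFrame(df["Salary"].tolist(), columns=["low", "high"], index=df.index)

# Column-wise arithmetic stays vectorized, avoiding a Python call per row
df["Salary"] = (bounds["low"] + bounds["high"]) / 2
print(df)

This produces the same averages as the apply version; the column names "low" and "high" are just illustrative.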