
How To Loop Through Pandas Rows? or How To Iterate Over Pandas Rows?

Sometimes you may want to loop/iterate over a Pandas data frame and do some operation on each row. Pandas has at least two options to iterate over the rows of a dataframe.

Let us see examples of how to loop through a Pandas data frame. First we will use the Pandas iterrows() function to iterate over rows of a Pandas dataframe. In addition to iterrows(), Pandas also has a useful function, itertuples(). We will also see examples of using itertuples() to iterate over rows of a Pandas dataframe. There are subtle differences in using each of them, and we will see those as well.

Let us use an interesting dataset available in vega_datasets in Python.

# import vega_datasets
from vega_datasets import data
# import pandas
import pandas as pd

Let us see the available datasets in vega_datasets and use flights_2k data set.

# check to see the list of data sets
data.list_datasets()
flights = data.flights_2k()

It contains flight departure, arrival, and distance information for 2000 flights.

flights.head()
                 date  delay destination  distance origin
0 2001-01-14 21:55:00      0         SMF       480    SAN
1 2001-03-26 20:15:00    -11         SLC       507    PHX
2 2001-03-05 14:55:00     -3         LAX       714    ELP

How to Iterate Through Rows with Pandas iterrows()

Pandas has an iterrows() function that will help you loop through each row of a dataframe. Pandas' iterrows() returns an iterator yielding the index of each row and the data in each row as a Series.

Since iterrows() returns an iterator, we can use the next() function to see its content. We can see that iterrows() returns a tuple with the row index and the row data as a Series object.

>next(flights.iterrows())
(0, date           2001-01-14 21:55:00
 delay                            0
 destination                    SMF
 distance                       480
 origin                         SAN
 Name: 0, dtype: object)

We can get the row content by taking the second element of the tuple.

row = next(flights.iterrows())[1]
row
date           2001-01-14 21:55:00
delay                            0
destination                    SMF
distance                       480
origin                         SAN
Name: 0, dtype: object

We can loop through Pandas dataframe and access the index of each row and the content of each row easily. Here we print the iterator from iterrows() and see that we get an index and Series for each row.

for index, row in flights.head(n=2).iterrows():
    print(index, row)
0 date           2001-01-14 21:55:00
delay                            0
destination                    SMF
distance                       480
origin                         SAN
Name: 0, dtype: object
1 date           2001-03-26 20:15:00
delay                          -11
destination                    SLC
distance                       507
origin                         PHX
Name: 1, dtype: object

Since the row data is returned as a Series, we can use the column names to access each column’s value in the row. Here we loop through each row and we assign the row index and row data to variables named index and row. Then we access row data using the column names of the dataframe.

# iterate over rows with iterrows()
for index, row in flights.head().iterrows():
    # access data using column names
    print(index, row['delay'], row['distance'], row['origin'])
0 0 480 SAN
1 -11 507 PHX
2 -3 714 ELP
3 12 342 SJC
4 2 373 SMF

Because iterrows() returns a Series for each row, it does not preserve data types across a row. (Data types are preserved across the columns of a DataFrame.) Let us see a simple example illustrating this.

Let us create a simple data frame with one row with two columns, where one column is an int and the other is a float.

>df = pd.DataFrame([[3, 5.5]], columns=['int_column', 'float_column'])
>print(df)
   int_column  float_column
0           3           5.5

Let us use iterrows() to get the content of a row and print the data type of int_column. In the original dataframe, int_column is an integer. However, when we inspect its data type through iterrows(), int_column has become a float.

>row = next(df.iterrows())[1]
>print(row['int_column'].dtype)
float64

How to Iterate Over Rows of Pandas Dataframe with itertuples()

A better way to iterate/loop through rows of a Pandas dataframe is to use the itertuples() function available in Pandas. As the name suggests, itertuples() loops through the rows of a dataframe and returns a named tuple.
The first element of the tuple is the row's index and the remaining values are the data in the row. Unlike iterrows(), the row data is not stored in a Series.

Let us loop through content of dataframe and print each row with itertuples.

for row in flights.head().itertuples():
    print(row)
Pandas(Index=0, date=Timestamp('2001-01-14 21:55:00'), delay=0, destination='SMF', distance=480, origin='SAN')
Pandas(Index=1, date=Timestamp('2001-03-26 20:15:00'), delay=-11, destination='SLC', distance=507, origin='PHX')
Pandas(Index=2, date=Timestamp('2001-03-05 14:55:00'), delay=-3, destination='LAX', distance=714, origin='ELP')
Pandas(Index=3, date=Timestamp('2001-01-07 12:30:00'), delay=12, destination='SNA', distance=342, origin='SJC')
Pandas(Index=4, date=Timestamp('2001-01-18 12:00:00'), delay=2, destination='LAX', distance=373, origin='SMF')

We can see that itertuples() simply returns the content of each row as a named tuple with the associated column names. Therefore we can access the data with the column names and Index, like so:

for row in flights.head().itertuples():
    print(row.Index, row.date, row.delay)

We will get each row as

0 2001-01-14 21:55:00 0
1 2001-03-26 20:15:00 -11
2 2001-03-05 14:55:00 -3
3 2001-01-07 12:30:00 12
4 2001-01-18 12:00:00 2

Another benefit of itertuples is that it is generally faster than iterrows().



Iterate rows and columns in Spark dataframe

Consider you have a DataFrame like the one below.

To loop through your DataFrame and extract its elements, you can choose one of the approaches below.

Approach 1 - Loop using foreach

Looping over a DataFrame directly with a foreach-style loop is not possible. To do this, first you have to define the schema of the DataFrame (in the original Scala answer, with a case class) and then apply that schema to the DataFrame.


Approach 2 - Loop using rdd

Use collect on top of your DataFrame's RDD. The loop variable will contain each row of the DataFrame as a Row object. To get each element from a row, join its values into a comma-separated string; using the split function you can then access each column value of the row by index.

Note that there are two drawbacks to this approach.
1. If there is a comma in a column value, the data will be wrongly split across adjacent columns.
2. collect is an action that returns all the data to the driver's memory, and the driver's memory might not be large enough to hold it all, ending up with the application failing.

I would recommend using Approach 1.

Approach 3 - Using where and select

You can directly use where and select, which will internally loop through and find the data. To make sure it does not throw an index-out-of-bounds exception, an if condition is used to guard the lookup.

Approach 4 - Using temp tables

You can register the DataFrame as a temp table, which will be stored in Spark's memory. Then you can use a select query, just as with any other database, to query the data, and then collect and save the result in a variable.

You can convert a Row to a Seq with toSeq. Once turned into a Seq you can iterate over it as usual with foreach, map or whatever you need.


You should use foreach on your DataFrame:

But note that this will be printed inside the executors' JVMs, so normally you won't see the output (unless you work with master = local).


PySpark provides map() and mapPartitions() to loop/iterate through rows in an RDD/DataFrame and perform complex transformations. These two return the same number of records as the original DataFrame, but the number of columns could be different (after adds/updates).

PySpark also provides the foreach() and foreachPartition() actions to loop/iterate through each Row in a DataFrame, but these two return nothing. In this article, I will explain how to use these methods to get DataFrame column values and process them.

PySpark Loop Through Rows in DataFrame Examples

In order to explain with examples, let’s create a DataFrame

Mostly, for simple computations, instead of iterating with map() or foreach(), you should use either DataFrame select() or DataFrame withColumn() in conjunction with PySpark SQL functions.

Below I have a map() example to achieve the same output as above.

Using map() to Loop Through Rows in DataFrame

The PySpark map() transformation is used to loop/iterate through the PySpark DataFrame/RDD by applying the transformation function (a lambda) to every element of the RDD/DataFrame. PySpark doesn't have map() on DataFrame; it lives on RDD, hence we need to convert the DataFrame to an RDD first and then use map(). It returns an RDD, and you should convert the RDD back to a PySpark DataFrame if needed.

If you have heavy initialization, use the PySpark mapPartitions() transformation instead of map(): with mapPartitions(), the heavy initialization executes only once per partition instead of once per record.

The above example iterates through every row in a DataFrame by applying transformations to the data; since I need a DataFrame back, I have converted the result of the RDD to a DataFrame with new column names. Note that here I have used an index to get the column values; alternatively, you can also refer to the DataFrame column names while iterating.

Another alternative

You can also create a custom function to perform an operation. The function below executes for every DataFrame row, called from the lambda function.

Using foreach() to Loop Through Rows in DataFrame

Similar to map(), foreach() is also applied to every row of a DataFrame; the difference is that foreach() is an action and it returns nothing. Below are some examples of iterating through a DataFrame using foreach().

Using toPandas() to Iterate

If you have a small dataset, you can also convert the PySpark DataFrame to a pandas DataFrame and use pandas to iterate through it. Use the spark.sql.execution.arrow.pyspark.enabled config to enable Apache Arrow with Spark. Apache Arrow is an in-memory columnar format that Spark uses to transfer data between Python and the JVM.

Collect Data As List and Loop Through

You can also collect the PySpark DataFrame to the driver and iterate through it in plain Python; to avoid materializing everything at once, you can also use toLocalIterator().


In this article, you have learned that iterating/looping through the rows of a PySpark DataFrame can be done with map(), foreach(), converting to pandas, and converting the DataFrame to a Python list. If you want to do simple computations, use either select() or withColumn().

Happy Learning !!





Different ways to iterate over rows in Pandas Dataframe

Python is a great language for doing data analysis, primarily because of the fantastic ecosystem of data-centric Python packages. Pandas is one of those packages and makes importing and analyzing data much easier.

Let’s see the Different ways to iterate over rows in Pandas Dataframe :


Method #1: Using the index attribute of the DataFrame.
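The original code block is missing from this copy; a sketch reconstructed from the output shown below (the dataframe values come from that output):

```python
import pandas as pd

df = pd.DataFrame({"Name": ["Ankit", "Amit", "Aishwarya", "Priyanka"],
                   "Age": [21, 19, 20, 18],
                   "Stream": ["Math", "Commerce", "Arts", "Biology"],
                   "Percentage": [88, 92, 95, 70]})
print("Given Dataframe :\n", df)

print("\nIterating over rows using index attribute :")
# df.index holds the row labels; use them to look up values column by column
for ind in df.index:
    print(df["Name"][ind], df["Stream"][ind])
```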

Output:
Given Dataframe :
        Name  Age    Stream  Percentage
0      Ankit   21      Math          88
1       Amit   19  Commerce          92
2  Aishwarya   20      Arts          95
3   Priyanka   18   Biology          70

Iterating over rows using index attribute :
Ankit Math
Amit Commerce
Aishwarya Arts
Priyanka Biology

Method #2: Using the loc[] function of the DataFrame.
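Again a sketch matching the output below, with the dataframe rebuilt from the printed output:

```python
import pandas as pd

df = pd.DataFrame({"Name": ["Ankit", "Amit", "Aishwarya", "Priyanka"],
                   "Age": [21, 19, 20, 18],
                   "Stream": ["Math", "Commerce", "Arts", "Biology"],
                   "Percentage": [88, 92, 95, 70]})

print("Iterating over rows using loc function :")
# loc[] selects by label: row label i, column labels "Name" and "Age"
for i in range(len(df)):
    print(df.loc[i, "Name"], df.loc[i, "Age"])
```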

Output:
Given Dataframe :
        Name  Age    Stream  Percentage
0      Ankit   21      Math          88
1       Amit   19  Commerce          92
2  Aishwarya   20      Arts          95
3   Priyanka   18   Biology          70

Iterating over rows using loc function :
Ankit 21
Amit 19
Aishwarya 20
Priyanka 18

Method #3: Using the iloc[] function of the DataFrame.
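A sketch matching the output below (same reconstructed dataframe):

```python
import pandas as pd

df = pd.DataFrame({"Name": ["Ankit", "Amit", "Aishwarya", "Priyanka"],
                   "Age": [21, 19, 20, 18],
                   "Stream": ["Math", "Commerce", "Arts", "Biology"],
                   "Percentage": [88, 92, 95, 70]})

print("Iterating over rows using iloc function :")
# iloc[] selects by integer position: row i, columns 0 (Name) and 2 (Stream)
for i in range(len(df)):
    print(df.iloc[i, 0], df.iloc[i, 2])
```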

Output:
Given Dataframe :
        Name  Age    Stream  Percentage
0      Ankit   21      Math          88
1       Amit   19  Commerce          92
2  Aishwarya   20      Arts          95
3   Priyanka   18   Biology          70

Iterating over rows using iloc function :
Ankit Math
Amit Commerce
Aishwarya Arts
Priyanka Biology

Method #4: Using the iterrows() method of the DataFrame.
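A sketch matching the output below (same reconstructed dataframe):

```python
import pandas as pd

df = pd.DataFrame({"Name": ["Ankit", "Amit", "Aishwarya", "Priyanka"],
                   "Age": [21, 19, 20, 18],
                   "Stream": ["Math", "Commerce", "Arts", "Biology"],
                   "Percentage": [88, 92, 95, 70]})

print("Iterating over rows using iterrows() method :")
pairs = []
# iterrows() yields (index, Series) for every row
for index, row in df.iterrows():
    pairs.append((row["Name"], row["Age"]))
    print(row["Name"], row["Age"])
```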

Output:
Given Dataframe :
        Name  Age    Stream  Percentage
0      Ankit   21      Math          88
1       Amit   19  Commerce          92
2  Aishwarya   20      Arts          95
3   Priyanka   18   Biology          70

Iterating over rows using iterrows() method :
Ankit 21
Amit 19
Aishwarya 20
Priyanka 18

Method #5: Using the itertuples() method of the DataFrame.
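A sketch matching the output below (same reconstructed dataframe):

```python
import pandas as pd

df = pd.DataFrame({"Name": ["Ankit", "Amit", "Aishwarya", "Priyanka"],
                   "Age": [21, 19, 20, 18],
                   "Stream": ["Math", "Commerce", "Arts", "Biology"],
                   "Percentage": [88, 92, 95, 70]})

print("Iterating over rows using itertuples() method :")
# itertuples() yields a namedtuple per row; the fields are the column names
for row in df.itertuples():
    print(row.Name, row.Percentage)
```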

Output:
Given Dataframe :
        Name  Age    Stream  Percentage
0      Ankit   21      Math          88
1       Amit   19  Commerce          92
2  Aishwarya   20      Arts          95
3   Priyanka   18   Biology          70

Iterating over rows using itertuples() method :
Ankit 88
Amit 92
Aishwarya 95
Priyanka 70

Method #6: Using the apply() method of the DataFrame.
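A sketch matching the output below (same reconstructed dataframe):

```python
import pandas as pd

df = pd.DataFrame({"Name": ["Ankit", "Amit", "Aishwarya", "Priyanka"],
                   "Age": [21, 19, 20, 18],
                   "Stream": ["Math", "Commerce", "Arts", "Biology"],
                   "Percentage": [88, 92, 95, 70]})

print("Iterating over rows using apply function :")
# apply() with axis=1 calls the lambda once per row and returns a Series
out = df.apply(lambda row: row["Name"] + " " + str(row["Percentage"]), axis=1)
print(out)
```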

Output:
Given Dataframe :
        Name  Age    Stream  Percentage
0      Ankit   21      Math          88
1       Amit   19  Commerce          92
2  Aishwarya   20      Arts          95
3   Priyanka   18   Biology          70

Iterating over rows using apply function :
0        Ankit 88
1         Amit 92
2    Aishwarya 95
3     Priyanka 70
dtype: object



How to iterate over rows in a DataFrame in Pandas


Iteration in Pandas is an anti-pattern and is something you should only do when you have exhausted every other option. You should not use any function with "iter" in its name for more than a few thousand rows or you will have to get used to a lot of waiting.

Do you want to print a DataFrame? Use DataFrame.to_string().

Do you want to compute something? In that case, search for methods in this order (list modified from here):

  1. Vectorization
  2. Cython routines
  3. List Comprehensions (vanilla for loop)
  4. DataFrame.apply(): i) Reductions that can be performed in Cython, ii) Iteration in Python space
  5. DataFrame.itertuples() and iteritems()
  6. DataFrame.iterrows()

itertuples() and iterrows() (both receiving many votes in answers to this question) should be used in very rare circumstances, such as generating row objects/namedtuples for sequential processing, which is really the only thing these functions are useful for.

Appeal to Authority

The documentation page on iteration has a huge red warning box that says:

Iterating through pandas objects is generally slow. In many cases, iterating manually over the rows is not needed [...].

* It's actually a little more complicated than "don't". df.iterrows() is the correct answer to this question, but "vectorize your ops" is the better one. I will concede that there are circumstances where iteration cannot be avoided (for example, some operations where the result depends on the value computed for the previous row). However, it takes some familiarity with the library to know when. If you're not sure whether you need an iterative solution, you probably don't. PS: To know more about my rationale for writing this answer, skip to the very bottom.

Faster than Looping: Vectorization, Cython

A good number of basic operations and computations are "vectorised" by pandas (either through NumPy, or through Cythonized functions). This includes arithmetic, comparisons, (most) reductions, reshaping (such as pivoting), joins, and groupby operations. Look through the documentation on Essential Basic Functionality to find a suitable vectorised method for your problem.

If none exists, feel free to write your own using custom Cython extensions.

Next Best Thing: List Comprehensions*

List comprehensions should be your next port of call if 1) there is no vectorized solution available, 2) performance is important, but not important enough to go through the hassle of cythonizing your code, and 3) you're trying to perform an elementwise transformation on your data. There is a good amount of evidence to suggest that list comprehensions are sufficiently fast (and even sometimes faster) for many common Pandas tasks.

The formula is simple,

If you can encapsulate your business logic into a function, you can use a list comprehension that calls it. You can make arbitrarily complex things work through the simplicity and speed of raw Python code.


List comprehensions assume that your data is easy to work with - what that means is your data types are consistent and you don't have NaNs, but this cannot always be guaranteed.

  1. The first one is more obvious, but when dealing with NaNs, prefer in-built pandas methods if they exist (because they have much better corner-case handling logic), or ensure your business logic includes appropriate NaN handling logic.
  2. When dealing with mixed data types you should iterate over zip(df['A'], df['B'], ...) instead of df[['A', 'B']].to_numpy(), as the latter implicitly upcasts data to the most common type. As an example, if A is numeric and B is string, to_numpy() will cast the entire array to string, which may not be what you want. Fortunately, zipping your columns together is the most straightforward workaround to this.

*Your mileage may vary for the reasons outlined in the Caveats section above.

An Obvious Example

Let's demonstrate the difference with a simple example of adding two pandas columns A + B. This is a vectorizable operation, so it will be easy to contrast the performance of the methods discussed above.
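A small sketch of the comparison (columns A and B as in the example; the timing harness itself is omitted, only the three equivalent computations are shown):

```python
import pandas as pd

df = pd.DataFrame({"A": [1, 2, 3, 4], "B": [10, 20, 30, 40]})

# Vectorized: runs in compiled code, no Python-level loop
vec = df["A"] + df["B"]

# List comprehension over zipped columns: plain Python, still reasonably fast
comp = [a + b for a, b in zip(df["A"], df["B"])]

# iterrows(): a Python loop plus a Series built for every row; the slowest option
iter_sum = [row["A"] + row["B"] for _, row in df.iterrows()]
```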

Benchmarking code, for your reference. The line at the bottom measures a function written in numpandas, a style of Pandas that mixes heavily with NumPy to squeeze out maximum performance. Writing numpandas code should be avoided unless you know what you're doing. Stick to the API where you can (i.e., prefer the pandas API over dropping down to raw NumPy).

I should mention, however, that it isn't always this cut and dry. Sometimes the answer to "what is the best method for an operation" is "it depends on your data". My advice is to test out different approaches on your data before settling on one.

My Personal Opinion *

Most of the analyses performed on the various alternatives to the iter family has been through the lens of performance. However, in most situations you will typically be working on a reasonably sized dataset (nothing beyond a few thousand or 100K rows) and performance will come second to simplicity/readability of the solution.

Here is my personal preference when selecting a method to use for a problem.

For the novice:

Vectorization (when possible); apply(); List Comprehensions; itertuples()/iteritems(); iterrows(); Cython

For the more experienced:

Vectorization (when possible); apply(); List Comprehensions; Cython; itertuples()/iteritems(); iterrows()

Vectorization prevails as the most idiomatic method for any problem that can be vectorized. Always seek to vectorize! When in doubt, consult the docs, or look on Stack Overflow for an existing question on your particular task.

I do tend to go on about how bad apply is in a lot of my posts, but I do concede it is easier for a beginner to wrap their head around what it's doing. Additionally, there are quite a few use cases for apply, as explained in this post of mine.

Cython ranks lower down on the list because it takes more time and effort to pull off correctly. You will usually never need to write code with pandas that demands this level of performance that even a list comprehension cannot satisfy.

* As with any personal opinion, please take with heaps of salt!

Further Reading

* Pandas string methods are "vectorized" in the sense that they are specified on the series but operate on each element. The underlying mechanisms are still iterative, because string operations are inherently hard to vectorize.

Why I Wrote this Answer

A common trend I notice from new users is to ask questions of the form "How can I iterate over my df to do X?", showing code that calls iterrows() while doing something inside a for loop. Here is why. A new user to the library who has not been introduced to the concept of vectorization will likely envision the code that solves their problem as iterating over their data to do something. Not knowing how to iterate over a DataFrame, the first thing they do is Google it and end up here, at this question. They then see the accepted answer telling them how to, and they close their eyes and run this code without ever first questioning whether iteration is the right thing to do.

The aim of this answer is to help new users understand that iteration is not necessarily the solution to every problem, and that better, faster and more idiomatic solutions could exist, and that it is worth investing time in exploring them. I'm not trying to start a war of iteration vs. vectorization, but I want new users to be informed when developing solutions to their problems with this library.


Iterating over rows and columns in Pandas DataFrame

Iteration is a general term for taking each item of something, one after another. A Pandas DataFrame consists of rows and columns, so in order to iterate over a dataframe we iterate it much like a dictionary: just as we iterate over the keys of a dictionary, we iterate over the labels of a dataframe.


In this article, we are using the “nba.csv” file.
In a Pandas DataFrame we can iterate over elements in two ways:

  • Iterating over rows
  • Iterating over columns 

Iterating over rows :

In order to iterate over rows, we can use three functions: iteritems(), iterrows(), and itertuples(). These three functions will help in iterating over rows.

Iteration over rows using iterrows()

In order to iterate over rows, we apply the iterrows() function. This function returns each index value along with a Series containing the data in each row.

We create a dataframe and then apply iterrows() to get each element of its rows.
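The original code blocks did not survive this copy; a sketch with invented student data:

```python
import pandas as pd

df = pd.DataFrame({"name": ["aparna", "pankaj", "sudhir", "Geeku"],
                   "degree": ["MBA", "BCA", "M.Tech", "MBA"],
                   "score": [90, 40, 80, 98]})

# iterrows() yields each index together with the row data as a Series
for index, row in df.iterrows():
    print(index, row["name"], row["score"])
```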





Iteration over rows using iteritems()

Despite the section title, iteritems() actually iterates over columns: it iterates over each column as a key-value pair, with the label as the key and the column values as a Series object.

We create a dataframe and then apply iteritems() to retrieve its columns as (label, Series) pairs.
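The original code blocks did not survive this copy; a sketch with invented data. Note that iteritems() was removed in pandas 2.0, where items() does the same thing:

```python
import pandas as pd

df = pd.DataFrame({"name": ["aparna", "pankaj"], "score": [90, 40]})

# items() (formerly iteritems(), removed in pandas 2.0) yields (label, Series) pairs
for label, content in df.items():
    print("label:", label)
    print(content, "\n")
```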





Iteration over rows using itertuples()

In order to iterate over rows, we apply the itertuples() function. This function returns a tuple for each row in the DataFrame. The first element of the tuple is the row's corresponding index value, while the remaining values are the row values.

We create a dataframe and then apply itertuples() to get a tuple for each row.
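The original code blocks did not survive this copy; a sketch with invented data:

```python
import pandas as pd

df = pd.DataFrame({"name": ["aparna", "pankaj"], "score": [90, 40]})

# each namedtuple starts with the index, followed by the row values
for row in df.itertuples():
    print(row)
```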





Iterating over Columns :

In order to iterate over columns, we create a list of the dataframe columns and then iterate through that list to pull out each dataframe column.

We first create a list of the dataframe columns and then iterate through that list to access each column; the same works on a dataframe read from a CSV file.
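The original code blocks did not survive this copy; a sketch with invented data:

```python
import pandas as pd

df = pd.DataFrame({"name": ["aparna", "pankaj"], "score": [90, 40]})

# list(df) gives the column labels; index into df to pull each column out
columns = list(df)
for col in columns:
    # print the second element of each column
    print(df[col][1])
```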








Pandas is an immensely popular data manipulation framework for Python. In a lot of cases, you might want to iterate over data - either to print it out, or perform some operations on it.

In this tutorial, we'll take a look at how to iterate over rows in a Pandas DataFrame.

If you're new to Pandas, you can read our beginner's tutorial. Once you're familiar, let's look at the three main ways to iterate over a DataFrame:

    Iterating DataFrames with items()

    Let's set up a DataFrame with some data about fictional people:
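    The setup code is missing from this copy; a sketch with invented people (ids as the index):

```python
import pandas as pd

people = {
    "first_name": ["John", "Jane", "Marry"],
    "last_name": ["Smith", "Doe", "Jackson"],
    "age": [34, 29, 37],
}
# hypothetical ids used as the DataFrame index
df = pd.DataFrame(people, index=[101, 102, 103])
print(df)
```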

    Note that we are using ids as our DataFrame's index. Let's take a look at how the DataFrame looks:

    Now, to iterate over this DataFrame, we'll use the items() function:

    This returns a generator:

    We can use this to generate pairs of column name and column data. These pairs will contain a column name and every row of data for that column. Let's loop through column names and their data:
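    The loop itself is missing from this copy; a sketch (dataframe rebuilt here so the snippet is self-contained):

```python
import pandas as pd

df = pd.DataFrame({"first_name": ["John", "Jane"], "age": [34, 29]},
                  index=[101, 102])

# items() yields (column name, Series) pairs, one per column
for col_name, data in df.items():
    print("col_name:", col_name)
    print(data, "\n")
```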

    This results in:

    We've successfully iterated over all rows in each column. Notice that the index column stays the same over the iteration, as this is the associated index for the values. If you don't define an index, then Pandas will enumerate the index column accordingly.

    We can also print a particular row by passing its position to the column data, as we do with Python lists:

    Note that list indices are zero-based, so position 1 would refer to the second row. You will see this output:

    We can also pass the row's index label instead.

    The output would be the same as before:

    Iterating DataFrames with iterrows()

    While items() iterates over the rows column-wise, doing a cycle for each column, we can use iterrows() to get the entire row-data of an index.

    Let's try iterating over the rows with iterrows():
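    A sketch (dataframe rebuilt here so the snippet is self-contained):

```python
import pandas as pd

df = pd.DataFrame({"first_name": ["John", "Jane"], "age": [34, 29]},
                  index=[101, 102])

# iterrows() yields (index, Series) pairs, one per row
for index, row in df.iterrows():
    print(index, row, "\n")
```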

    In the for loop, index represents the index column and row contains the data for that index across all columns. Our output would look like this:

    Likewise, we can iterate over the rows in a certain column by simply passing the index number or the column name to the row. For example, we can selectively print the first column of the row like this:



    They both produce this output:

    Iterating DataFrames with itertuples()

    The itertuples() function will also return a generator, which generates row values in tuples. Let's try this out:
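    A sketch (dataframe rebuilt here so the snippet is self-contained):

```python
import pandas as pd

df = pd.DataFrame({"first_name": ["John", "Jane"], "age": [34, 29]},
                  index=[101, 102])

# itertuples() yields one namedtuple per row, with the index as the first field
for row in df.itertuples():
    print(row)
```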

    You'll see this in your Python shell:

    The itertuples() method has two arguments: index and name.

    We can choose not to display the index column by setting the index parameter to False:

    Our tuples will no longer have the index displayed:

    As you've already noticed, this generator yields namedtuples with the default name of Pandas. We can change this by passing a custom string to the name parameter. You can choose any name you like, but it's always best to pick names relevant to your data:

    Now our output would be:

    Iteration Performance with Pandas

    The official Pandas documentation warns that iteration is a slow process. If you're iterating over a DataFrame to modify the data, vectorization would be a quicker alternative. Also, it's discouraged to modify data while iterating over rows, as Pandas sometimes returns a copy of the data in the row rather than a reference, which means that not all of your changes will actually stick.

    For small datasets you can use the to_string() method to display all the data. For larger datasets that have many columns and rows, you can use the head(n) or tail(n) methods to print out the first or last n rows of your DataFrame (the default value for n is 5).

    Speed Comparison

    To measure the speed of each particular method, we wrapped them into functions that would execute them for 1000 times and return the average time of execution.

    To test these methods, we will use both printing and appending to provide better comparison data and to cover common use cases. In order to decide a fair winner, we will iterate over the DataFrame and use only one value to print or append per loop.

    Here's how the return values look like for each method:

    For example, while items() would cycle column by column:

    iterrows() would provide all column data for a particular row:

    And finally, a single row for itertuples() would look like this:

    Here are the average results in seconds:


    Printing values will take more time and resources than appending in general, and our examples are no exception. While itertuples() performs better when combined with printing, the items() method outperforms the others dramatically when used for appending, and iterrows() remains the slowest in each comparison.

    Please note that these test results highly depend on other factors like OS, environment, computational resources, etc. The size of your data will also have an impact on your results.


    We've learned how to iterate over a DataFrame with three different Pandas methods: items(), iterrows(), and itertuples(). Depending on your data and preferences, you can use one of them in your projects.

