Introduction to PySpark

A Brief Introduction to PySpark

PySpark is a great language for performing exploratory data analysis at scale, building machine learning pipelines, and creating ETLs for a data platform. If you’re already familiar with Python and libraries such as Pandas, then PySpark is a great language to learn in order to create more scalable analyses and pipelines. The goal of this post is to show how to get up and running with PySpark and to perform common tasks.

We’ll use Databricks for a Spark environment, and the NHL dataset from Kaggle as a data source for analysis. This post shows how to read and write data into Spark dataframes, create transformations and aggregations of these frames, visualize results, and perform linear regression. I’ll also show how to mix regular Python code with PySpark in a scalable way, using Pandas UDFs. To keep things simple, we’ll focus on batch processing and avoid some of the complications that arise with streaming data pipelines.

The full notebook for this post is available on GitHub.

Environment

There are a number of different options for getting up and running with Spark:

  • Self Hosted: You can set up a cluster yourself using bare metal machines or virtual machines. Apache Ambari is a useful project for this option, but it’s not my recommended approach for getting up and running quickly.
  • Cloud Providers: Most cloud providers offer Spark clusters: AWS has EMR and GCP has DataProc. I’ve blogged about DataProc in the past, and you can get to an interactive environment quicker than self-hosting.
  • Vendor Solutions: Companies including Databricks and Cloudera provide Spark solutions, making it easy to get up and running with Spark.

The solution to use varies based on security, cost, and existing infrastructure. If you’re trying to get up and running with an environment to learn, then I would suggest using the Databricks Community Edition.

With this environment, it’s easy to get up and running with a Spark cluster and notebook environment. For this tutorial, I created a cluster with the Spark 2.4 runtime and Python 3. To run the code in this post, you’ll need at least Spark version 2.3 for the Pandas UDFs functionality.

Spark Dataframes

The key data type used in PySpark is the Spark dataframe. This object can be thought of as a table distributed across a cluster, and it has functionality similar to dataframes in R and Pandas. If you want to do distributed computation using PySpark, then you’ll need to perform operations on Spark dataframes, and not other Python data types.

It is also possible to use Pandas dataframes when using Spark, by calling toPandas() on a Spark dataframe, which returns a pandas object. However, this function should generally be avoided except when working with small dataframes, because it pulls the entire object into memory on a single node.

One of the key differences between Pandas and Spark dataframes is eager versus lazy execution. In PySpark, operations are delayed until a result is actually needed in the pipeline. For example, you can specify operations for loading a data set from S3 and applying a number of transformations to the dataframe, but these operations won’t immediately be applied. Instead, a graph of transformations is recorded, and once the data is actually needed, for example when writing the results back to S3, then the transformations are applied as a single pipeline operation. This approach is used to avoid pulling the full data frame into memory and enables more effective processing across a cluster of machines. With Pandas dataframes, everything is pulled into memory, and every Pandas operation is immediately applied.

In general, it’s a best practice to avoid eager operations in Spark if possible, since it limits how much of your pipeline can be effectively distributed.

Reading Data

One of the first steps to learn when working with Spark is loading a data set into a dataframe. Once data has been loaded into a dataframe, you can apply transformations, perform analysis and modeling, create visualizations, and persist the results. In Python, you can load files directly from the local file system using Pandas:

import pandas as pd
pd.read_csv("dataset.csv")

In PySpark, loading a CSV file is a little more complicated. In a distributed environment, there is no local storage and therefore a distributed file system such as HDFS, Databricks file store (DBFS), or S3 needs to be used to specify the path of the file.

Generally, when using PySpark I work with data in S3. Many databases provide an unload to S3 function, and it’s also possible to use the AWS console to move files from your local machine to S3. For this post, I’ll use the Databricks file system (DBFS), which provides paths in the form of /FileStore. The first step is to upload the CSV file you’d like to process.

The next step is to read the CSV file into a Spark dataframe as shown below. This code snippet specifies the path of the CSV file, and passes a number of arguments to the read function to process the file. The last step displays a subset of the loaded dataframe, much as you would in Pandas.

file_location = "/FileStore/tables/game_skater_stats.csv"
df = spark.read.format("csv").option("inferSchema", True).option("header", True).load(file_location)
display(df)

I prefer using the parquet format when working with Spark, because it is a file format that includes metadata about the column data types, offers file compression, and is a file format that is designed to work well with Spark. AVRO is another format that works well with Spark. The snippet below shows how to take the dataframe from the past snippet and save it as a parquet file on DBFS, and then reload the dataframe from the saved parquet file.

df.write.save('/FileStore/parquet/game_skater_stats', format='parquet')
df = spark.read.load("/FileStore/parquet/game_skater_stats")
display(df)

The result of this step is the same, but the execution flow is significantly different. When reading CSV files into dataframes, Spark performs the operation in an eager mode, meaning that all of the data is loaded into memory before the next step begins execution, while a lazy approach is used when reading files in the parquet format. Generally, you want to avoid eager operations when working with Spark, and if I need to process large CSV files I’ll first transform the data set to parquet format before executing the rest of the pipeline.

Often you’ll need to process a large number of files, such as hundreds of parquet files located at a certain path or directory in DBFS. With Spark, you can include a wildcard in a path to process a collection of files. For example, you can load a batch of parquet files from S3 as follows:

df = spark.read.load("s3a://my_bucket/game_skater_stats/*.parquet")

This approach is useful if you have a separate parquet file per day, or if there is a prior step in your pipeline that outputs hundreds of parquet files.

If you want to read data from a database, such as Redshift, it’s a best practice to first unload the data to S3 before processing it with Spark. In Redshift, the unload command can be used to export data to S3 for processing:

unload ('select * from data_to_process')
to 's3://my_bucket/game_data'
iam_role 'arn:aws:iam::123:role/RedshiftExport';

There are also libraries for specific databases, such as spark-redshift, that make this process easier to perform.

Writing Data

Similar to reading data with Spark, it’s not recommended to write data to local storage when using PySpark. Instead, you should use a distributed file system such as S3 or HDFS. If you’re going to be processing the results with Spark, then parquet is a good format to use for saving dataframes. The snippet below shows how to save a dataframe to DBFS and S3 as parquet.

# DBFS (Parquet)
df.write.save('/FileStore/parquet/game_stats', format='parquet')

# S3 (Parquet)
df.write.parquet("s3a://my_bucket/game_stats", mode="overwrite")

When saving a dataframe in parquet format, it is often partitioned into multiple files, as shown in the image below.

If you need the results in a CSV file, then a slightly different output step is required. One of the main differences in this approach is that all of the data will be pulled to a single node before being output to CSV. This approach is recommended when you need to save a small dataframe and process it in a system outside of Spark. The snippet below shows how to save a dataframe as a single CSV file on DBFS and S3.

# DBFS (CSV)
df.write.save('/FileStore/parquet/game_stats.csv', format='csv')

# S3 (CSV)
df.coalesce(1).write.format("com.databricks.spark.csv") \
    .option("header", "true").save("s3a://my_bucket/game_stats.csv")

Another common output for Spark scripts is a NoSQL database such as Cassandra, DynamoDB, or Couchbase. This is outside the scope of this post, but one approach I’ve seen used in the past is writing a dataframe to S3, and then kicking off a loading process that tells the NoSQL system to load the data from the specified path on S3.

I’ve also omitted writing to a streaming output source, such as Kafka or Kinesis. These systems are more useful to use when using Spark Streaming.

Transforming Data

Many different types of operations can be performed on Spark dataframes, much like the wide variety of operations that can be applied on Pandas dataframes. One of the ways of performing operations on Spark dataframes is via Spark SQL, which enables dataframes to be queried as if they were tables. The snippet below shows how to find top scoring players in the data set.

df.createOrReplaceTempView("stats")

display(spark.sql("""
select player_id, sum(1) as games, sum(goals) as goals
from stats
group by 1
order by 3 desc
limit 5
"""))

The result is a list of player IDs, number of game appearances, and total goals scored in these games. If we want to show the names of the players then we’d need to load an additional file, make it available as a temporary view, and then join it using Spark SQL.

In the snippet above, I’ve used the display command to output a sample of the data set, but it’s also possible to assign the results to another dataframe, which can be used in later steps in the pipeline. The code below shows how to perform these steps, where the first query results are assigned to a new dataframe which is then assigned to a temporary view and joined with a collection of player names.
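
The query below joins against a names view; here is a minimal sketch of loading that dataframe first, assuming the player names CSV from the Kaggle data set has been uploaded to DBFS and is read without a header (which is why the query refers to the default _c0, _c1, and _c2 column names):

names = spark.read.format("csv").load("/FileStore/tables/player_info.csv")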

top_players = spark.sql("""
select player_id, sum(1) as games, sum(goals) as goals
from stats
group by 1
order by 3 desc
limit 5
""")

top_players.createOrReplaceTempView("top_players")
names.createOrReplaceTempView("names")

display(spark.sql("""
select p.player_id, goals, _c1 as First, _c2 as Last
from top_players p
join names n
on p.player_id = n._c0
order by 2 desc
"""))

The result of this process is shown below, identifying Alex Ovechkin as a top scoring player in the NHL, based on the Kaggle data set.

There are Spark dataframe operations for common tasks such as adding new columns, dropping columns, performing joins, and calculating aggregate and analytics statistics, but when getting started it may be easier to perform these operations using Spark SQL. Also, it’s easier to port code from Python to PySpark if you’re already using libraries such as PandaSQL or framequery to manipulate Pandas dataframes using SQL.

Like most operations on Spark dataframes, Spark SQL operations are performed in a lazy execution mode, meaning that the SQL steps won’t be evaluated until a result is needed. Spark SQL provides a great way of digging into PySpark, without first needing to learn a new library for dataframes.

If you’re using Databricks, you can also create visualizations directly in a notebook, without explicitly using visualization libraries. For example, we can plot the average number of goals per game, using the Spark SQL code below.

display(spark.sql("""
select cast(substring(game_id, 1, 4) || '-'
|| substring(game_id, 5, 2) || '-01' as Date) as month
, sum(goals)/count(distinct game_id) as goals_per_game
from stats
group by 1
order by 1
"""))

The initial output displayed in the Databricks notebook is a table of results, but we can use the plot functionality to transform the output into different visualizations, such as the bar chart shown below. This approach doesn’t support every visualization that a data scientist may need, but it does make it much easier to perform exploratory data analysis in Spark. If needed, we can use the toPandas() function to create a Pandas dataframe on the driver node, which means that any Python plotting library can be used for visualizing the results. However, this approach should be used for only small dataframes, since all of the data is eagerly fetched into memory on the driver node.

I also looked at average goals per shot, for players with at least 5 goals.

display(spark.sql("""
select cast(goals/shots * 50 as int)/50.0 as Goals_per_shot
,sum(1) as Players
from (
select player_id, sum(shots) as shots, sum(goals) as goals
from stats
group by 1
having goals >= 5
)
group by 1
order by 1
"""))

The results for this transformation are shown in the chart below. Most of the players with at least 5 goals complete shots about 4% to 12% of the time.

MLlib

One of the common use cases of Python for data scientists is building predictive models. While scikit-learn is great when working with pandas, it doesn’t scale to large data sets in a distributed environment (although there are ways for it to be parallelized with Spark). When building predictive models with PySpark and massive data sets, MLlib is the preferred library because it natively operates on Spark dataframes. Not every algorithm in scikit-learn is available in MLlib, but there is a wide variety of options covering many use cases.

In order to use one of the supervised algorithms in MLlib, you need to set up your dataframe with a vector of features and a label as a scalar. Once prepared, you can use the fit function to train the model. The snippet below shows how to combine several of the columns in the dataframe into a single features vector using a VectorAssembler. We use the resulting dataframe to call the fit function and then generate summary statistics for the model.

# MLlib imports
from pyspark.ml.feature import VectorAssembler
from pyspark.ml.regression import LinearRegression

# Create a vector representation for features
assembler = VectorAssembler(inputCols=['shots', 'hits', 'assists',
    'penaltyMinutes', 'timeOnIce', 'takeaways'], outputCol="features")
train_df = assembler.transform(df)

# Fit a linear regression model
lr = LinearRegression(featuresCol='features', labelCol='goals')
lr_model = lr.fit(train_df)

# Output statistics
trainingSummary = lr_model.summary
print("Coefficients: " + str(lr_model.coefficients))
print("RMSE: %f" % trainingSummary.rootMeanSquaredError)
print("R2: %f" % trainingSummary.r2)

The model predicts how many goals a player will score based on the number of shots, time in game, and other factors. However, the performance of this model is poor: it results in a root mean squared error (RMSE) of 0.375 and an R-squared value of 0.125. The coefficient with the largest value was the shots column, but this did not provide enough signal for the model to be accurate.

There are a number of additional steps to consider when building an ML pipeline with PySpark, including training and test data sets, hyperparameter tuning, and model storage. The snippet above is simply a starting point for exploring MLlib.

Pandas UDFs

One of the features in Spark that I’ve been using more recently is Pandas user-defined functions (UDFs), which enable you to perform distributed computing with Pandas dataframes within a Spark environment. The general way that these UDFs work is that you first partition a Spark dataframe using a groupby statement, and each partition is sent to a worker node and translated into a Pandas dataframe that gets passed to the UDF. The UDF then returns a transformed Pandas dataframe which is combined with all of the other partitions and then translated back to a Spark dataframe. The end result is really useful: you can use Python libraries that require Pandas, but now scale to massive data sets, as long as you have a good way of partitioning your dataframe. Pandas UDFs were introduced in Spark 2.3, and I’ll be talking about how we use this functionality at Zynga during Spark Summit 2019.

Curve fitting is a common task that I perform as a data scientist. The code snippet below shows how to perform curve fitting to describe the relationship between the number of shots and hits that a player records during the course of a game. The snippet shows how we can perform this task for a single player by calling toPandas() on a data set filtered to a single player. The output of this step is two parameters (linear regression coefficients) that attempt to describe the relationship between these variables.

# Sample data for a player
sample_pd = spark.sql("""
select * from stats
where player_id = 8471214
""").toPandas()

# Import python libraries
from scipy.optimize import leastsq
import numpy as np

# Define a function to fit
def fit(params, x, y):
    return (y - (params[0] + x * params[1]))

# Fit the curve and show the results
result = leastsq(fit, [1, 0], args=(sample_pd.shots, sample_pd.hits))
print(result)

If we want to calculate this curve for every player and have a massive data set, then the toPandas() call will fail due to an out of memory exception. We can scale this operation to the entire data set by calling groupby() on the player_id, and then applying the Pandas UDF shown below. The function takes as input a Pandas dataframe that describes the gameplay statistics of a single player, and returns a summary dataframe that includes the player_id and fitted coefficients. Each of the summary Pandas dataframes are then combined into a Spark dataframe that is displayed at the end of the code snippet. One additional piece of setup for using Pandas UDFs is defining the schema for the resulting dataframe, where the schema describes the format of the Spark dataframe generated from the apply step.

# Load necessary libraries
from pyspark.sql.functions import pandas_udf, PandasUDFType
from pyspark.sql.types import *
import pandas as pd

# Create the schema for the resulting data frame
schema = StructType([StructField('ID', LongType(), True),
                     StructField('p0', DoubleType(), True),
                     StructField('p1', DoubleType(), True)])

# Define the UDF, input and outputs are Pandas DFs
@pandas_udf(schema, PandasUDFType.GROUPED_MAP)
def analyze_player(sample_pd):

    # Return empty params if there is not enough data
    if (len(sample_pd.shots) <= 1):
        return pd.DataFrame({'ID': [sample_pd.player_id[0]],
                             'p0': [0], 'p1': [0]})

    # Perform curve fitting
    result = leastsq(fit, [1, 0], args=(sample_pd.shots, sample_pd.hits))

    # Return the parameters as a Pandas DF
    return pd.DataFrame({'ID': [sample_pd.player_id[0]],
                         'p0': [result[0][0]], 'p1': [result[0][1]]})

# Perform the UDF and show the results
player_df = df.groupby('player_id').apply(analyze_player)
display(player_df)

The output of this process is shown below. We now have a dataframe that summarizes the curve fit per player, and can run this operation on a massive data set. When working with huge data sets, it’s important to choose or generate a partition key to achieve a good tradeoff between the number and size of data partitions.

Best Practices

I’ve covered some of the common tasks for using PySpark, but also wanted to provide some advice on making it easier to take the step from Python to PySpark. Here are some of the best practices I’ve collected based on my experience porting a few projects between these environments:

  • Avoid dictionaries, use dataframes: using Python data types such as dictionaries means that the code might not be executable in a distributed mode. Instead of using keys to index values in a dictionary, consider adding another column to a dataframe that can be used as a filter.
  • Use toPandas sparingly: Calling toPandas() will cause all data to be loaded into memory on the driver node, and prevents operations from being performed in a distributed mode. It’s fine to use this function when data has already been aggregated and you want to make use of familiar Python plotting tools, but it should not be used for large dataframes.
  • Avoid for loops: If possible, it’s preferred to rewrite for-loop logic using the groupby-apply pattern to support parallelized code execution. I’ve noticed that focusing on using this pattern in Python has also resulted in cleaner code that is easier to translate to PySpark.
  • Try minimizing eager operations: In order for your pipeline to be as scalable as possible, it’s good to avoid eager operations that pull full dataframes into memory. I’ve noticed that reading in CSVs is an eager operation, and my work around is to save the dataframe as parquet and then reload it from parquet to build more scalable pipelines.
  • Use framequery/pandasql to make porting easier: If you’re working with someone else’s Python code, it can be tricky to decipher what some of the Pandas operations are achieving. If you plan on porting your code from Python to PySpark, then using a SQL library for Pandas can make this translation easier.

I’ve found that spending time writing code in PySpark has also improved my Python coding skills.

Conclusion

PySpark is a great language for data scientists to learn because it enables scalable analysis and ML pipelines. If you’re already familiar with Python and Pandas, then much of your knowledge can be applied to Spark. I’ve shown how to perform some common operations with PySpark to bootstrap the learning process. I also showed off some recent Spark functionality with Pandas UDFs that enable Python code to be executed in a distributed mode. There are great environments that make it easy to get up and running with a Spark cluster, making now a great time to learn PySpark!

Source: https://towardsdatascience.com/a-brief-introduction-to-pyspark-ff4284701873

Beginners Guide to PySpark

PySpark Tutorial

PySpark is an API for Apache Spark, an open-source, distributed processing system used for big data workloads that was originally developed in the Scala programming language at UC Berkeley. Spark has development APIs in Scala, Java, Python, and R, and supports code reuse across multiple workloads: batch processing, interactive queries, real-time analytics, machine learning, and graph processing. It utilizes in-memory caching and optimized query execution for fast analytic queries against data of any size. It does not have its own file system like Hadoop HDFS; instead, it supports most popular storage systems, such as the Hadoop Distributed File System (HDFS), HBase, Cassandra, Amazon S3, Amazon Redshift, Couchbase, etc.

The Advantages of using Apache Spark:

  • It runs programs up to 100x faster than Hadoop MapReduce in memory, or 10x faster on disk. This is possible because it does the processing in the main memory of the worker nodes and avoids unnecessary I/O operations.
  • It is user-friendly: its APIs in popular languages hide the complexity of distributed processing behind simple, high-level operators, which dramatically lowers the amount of code required.
  • It can be deployed through Mesos, Hadoop via YARN, or Spark’s own cluster manager.
  • Real-time computation and low latency because of in-memory computation.

In this article we will cover:

  1. Setting Environment in Google Colab
  2. Spark Session
  3. Reading Data
  4. Structuring Data Using Spark Schema
  5. Different Methods to Inspect Data
  6. Column Manipulation
  7. Dealing with Missing Values
  8. Querying Data
  9. Data Visualization
  10. Write/Save Data to File
  11. Conclusion and
  12. References

Let’s Get Started ✨

To run PySpark on a local machine we need Java and other software, so instead of the heavy installation procedure we use Google Colaboratory, which has better hardware specifications and also comes with a wide range of libraries for data science and machine learning. We need to install the pyspark and py4j packages. Py4J enables Python programs running in a Python interpreter to dynamically access Java objects in a Java Virtual Machine. The command to install the above-mentioned packages is shown below.
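
In a Colab notebook the packages can be installed with pip (the leading exclamation mark runs the command in the notebook’s shell):

!pip install pyspark py4j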

SparkSession has become the entry point to PySpark since version 2.0; earlier, SparkContext was used as the entry point. SparkSession is the entry point to the underlying PySpark functionality for programmatically creating PySpark RDDs, DataFrames, and Datasets. It can be used in place of SQLContext, HiveContext, and other contexts defined before 2.0. You should also know that SparkSession internally creates a SparkConf and SparkContext with the configuration provided to it. A SparkSession can be created using the builder pattern.

Creating SparkSession

To create a SparkSession, you use the builder pattern method SparkSession.builder together with the following methods:

  • master() – if you are running on a cluster, you need to pass the master name as an argument; usually this is yarn or mesos, depending on your cluster setup, and local[x] is used when running in standalone mode. x should be an integer greater than 0 and represents how many partitions should be created when using RDDs, DataFrames, and Datasets. Ideally, the value should be the number of CPU cores.
  • appName() – used to set the name of your application.
  • getOrCreate() – returns an existing SparkSession if one exists; otherwise, it creates a new SparkSession.

Here is an example of creating a SparkSession.
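
A minimal sketch; the application name used here is arbitrary:

from pyspark.sql import SparkSession

spark = SparkSession.builder \
    .master("local[*]") \
    .appName("pyspark_tutorial") \
    .getOrCreate()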

PySpark can read data from various file formats such as Comma Separated Values (CSV), JavaScript Object Notation (JSON), Parquet, etc. To read different file formats, we use the spark.read property of the SparkSession. Here are examples of reading data from different file formats:
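
A few sketches, using placeholder file names:

# CSV
csv_df = spark.read.csv("data.csv", header=True, inferSchema=True)

# JSON
json_df = spark.read.json("data.json")

# Parquet
parquet_df = spark.read.parquet("data.parquet")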

Let’s read the U.S. stock price data from January 2019 to July 2020, which is available as a Kaggle dataset.

The code to read the data, which is in CSV file format, is shown below.
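
A sketch, assuming the downloaded Kaggle file has been uploaded as stocks_price_final.csv:

data = spark.read.csv("stocks_price_final.csv", header=True, inferSchema=True)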

Let’s see the schema of the data using the printSchema() method.
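
For the dataframe loaded above:

data.printSchema()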

A Spark schema is the structure of the DataFrame or Dataset. We can define it using the StructType class, which is a collection of StructField objects that define the column name (String), column type (DataType), nullable column (Boolean), and metadata (MetaData). Spark infers the schema from the data; however, sometimes the inferred data type may not be correct, or we may need to define our own column names and data types, especially while working with unstructured and semi-structured data.

Let’s see how we can use this to structure our data:
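
A sketch of defining a schema and applying it while reading the CSV file; the column names and types below are assumptions about the stock price data set:

from pyspark.sql.types import (StructType, StructField, StringType,
                               DoubleType, LongType, DateType)

schema = StructType([
    StructField('symbol', StringType(), True),
    StructField('date', DateType(), True),
    StructField('open', DoubleType(), True),
    StructField('high', DoubleType(), True),
    StructField('low', DoubleType(), True),
    StructField('close', DoubleType(), True),
    StructField('adjusted', DoubleType(), True),
    StructField('volume', LongType(), True),
    StructField('sector', StringType(), True),
    StructField('industry', StringType(), True)
])

data = spark.read.csv("stocks_price_final.csv", header=True, schema=schema)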

The above code shows how to create a structure using StructType and StructField. The created structure is then passed to the schema parameter while reading the data with spark.read.csv(). Let’s see the schema of the structured data:

There are various methods used to inspect the data: schema, dtypes, show, head, first, take, describe, columns, count, distinct, and printSchema. Let’s look at what each of them does, with a combined example after the list.

  • schema(): Returns the schema of the dataframe, shown here with respect to the US stock price data.
  • dtypes: Returns a list of tuples with the column names and their data types.
  • head(n): Returns the first n rows as a list.
  • show(): Displays the first 20 rows by default; it also takes a number as a parameter to control how many rows are displayed, for example show(5).
  • first(): Returns the first row of the data.
  • take(n): Returns the first n rows of the data.
  • describe(): Computes summary statistics for the columns with numeric data types.
  • columns: Returns a list containing the column names of the data.
  • count(): Returns the count of the number of rows in the data.
  • distinct(): Returns the distinct rows of the data (use distinct().count() for the number of distinct rows).
  • printSchema(): Displays the schema of the data.
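
A combined sketch of these methods, using the data dataframe from above:

data.printSchema()              # prints column names, types, and nullability
print(data.dtypes)              # list of (column name, data type) tuples
data.show(5)                    # display the first 5 rows in tabular form
print(data.head(3))             # first 3 rows as a list of Row objects
print(data.first())             # the first row
print(data.take(2))             # the first 2 rows
data.describe().show()          # summary statistics for the numeric columns
print(data.columns)             # list of column names
print(data.count())             # number of rows
print(data.distinct().count())  # number of distinct rows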

Let’s see the different methods that are used to add, update, and delete columns of the data.

  1. Adding a column: Use the withColumn() method, which takes two parameters, the column name and the data, to add a new column to the existing data. See the example below.

  2. Updating a column: Use withColumnRenamed(), which takes two parameters, the existing column name and the new column name, to rename the existing column. See the example below.

  3. Deleting a column: Use the drop() method, which takes the column name and returns the data without that column. See the example below.
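
A combined sketch of the three operations; the column names used here are assumptions about the stock price data set:

from pyspark.sql.functions import col

# Add a new column derived from existing columns
data_with_change = data.withColumn('daily_change', col('close') - col('open'))

# Rename an existing column
data_renamed = data.withColumnRenamed('adjusted', 'adjusted_close')

# Drop a column
data_trimmed = data.drop('open')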

We often encounter missing values while dealing with real-world data. These missing values are encoded as NaNs, blanks, or other placeholders. There are various techniques to deal with missing values; some of the popular ones are:

  • Remove: Remove the rows having missing values in any one of the columns.
  • Impute with Mean/Median: Replace the missing values using the Mean/Median of the respective column. It’s easy, fast, and works well with small numeric datasets.
  • Impute with Most Frequent Values: As the name suggests, use the most frequent value in the column to replace the missing values of that column. This works well with categorical features but may also introduce bias into the data.
  • Impute using KNN: K-Nearest Neighbors is a classification algorithm that uses feature similarity, with different distance metrics such as Euclidean, Mahalanobis, Manhattan, Minkowski, and Hamming, to fill in missing values from similar data points. Depending on the dataset this can be more accurate than the methods above, but it is computationally expensive and sensitive to outliers.

Let’s see how we can use PySpark to deal with missing values:
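
A sketch of the basic PySpark tools for the first two approaches; the column names are assumptions:

from pyspark.sql.functions import mean

# Remove rows that have a missing value in any column
data_dropped = data.na.drop()

# Replace missing values in specific columns with a constant
data_filled = data.na.fill({'sector': 'Unknown', 'volume': 0})

# Impute a numeric column with its mean
mean_close = data.select(mean('close')).first()[0]
data_imputed = data.na.fill({'close': mean_close})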

PySpark and Spark SQL provide a wide range of methods and functions to query data with ease. Here are a few of the most used methods:

  • Select
  • Filter
  • Between
  • When
  • Like
  • GroupBy
  • Aggregations

Select

It is used to select single or multiple columns using the names of the columns. Here is a simple example:
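
A sketch (column names assumed):

data.select('date', 'open', 'close').show(5)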

Filter

Filter the data based on a given condition; you can also combine multiple conditions using the AND (&), OR (|), and NOT (~) operators. Here is an example that fetches the stock prices from January 2020.
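
A sketch, assuming a date column named date:

from pyspark.sql.functions import col

data.filter((col('date') >= '2020-01-01') & (col('date') <= '2020-01-31')).show(5)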

Between

This method returns True for rows where the column value falls between the two values passed to between(). Let’s see an example that fetches the data where the adjusted price is between 100 and 500.
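
A sketch, assuming the adjusted price column is named adjusted:

data.filter(data.adjusted.between(100.0, 500.0)).show(5)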

When

It returns 0 or 1 depending on the given condition; the example below shows how to select the opening and closing prices of stocks when the adjusted price is greater than or equal to 200.
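
A sketch (column names assumed):

from pyspark.sql.functions import when

data.select('open', 'close',
            when(data.adjusted >= 200.0, 1).otherwise(0).alias('above_200')).show(5)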

Like

It is similar to the LIKE operator in SQL. The example below shows how to extract the sector names that start with either M or C using rlike.
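
A sketch, assuming a sector column:

data.select('sector').filter(data.sector.rlike('^[MC]')).distinct().show()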

GroupBy

As the name suggests, it groups the data by the given column name and can perform different operations such as sum, mean, min, max, etc. The example below shows how to get the average opening, closing, and adjusted stock price for each industry.
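
A sketch (column names assumed):

data.groupBy('industry').avg('open', 'close', 'adjusted').show(5)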

Aggregation

PySpark provides built-in standard aggregate functions defined in the DataFrame API; these come in handy when we need to perform aggregate operations on columns of the data. Aggregate functions operate on a group of rows and calculate a single return value for every group. The example below shows how to display the minimum, maximum, and average of the opening, closing, and adjusted stock prices from January 2019 to January 2020 for each sector.
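
A sketch (column names assumed):

from pyspark.sql import functions as F

data.filter((F.col('date') >= '2019-01-01') & (F.col('date') <= '2020-01-31')) \
    .groupBy('sector') \
    .agg(F.min('open'), F.max('open'), F.avg('open'),
         F.min('close'), F.max('close'), F.avg('close'),
         F.min('adjusted'), F.max('adjusted'), F.avg('adjusted')) \
    .show(5)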

We are going to use matplotlib and pandas to visualize the data: the toPandas() method converts the data into a pandas dataframe, and then we use the plot() method on that dataframe to visualize it. The code below shows how to display a bar graph of the average opening, closing, and adjusted stock price for each sector.
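
A sketch of the bar chart; the column names are assumed and the figure size is arbitrary:

import matplotlib.pyplot as plt

sector_avg = data.groupBy('sector').avg('open', 'close', 'adjusted').toPandas()
sector_avg.plot(kind='bar', x='sector', figsize=(12, 6))
plt.ylabel('Average price')
plt.show()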

Similarly, let’s visualize the average opening, closing, and adjusted price for each industry.

Let’s see the time-series graph of the average opening, closing, and adjusted stock price for the technology sector.

Source: https://towardsdatascience.com/beginners-guide-to-pyspark-bbe3b553b79f

PySpark - Introduction



In this chapter, we will get ourselves acquainted with what Apache Spark is and how was PySpark developed.

Spark – Overview

Apache Spark is a lightning-fast real-time processing framework. It does in-memory computations to analyze data in real time. It came into the picture because Apache Hadoop MapReduce performed batch processing only and lacked a real-time processing feature; hence, Apache Spark was introduced, as it can perform stream processing in real time and can also take care of batch processing.

Apart from real-time and batch processing, Apache Spark also supports interactive queries and iterative algorithms. Apache Spark has its own cluster manager, where it can host its applications. It can leverage Apache Hadoop for both storage and processing: it uses HDFS (Hadoop Distributed File System) for storage, and it can run Spark applications on YARN as well.

PySpark – Overview

Apache Spark is written in the Scala programming language. To support Python with Spark, the Apache Spark community released a tool, PySpark. Using PySpark, you can work with RDDs in the Python programming language as well. It is because of a library called Py4j that this is possible.

PySpark offers the PySpark shell, which links the Python API to the Spark core and initializes the Spark context. The majority of data scientists and analytics experts today use Python because of its rich library set, so integrating Python with Spark is a boon to them.

Source: https://www.tutorialspoint.com/pyspark/pyspark_introduction.htm