Who has performed better, Left-Handed or Right-Handed Pitchers? Has this trend changed over time?


In order to determine whether there is a difference in performance between right-handed and left-handed pitchers, we look at historical baseball data available on the Internet. The specific source chosen here is the Lahman Baseball Database, which covers the years 1871 to 2016: http://www.seanlahman.com/baseball-database.html

This database has 27 tables. However, to answer the question above we need to cross-reference data from two of them. The Master.csv table lists every player who has played the game from 1871 to 2016, along with biographical details such as year of birth and throwing hand. Its schema is listed below.

Table 1: Master Table Schema

Field Description
playerID A unique code assigned to each player
birthYear Year player was born
birthMonth Month player was born
birthDay Day player was born
birthCountry Country where player was born
birthState State where player was born
birthCity City where player was born
deathYear Year player died
deathMonth Month player died
deathDay Day player died
deathCountry Country where player died
deathState State where player died
deathCity City where player died
nameFirst Player's first name
nameLast Player's last name
nameGiven Player's given name
weight Player's weight in pounds
height Player's height in inches
bats Player's batting hand (left, right, or both)
throws Player's throwing hand (left or right)
debut Date that player made first appearance
finalGame Date that player made last appearance
retroID ID used by retrosheet
bbrefID ID used by Baseball Reference website
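
Before loading anything into Spark, it is worth confirming that the file on disk actually matches this schema. A minimal sanity check in plain Python, assuming Master.csv sits in the working directory:

import csv

# Read just the header row of Master.csv and compare it against Table 1
with open('Master.csv', newline='') as f:
    header = next(csv.reader(f))

print(header)              # should list playerID, birthYear, ..., throws, ...
print('throws' in header)  # the column this analysis depends on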

The Pitching.csv table lists the pitching statistics for every player, for every year that he played the game of baseball between 1871 and 2016. Its schema is listed below.

Table 2: Pitching Table Schema

Field Description
playerID A unique code assigned to each player
yearID Year
stint Player's stint (order of appearances within a season)
teamID Team
lgID League
W Wins
L Losses
G Games Played
GS Games Started
CG Complete Games
SHO Shutout
SV Saves
IPOuts Outs Pitched
H Hits Allowed
ER Earned Runs
HR Home Runs Allowed
BB Walks
SO Strike Outs
BAOpp Opponents Batting Average
ERA Earned Run Average
IBB Intentional Walks
WP Wild Pitches
HBP Batters Hit By Pitches
BK Balks
BFP Batters Faced by Pitcher
GF Games Finished
R Runs Allowed
SH Sacrifice Hits by Opp Batters
SF Sacrifice Flies by Opp Batters
GIDP Grounded into Double Plays
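
One detail worth making concrete: the database stores outs pitched (IPOuts) rather than innings pitched, and ERA scales earned runs to a nine-inning game. A small worked example (the era function here is ours, purely illustrative):

def era(earned_runs, ipouts):
    # ERA = 9 * ER / IP, where innings pitched IP = IPouts / 3
    return 9.0 * earned_runs / (ipouts / 3.0)

print(era(50, 450))   # 50 earned runs over 150 innings -> ERA of 3.0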

We utilize Apache Spark to perform the required database operations to answer our questions. The code below explains the process of answering these questions and shows how straightforward it is to use Spark to analyze Big Data. The query is implemented in Python and can be run either on a local server or on a cluster of servers. The example below was run on an Amazon EC2 Free Tier Ubuntu server instance, set up with Python (Anaconda 3-4.1.1), Java, Scala, py4j, Spark, and Hadoop. The code was written and executed in a Jupyter Notebook. Several guides are available on the Internet describing how to install and run Spark on an EC2 instance; one that covers all these facets is https://medium.com/@josemarcialportilla/getting-spark-python-and-jupyter-notebook-running-on-amazon-ec2-dec599e1c297
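
If you follow a guide like the one linked above, the usual last step is making the Spark installation visible to Python inside Jupyter. One common approach is the findspark package; a sketch, where the installation path is an assumption and should be replaced with your own:

import findspark

# Point Python at the local Spark installation before importing pyspark.
# This path is a placeholder; use wherever Spark was unpacked on your machine.
findspark.init('/home/ubuntu/spark-2.2.0-bin-hadoop2.7')

import pyspark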

PySpark Libraries

Import the PySpark libraries to allow Python to interact with Spark. A description of the basic functionality of each of these libraries is provided in the code comments below. A more detailed explanation of the functionality of each library can be found in Apache's documentation on Spark: https://spark.apache.org/docs/latest/api/python/index.html

In [95]:
# Import SparkContext. This is the main entry point for Spark functionality.
# Import SparkConf. We use SparkConf to easily change configuration settings when switching between local mode and cluster mode.
# Import SQLContext from pyspark.sql. We use it to read in data in csv format, the native format of our database.
# Import count, avg, round and cume_dist from pyspark.sql.functions. These provide the math operations needed to answer our questions.
# Import Window from pyspark.sql.window to allow us to effectively partition and analyze data.

from pyspark import SparkContext, SparkConf
from pyspark.sql import SQLContext
from pyspark.sql.functions import count, avg, round, cume_dist
from pyspark.sql.window import Window

PySpark Configuration & Instantiation

We configure Spark for local mode or cluster mode, set our application name, and configure logging. Several other configuration settings can be specified as well; a detailed explanation of these can be found at https://spark.apache.org/docs/latest/configuration.html

We pass the configuration to an instance of a SparkContext object so that we can begin using Apache Spark.

In [96]:
# The Master will need to change when running on a cluster. 
# If we need to specify multiple cores we can list something like local[2] for 2 cores, or local[*] to use all available cores. 
# All the available Configuration settings can be found at https://spark.apache.org/docs/latest/configuration.html

sc_conf = SparkConf().setMaster('local[*]').setAppName('Question4').set('spark.logConf', True)
In [97]:
# We instantiate a SparkContext object with the SparkConfig

sc = SparkContext(conf=sc_conf)
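
For reference, when moving to a cluster only the master URL needs to change. A sketch, where 'spark://master-node:7077' is a placeholder for a hypothetical standalone cluster master:

# Example only: the same configuration pointed at a standalone cluster master
cluster_conf = SparkConf() \
    .setMaster('spark://master-node:7077') \
    .setAppName('Question4') \
    .set('spark.logConf', True)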

PySpark CSV File Processing

We use the SQLContext library to easily read the csv files 'Master.csv' and 'Pitching.csv'. These files are currently stored in Amazon S3 storage (s3://cs498ccafinalproject/) and are publicly available for download. They were copied over to the local EC2 instance using the AWS command line interface command

aws s3 cp s3://cs498ccafinalproject . --recursive
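
Equivalently, assuming AWS credentials are configured on the instance, the same files can be fetched directly in Python with boto3 (a sketch, not part of the original workflow):

import boto3

# Download the two CSV files from the public bucket into the working directory
s3 = boto3.client('s3')
for key in ['Master.csv', 'Pitching.csv']:
    s3.download_file('cs498ccafinalproject', key, key)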

In [98]:
# We create a SQLContext object so that we can read in csv files easily and create data frames
sqlContext = SQLContext(sc)

masterData = sqlContext.read.format('com.databricks.spark.csv').options(header='true', inferschema='true').load('Master.csv')
pitchingData = sqlContext.read.format('com.databricks.spark.csv').options(header='true', inferschema='true').load('Pitching.csv')
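
Because inferschema='true' asks Spark to guess column types from the data, a quick sanity check that the frames loaded as expected is worthwhile:

# Confirm the inferred schema and peek at the columns we rely on
masterData.printSchema()
masterData.select('playerID', 'throws').show(5)
pitchingData.select('playerID', 'yearID', 'G', 'ERA').show(5)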

PySpark Data Operations

In order to compare the performance of right-handed and left-handed pitchers, we perform the following operations in Spark:

1) Merge the Master table with the Pitching table. This allows us to associate each player's pitching statistics with his throwing hand.

2) We clean the data by removing pitchers who played fewer than 7 games in a season, and by removing players whose pitching arm (left/right) is unknown.

3) We then query the table to return the average ERA (Earned Run Average), grouped by year and throwing arm, and ordered by year.

This provides us with a dataset of the average ERA of all left-handed pitchers and all right-handed pitchers for every year from 1871 to 2016. When visualizing the data, we see that the two data series follow each other quite closely, so there is no definitive answer as to which group is better in terms of average ERA.

In [99]:
# Merge the two data frames
questionData = pitchingData.join(masterData, on='playerID', how='left')

# Remove pitchers that played fewer than 7 games in a season
questionData = questionData.filter(questionData.G >= 7)
questionData = questionData.filter(questionData.throws != "")

# Register the data frame as a temporary view so we can query it with Spark SQL
# Note: createOrReplaceTempView returns None, so we do not assign its result

questionData.createOrReplaceTempView('questionData')


# Generate our query
sqlDF = sqlContext.sql('select yearID, throws, avg(ERA) as ERA from questionData group by yearID, throws order by yearID asc')

# Remove rows where the throwing hand is null
sqlDF = sqlDF.na.drop(subset=["throws"])

# Display results
sqlDF.show()

+------+------+------------------+
|yearID|throws|               ERA|
+------+------+------------------+
|  1871|     R| 4.239999999999999|
|  1871|     L| 6.140000000000001|
|  1872|     R|3.6408333333333336|
|  1873|     R| 3.158888888888889|
|  1874|     R|             3.222|
|  1875|     R|2.5777272727272726|
|  1875|     L|              3.98|
|  1876|     R| 2.484666666666667|
|  1877|     R|3.5036363636363634|
|  1877|     L|              3.51|
|  1878|     L|              2.14|
|  1878|     R|2.5336363636363637|
|  1879|     R|2.4484615384615385|
|  1879|     L|2.8949999999999996|
|  1880|     R|2.3553333333333333|
|  1880|     L|              3.02|
|  1881|     R| 2.877222222222222|
|  1881|     L|              4.33|
|  1882|     L|              3.03|
|  1882|     R| 2.932400000000001|
+------+------+------------------+
only showing top 20 rows
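
The visualization referred to above is not reproduced in this notebook. A minimal sketch of one way to produce it, assuming matplotlib is installed alongside pandas:

import matplotlib.pyplot as plt

# Pivot to one column per throwing hand, then plot average ERA by year
era_by_hand = sqlDF.toPandas().pivot(index='yearID', columns='throws', values='ERA')
era_by_hand.plot(figsize=(10, 5))
plt.xlabel('Year')
plt.ylabel('Average ERA')
plt.title('Average ERA by year: left- vs right-handed pitchers')
plt.show()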

Additional PySpark Data Operations

We can also look at the average opponents' batting average (BAOpp) against left-handed pitchers and right-handed pitchers, to see if one group has consistently performed better than the other.

Again, these two data series track each other closely, so it is not possible to say whether right-handed pitchers have been more dominant than left-handed pitchers or vice versa.

In [100]:
# Generate our query
sqlDF2 = sqlContext.sql('select yearID, throws, avg(BAOpp) as BAOpp from questionData group by yearID, throws order by yearID asc')

# Remove rows where the throwing hand is null
sqlDF2 = sqlDF2.na.drop(subset=["throws"])

# Drop any remaining rows with null values (BAOpp is null for the earliest seasons)
sqlDF2 = sqlDF2.na.drop()

# Display results
sqlDF2.show()
+------+------+-------------------+
|yearID|throws|              BAOpp|
+------+------+-------------------+
|  1876|     R| 0.2653333333333333|
|  1877|     R|0.28454545454545455|
|  1877|     L|               0.28|
|  1878|     R|0.26272727272727275|
|  1878|     L|               0.22|
|  1879|     R|0.24846153846153846|
|  1879|     L|              0.265|
|  1880|     R|0.23933333333333331|
|  1880|     L|               0.25|
|  1881|     R| 0.2544444444444445|
|  1881|     L|0.30500000000000005|
|  1882|     R|             0.2425|
|  1882|     L|                0.3|
|  1883|     R|0.26315789473684215|
|  1883|     L|               0.27|
|  1884|     R|0.24086956521739133|
|  1884|     L|               0.23|
|  1885|     L|0.21333333333333335|
|  1885|     R|0.24384615384615393|
|  1886|     R| 0.2529166666666667|
+------+------+-------------------+
only showing top 20 rows

PySpark Test Results

We convert our Spark data frames to pandas data frames so that it is easy to save them in a human-readable CSV format. These files contain the answers to the questions we posed.

In [101]:
# Convert the Spark data frames to pandas and write the results to output files
pandas_sqlDF = sqlDF.toPandas()
pandas_sqlDF2 = sqlDF2.toPandas()

pandas_sqlDF.to_csv('spark_question4_ERA_right_vs_lefty_pitchers.csv')
pandas_sqlDF2.to_csv('spark_question4_BAOpp_right_vs_lefty_pitchers.csv')
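
To put a rough number on how closely the two ERA series track each other, the pandas frame produced above can be pivoted and compared year by year (a sketch; the metrics chosen here are our own):

# One ERA column per throwing hand, restricted to years where both appear
pivot = pandas_sqlDF.pivot(index='yearID', columns='throws', values='ERA').dropna()

# Mean yearly gap, and the share of years in which lefties had the lower ERA
print((pivot['L'] - pivot['R']).mean())
print((pivot['L'] < pivot['R']).mean())
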
In [102]:
# Stop the SparkContext to release its resources
sc.stop()