How has the global representation of baseball players changed over time? Which countries produce the most baseball players? Which countries have shown the highest increase and the highest decline in players over the last 15 years?


In order to determine how the global representation of MLB players has changed from 1870 to 2016, we look at historical baseball data available on the Internet. The specific source chosen here is a database of baseball statistics covering the years 1870 to 2016: http://www.seanlahman.com/baseball-database.html

This database has 27 tables. However, to answer our questions above, we need to cross-reference data from two of them. The Master.csv table lists every player who has played the game from 1870 to 2016, along with their country of origin. Its schema is listed below.

Table 1: Master Table Schema

Field Description
playerID A unique code assigned to each player
birthYear Year player was born
birthMonth Month player was born
birthDay Day player was born
birthCountry Country where player was born
birthState State where player was born
birthCity City where player was born
deathYear Year player died
deathMonth Month player died
deathDay Day player died
deathCountry Country where player died
deathState State where player died
deathCity City where player died
nameFirst Player's first name
nameLast Player's last name
nameGiven Player's given name
weight Player's weight in pounds
height Player's height in inches
bats Player's batting hand (left or right)
throws Player's throwing hand (left or right)
debut Date that player made first appearance
finalGame Date that player made last appearance
retroID ID used by Retrosheet
bbrefID ID used by Baseball Reference website

The Fielding.csv table lists the fielding statistics for every player who has played the game from 1870 to 2016, along with the year those statistics were recorded. Its schema is listed below.

Table 2: Fielding Table Schema

Field Description
playerID A unique code assigned to each player
yearID Year
stint Player's stint (order of appearances within a season)
teamID Team
lgID League
Pos Position
G Games
GS Games Started
InnOuts Time played in the field, expressed as outs
PO PutOuts
A Assists
E Errors
DP Double Plays
PB Passed Balls (Catcher)
WP Wild Pitches (Catcher)
SB Opponent Stolen Bases (Catcher)
CS Opponent Caught Stealing (Catcher)
ZR Zone Rating

We utilize Apache Spark to perform the database operations required to answer our questions. The code below walks through the process of answering them and shows how easy it is to use Spark to analyze big data. The query is implemented in Python and can be run either on a local server or on a cluster of servers. The example below was run on an Amazon EC2 Free Tier Ubuntu Server instance. The EC2 instance was set up with Python (Anaconda 3-4.1.1), Java, Scala, py4j, Spark, and Hadoop. The code was written and executed in a Jupyter Notebook. Several guides are available on the Internet describing how to install and run Spark on an EC2 instance. One that covers all of these facets is https://medium.com/@josemarcialportilla/getting-spark-python-and-jupyter-notebook-running-on-amazon-ec2-dec599e1c297
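
As a quick, optional sanity check of this environment (not part of the original notebook), the snippet below prints the Python interpreter in use and the Spark installation directory, assuming the SPARK_HOME variable was exported during installation as the linked guide suggests.

# Optional environment check. Assumes SPARK_HOME was exported during installation;
# the printed paths will differ between machines.
import os
import sys

print(sys.version)                     # Python interpreter (Anaconda 3-4.1.1 in this setup)
print(os.environ.get('SPARK_HOME'))    # Spark installation directory, if the variable is set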

Pyspark Libraries

Import the pyspark libraries to allow Python to interact with Spark. A description of the basic functionality of each of these libraries is provided in the code comments below. A more detailed explanation of the functionality of each of these libraries can be found in Apache's documentation on Spark: https://spark.apache.org/docs/latest/api/python/index.html

In [1]:
# Import SparkContext. This is the main entry point for Spark functionality.
# Import SparkConf. We use SparkConf to easily change the configuration settings when switching between local mode and cluster mode.
# Import SQLContext from pyspark.sql. We use it to read in data in CSV format, the format of our source database.
# Import count from pyspark.sql.functions. This is used for the count operations needed to answer our questions.


from pyspark import SparkContext, SparkConf
from pyspark.sql import SQLContext
from pyspark.sql.functions import count

Pyspark Configuration & Instantiation

We configure Spark for local mode or cluster mode, set our application name, and configure logging. Several other configuration settings can be programmed as well. A detailed explanation of these can be found at https://spark.apache.org/docs/latest/configuration.html

We pass the configuration to an instance of a SparkContext object, so that we can begin using Apache Spark.

In [2]:
# The master will need to change when running on a cluster.
# If we need to specify multiple cores, we can use something like local[2] for 2 cores, or local[*] to use all available cores.
# All the available Configuration settings can be found at https://spark.apache.org/docs/latest/configuration.html

sc_conf = SparkConf().setMaster('local[*]').setAppName('Question1').set('spark.logConf', True)
In [3]:
# We instantiate a SparkContext object with the SparkConf

sc = SparkContext(conf=sc_conf)
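
As an optional check (not in the original run), the context's properties can be printed to confirm that the configuration above was picked up:

# Optional: confirm the context picked up the configuration above.
print(sc.version)    # Spark version in use
print(sc.master)     # local[*] in local mode; a cluster URL otherwise
print(sc.appName)    # Question1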

Pyspark CSV file Processing

We use the SQLContext library to easily read the CSV files 'Master.csv' and 'Fielding.csv'. These files are currently stored in Amazon S3 storage (s3://cs498ccafinalproject/) and are publicly available for download. They were copied over to the local EC2 instance using the AWS command line interface command

aws s3 cp s3://cs498ccafinalproject . --recursive
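
The same copy could also be sketched from Python with boto3; this is an alternative, not what the original run used, and it assumes boto3 is installed, AWS credentials are configured on the instance, and the object keys match the file names.

# Hypothetical boto3 equivalent of the CLI copy above. Assumes boto3 is installed
# and credentials are configured; the object keys are assumed to equal the file names.
import boto3

s3 = boto3.client('s3')
for key in ['Master.csv', 'Fielding.csv']:
    s3.download_file('cs498ccafinalproject', key, key)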

In [4]:
# We create a SQLContext object, so that we can read in CSV files easily and create a DataFrame
sqlContext = SQLContext(sc)

df_master = sqlContext.read.format('com.databricks.spark.csv').options(header='true', inferschema='true').load('Master.csv')
df_field = sqlContext.read.format('com.databricks.spark.csv').options(header='true', inferschema='true').load('Fielding.csv')
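
As a quick, optional sanity check (not part of the original run), the inferred schemas can be printed to confirm that the headers and types described in the tables above were picked up correctly:

# Optional: verify the inferred schemas against the table definitions above.
df_master.printSchema()
df_field.printSchema()
df_master.show(5)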

Pyspark Data Operations to Determine How the Global Representation of Baseball Players Has Changed from 1870 to 2016

In order to determine how the global representation of Major League Baseball players has changed over time, we perform the following operations:

1) We perform an inner join on the Fielding.csv and Master.csv tables, using the playerID as a unique key.

2) We select only the columns that we need (playerID, birthCountry, and yearID) to answer our question.

3) We drop duplicate entries in the joined table. These can arise from players who played on multiple teams in the same year, or players who were called up to the majors and sent back down to the minors multiple times in a year.

4) We clean the data to remove any null entries, for cases where the player's country of origin is unknown. This is especially common for the years between 1870 and 1912.

5) We group the cleaned data by yearID and birthCountry, then perform an aggregation operation to determine the count.

6) We then sort the data by yearID.

This gives us a DataFrame that lists the number of players born in each country, for every year from 1870 to 2016.

In [5]:
# Join the two tables and keep only the columns we need.
# Remove duplicates
# Clean Null Entries
# Group by yearID and BirthCountry, then aggregate by Count
# Sort the final results by yearID 


keep = [df_field.playerID, df_field.yearID, df_master.birthCountry]
df_merge = df_field.join(df_master, df_field.playerID==df_master.playerID, 'inner').select(*keep).dropDuplicates()
df_clean = df_merge.filter(df_merge.birthCountry != "")
df_final = df_clean.groupBy(df_clean.yearID, df_clean.birthCountry).\
    agg(count("*")).\
    orderBy(df_clean.yearID)

df_final.show()

            
+------+--------------+--------+
|yearID|  birthCountry|count(1)|
+------+--------------+--------+
|  1871|United Kingdom|       5|
|  1871|          Cuba|       1|
|  1871|   Netherlands|       1|
|  1871|       Ireland|       4|
|  1871|           USA|     101|
|  1871|       Germany|       1|
|  1871|           CAN|       1|
|  1872|       Germany|       4|
|  1872|United Kingdom|       6|
|  1872|       Ireland|       2|
|  1872|           USA|     122|
|  1872|   Netherlands|       1|
|  1872|          Cuba|       1|
|  1873|          Cuba|       1|
|  1873|           USA|     108|
|  1873|United Kingdom|       4|
|  1873|   Netherlands|       1|
|  1873|       Ireland|       4|
|  1873|       Germany|       1|
|  1873|           CAN|       1|
+------+--------------+--------+
only showing top 20 rows
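
For readers who prefer SQL, the same pipeline can be sketched as a single Spark SQL query. This is an alternative formulation rather than part of the original notebook; df_final_sql and the temporary table names are hypothetical, and registerTempTable is the SQLContext-era API used here.

# Hedged alternative: express the join / dedup / group / sort pipeline in Spark SQL.
# COUNT(DISTINCT f.playerID) plays the role of dropDuplicates() followed by count("*").
df_master.registerTempTable('master_tbl')
df_field.registerTempTable('fielding_tbl')

df_final_sql = sqlContext.sql("""
    SELECT f.yearID, m.birthCountry, COUNT(DISTINCT f.playerID) AS numPlayers
    FROM fielding_tbl f
    JOIN master_tbl m ON f.playerID = m.playerID
    WHERE m.birthCountry IS NOT NULL AND m.birthCountry != ''
    GROUP BY f.yearID, m.birthCountry
    ORDER BY f.yearID
""")
df_final_sql.show()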

Pyspark Additional Statistics

To put our data into context, we can also look up the following information:

1) How many people have played in Major League Baseball from 1870 to 2016

2) How many unique countries have been represented by players in Major League Baseball from 1870 to 2016

3) How many people played Major League Baseball in the year 2016

4) How many unique countries were represented by players in Major League Baseball in the year 2016

In [6]:
# Additional Examples showing how to get additional statistics
# Number of players in MLB from 1870 to 2016. 
# Answer: 19105

df_master.count()
Out[6]:
19105
In [7]:
# Additional Examples showing how to get additional statistics
# Number of Unique Countries that have had players in MLB from 1870 to 2016 
# Answer: 53

df_clean.select(df_clean.birthCountry).distinct().count()
Out[7]:
53
In [8]:
# Additional Examples showing how to get additional statistics
# Number of MLB Players in 2016
# Answer: 1343

df_merge.filter(df_merge.yearID==2016).count()
Out[8]:
1343
In [9]:
# Additional Examples showing how to get additional statistics
# Number of Countries represented in 2016
# Answer: 22
df_merge.filter(df_merge.yearID==2016).groupBy(df_merge.birthCountry).agg(count("*")).count()
Out[9]:
22
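
The four cells above repeat the same filter-and-count pattern. A hypothetical helper (not in the original notebook) could wrap it so that any season can be queried the same way:

# Hypothetical convenience wrapper around the per-year statistics shown above.
def season_summary(year):
    season = df_merge.filter(df_merge.yearID == year)
    num_players = season.count()
    num_countries = season.select('birthCountry').distinct().count()
    return num_players, num_countries

print(season_summary(2016))   # Expected to match the 2016 figures computed above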

Pyspark Data Operations to Show Which Countries Produce the Most Major League Baseball Players, and Which Countries Have Shown the Greatest Increase and Greatest Decline Between 2001 and 2016

To determine which countries produced the most baseball players in 2016, we slice the DataFrame we obtained for global representation, taking only the year 2016. We can additionally take a slice of this DataFrame for 2001. If we join the two slices and compute the difference between the number of players represented in 2016 and in 2001, we can determine the corresponding percentage increase or decrease, as well as get a snapshot of which countries produce the most baseball players.

From the data it is clear that the USA produces the most players, with 967 players in 2016 and 899 players in 2001. The Dominican Republic and Venezuela also had large representations in 2016, with 134 and 102 players respectively.

In terms of a statistically significant change, Venezuela saw a 104% increase in players represented (50 to 102) from 2001 to 2016, while Puerto Rico surprisingly showed a 51% decrease (53 to 26) over the same period.

In [10]:
# Additional Examples showing how to get additional statistics
# Highest growth and Highest Decline in the Last 15 years
# Answer: 
# Significant Increase - Venezuela (104%) from 50 to 102
# Significant Decrease - Puerto Rico (-51%) from 53 to 26
# Percentage Increase - Germany (300%) from 1 to 4. [Not Statistically significant]  
# Percentage Decrease - Aruba (-67%) from 3 to 1. [Not Statistically significant]


df_2001 = df_final.filter(df_final.yearID==2001).withColumnRenamed('count(1)', 'countNum2001').\
    withColumnRenamed('birthCountry', 'country2001' )
df_2016 = df_final.filter(df_final.yearID==2016).withColumnRenamed('count(1)', 'countNum2016').\
    withColumnRenamed('birthCountry', 'country2016' )
    



df_change = df_2016.join(df_2001, df_2016.country2016==df_2001.country2001, 'inner').\
    withColumn("diff", df_2016.countNum2016-df_2001.countNum2001)
    
df_perc_change = df_change.withColumn("percentChange", (df_change.diff/df_change.countNum2001)*100)

df_perc_change.show()
+------+-----------+------------+------+-----------+------------+----+-------------------+
|yearID|country2016|countNum2016|yearID|country2001|countNum2001|diff|      percentChange|
+------+-----------+------------+------+-----------+------------+----+-------------------+
|  2016|    Germany|           4|  2001|    Germany|           1|   3|              300.0|
|  2016|       D.R.|         134|  2001|       D.R.|         109|  25|  22.93577981651376|
|  2016|  Nicaragua|           3|  2001|  Nicaragua|           2|   1|               50.0|
|  2016|    Curacao|           4|  2001|    Curacao|           2|   2|              100.0|
|  2016|       Cuba|          30|  2001|       Cuba|          15|  15|              100.0|
|  2016|     Panama|           6|  2001|     Panama|          10|  -4|              -40.0|
|  2016|  Venezuela|         102|  2001|  Venezuela|          50|  52|              104.0|
|  2016|        USA|         967|  2001|        USA|         899|  68|  7.563959955506118|
|  2016|South Korea|           9|  2001|South Korea|           3|   6|              200.0|
|  2016|     Mexico|          15|  2001|     Mexico|          17|  -2| -11.76470588235294|
|  2016|      Aruba|           1|  2001|      Aruba|           3|  -2| -66.66666666666666|
|  2016|       P.R.|          26|  2001|       P.R.|          53| -27|-50.943396226415096|
|  2016|        CAN|          13|  2001|        CAN|          13|   0|                0.0|
|  2016|       V.I.|           2|  2001|       V.I.|           2|   0|                0.0|
|  2016|      Japan|           9|  2001|      Japan|          11|  -2|-18.181818181818183|
|  2016|  Australia|           4|  2001|  Australia|           6|  -2| -33.33333333333333|
|  2016|   Colombia|           6|  2001|   Colombia|           3|   3|              100.0|
+------+-----------+------------+------+-----------+------------+----+-------------------+
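
The "significant" increase and decrease quoted above were read off this table by inspection. As a hedged sketch (not in the original notebook), they could also be selected programmatically; the minimum of 10 players in 2001 is an assumed threshold standing in for "statistically significant".

# Hypothetical follow-up: pick the largest swings among countries with at least
# 10 players in 2001 (an assumed cut-off for "statistically significant").
from pyspark.sql.functions import col

significant = df_perc_change.filter(col('countNum2001') >= 10)
significant.orderBy(col('percentChange').desc()).show(1)   # Largest increase: Venezuela
significant.orderBy(col('percentChange').asc()).show(1)    # Largest decline: Puerto Rico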

Pyspark Data Operations to Track the Change in Player Representation for Different Countries since 2001

We can also slice the DataFrame to look at the number of players represented, from all countries, over a specific time period, in order to track the global growth of the sport. The example below extracts the number of players by country for every year after 2000. We can later plot this to determine trends among different countries.

In [11]:
df_last_15 = df_final.filter(df_final.yearID>2000).\
    withColumnRenamed('count(1)', 'count')
    
df_last_15.show()
+------+------------+-----+
|yearID|birthCountry|count|
+------+------------+-----+
|  2001|       Japan|   11|
|  2001|        Cuba|   15|
|  2001|   Australia|    6|
|  2001|      Mexico|   17|
|  2001|   Venezuela|   50|
|  2001|   Singapore|    1|
|  2001|    Viet Nam|    1|
|  2001|    Colombia|    3|
|  2001|     Curacao|    2|
|  2001|         USA|  899|
|  2001|        P.R.|   53|
|  2001|     Jamaica|    1|
|  2001|         CAN|   13|
|  2001|     Germany|    1|
|  2001| Philippines|    1|
|  2001|        V.I.|    2|
|  2001| South Korea|    3|
|  2001|      Panama|   10|
|  2001|       Aruba|    3|
|  2001|        D.R.|  109|
+------+------------+-----+
only showing top 20 rows
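
The plotting step mentioned above could be sketched as follows; this is not part of the original run, it assumes matplotlib is installed on the instance, and the three countries are chosen purely for illustration.

# Hypothetical plot of the per-country trend since 2001. Assumes matplotlib is
# available; 'USA', 'D.R.' and 'Venezuela' are picked for illustration only.
import matplotlib.pyplot as plt

pdf = df_last_15.toPandas()
trend = pdf.pivot(index='yearID', columns='birthCountry', values='count')
trend[['USA', 'D.R.', 'Venezuela']].plot(marker='o')
plt.ylabel('Number of players')
plt.show()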

Pyspark Test Results

We convert our Spark DataFrames to pandas DataFrames, so that it is easy to save them in a human-readable CSV format. These files contain the answers to the questions we posed.

In [12]:
# Examples to show how to print the results to an output file
pandas_final = df_final.toPandas()
pandas_perc_change = df_perc_change.toPandas()
pandas_last_15 = df_last_15.toPandas()
pandas_final.to_csv('spark_question1_global_representation.csv')
pandas_perc_change.to_csv('spark_question1_global_change_last_15.csv')
pandas_last_15.to_csv('spark_question1_last_15.csv')
In [13]:
sc.stop()