R Analysis 1: Logistic Regression & The 2017-18 TV Season

Hello everybody,

Yes, I know you all wanted to learn about MySQL queries, but I am still preparing the database (don’t worry, it’s coming; it’s just taking a while). And since I did mention I’ll be doing analyses on this blog, that is what this post will be. It’s basically an expansion of the TV show dataset from R Lesson 4: Logistic Regression Models & R Lesson 5: Graphing Logistic Regression Models, with 3 new variables.

So, as we should always do, let’s load the file into R and get an understanding of our variables, with str(file).

17Aug capture

As for the new variables, let’s explain. By the way, the numbers you see for the new variables are dummy variables (remember those?). I thought the dummy variables would be a better way to categorize the variables.

  • Rating-a TV show’s parental rating (no, not how good it is)
    • 1-TV G
    • 2-TV PG
    • 3-TV 14
    • 4-TV MA
    • 5-Not applicable
  • Usual day of week-the day of the week a show usually airs its new episodes
    • 1-Monday
    • 2-Tuesday
    • 3-Wednesday
    • 4-Thursday
    • 5-Friday
    • 6-Saturday
    • 7-Sunday
    • 8-Not applicable (either the show airs on a streaming service or airs 5 days a week like a talk show or doesn’t have a consistent airtime)
  • Medium-what network the show airs on
    • 1-Network TV (CBS, ABC, NBC, FOX or the CW)
    • 2-Cable TV (Comedy Central, Bravo, HBO, etc.)
    • 3-Streaming TV (Amazon, Hulu, etc.)

I decided to do three logistic regression models (one for each of the new variables). The renewed/cancelled variable (known as X2018.19.renewal.) is still the binary dependent variable, and the other independent variable I used in all three models is season count (known as X..of.seasons..17.18.).
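In code form, each of these models is a single glm() call. Here’s a hedged sketch: the column names match the str() output above, but the data frame below is a made-up stand-in so the snippet runs on its own (the real analysis uses the full file of 85 shows).

```r
# Toy stand-in for the TV-show data frame (hypothetical rows)
file <- data.frame(
  X2018.19.renewal.     = c(1, 0, 1, 1, 0, 0, 0, 1),
  X..of.seasons..17.18. = c(29, 1, 43, 2, 21, 16, 10, 5),
  Rating                = c(3, 2, 3, 4, 1, 3, 5, 2)
)

# Logistic regression: renewal (0/1) on season count and rating
model_rating <- glm(X2018.19.renewal. ~ X..of.seasons..17.18. + Rating,
                    data = file, family = binomial)
summary(model_rating)
```

The other two models swap Rating out for the day-of-week and medium variables.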

First, remember to install (and use the library function for) the ggplot2 package. This will come in handy for the graphing portion.

17Aug capture2

Here’s my first logistic regression model, with my binary variable and two independent variables (season count and rating). If you’re wondering what the output means, check out R Lesson 4: Logistic Regression Models for a more detailed explanation.

17Aug capture3

Here are two functions you need to help set up the model. The top function helps set up the grid and designate which categorical variable you want to use in your graph. The bottom function predicts the probability of renewal for each show in a certain category. In this case, it would be the rating category (the one with TV-G, TV-PG, etc.).

19Aug capture5
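Spelled out, those two steps are expand.grid() (build every combination of season count and rating level) and predict() with type = "response" (turn the model’s log odds into probabilities). A self-contained sketch, with a toy model and made-up column names standing in for the real ones in the screenshot:

```r
# Toy data and model standing in for the real TV-show file
df <- data.frame(
  renewed = c(1, 0, 1, 1, 0, 0, 1, 0),
  seasons = c(29, 1, 43, 2, 21, 16, 10, 5),
  rating  = factor(c(3, 2, 3, 2, 1, 3, 1, 2))
)
m <- glm(renewed ~ seasons + rating, data = df, family = binomial)

# Step 1: a grid crossing season counts with every rating level
grid <- expand.grid(seasons = 0:50,
                    rating  = factor(levels(df$rating),
                                     levels = levels(df$rating)))

# Step 2: predicted probability of renewal at each grid point
grid$prob <- predict(m, newdata = grid, type = "response")
head(grid)
```

The grid is what gets handed to ggplot, with one predicted probability per (season count, rating) combination.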

Here’s the ggplot function. geom_line() creates one line for each level of your categorical variable; here, there are 5 lines for the 5 categories.
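In text form, the call looks roughly like this (a sketch: the grid of predicted probabilities is simulated below, and the column names are placeholders for the real ones in the screenshot):

```r
library(ggplot2)

# Simulated grid of predicted renewal probabilities (stand-in values)
grid <- expand.grid(seasons = 0:50, rating = factor(1:5))
grid$prob <- plogis(-2 + 0.08 * grid$seasons + 0.4 * as.numeric(grid$rating))

# geom_line() draws one line per rating level
ggplot(grid, aes(x = seasons, y = prob, color = rating)) +
  geom_line() +
  labs(x = "Season count (2017-18)", y = "Probability of renewal",
       color = "Rating")
```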

19Aug capture6

19Aug capture4

Here’s the graph. As you see, there are five lines, one for each of the ratings. What are some inferences that can be made?

  • The TV-G shows (category 1) usually have the lowest chance of renewal. In this model, a TV-G show would need to have run for at least 22 seasons (approximately) to reach a 50% chance of renewal. (Granted, the only TV-G show in this database is Fixer Upper, which was not renewed)
  • The TV-PG shows have a slightly better chance at renewal, as renewal odds for these shows are at least 25%. To reach a 50% chance of renewal, these shows would only need to have run for approximately 17 seasons, not 22 (like The Simpsons).
  • The TV-14 shows have a minimum 50% chance of renewal, regardless of how many seasons they have run. They would need to have run for at least 25 seasons to attain a minimum 75% chance of renewal, however (SNL would be the only applicable example here, as it was renewed and has run for 43 seasons).
  • The TV-MA shows have a minimum 76% (approximately) chance of renewal no matter how many seasons they have aired. Shows like South Park, Archer, Real Time, Big Mouth and Orange is the New Black are all TV-MA, and all of them were renewed.
  • The unrated shows had the best chances at renewal, as they had a minimum 92% (approximately) chance at renewal. (Granted, Watch What Happens Live! is the only unrated show on this list)

Next, we repeat the process used to create the plot for the first model for these next two models.

19Aug capture3

19Aug capture7

19Aug capture9

19Aug capture8

What are some inferences that can be made? (I know this graph is hard to read, but we can still make observations from it.)

  • The orange line (representing Tuesday shows) is the lowest on the graph, so Tuesday shows usually had the lowest chances of renewal. This makes sense, as Tuesday shows like LA to Vegas, The Mick, and Roseanne were all cancelled.
  • On the other end, the pink line (representing shows that either aired on streaming services, did not have a consistent time slot, or aired every day like talk shows) is the highest on the graph, so this means shows without a regular time slot had the best chances at renewal (such as Atypical, Jimmy Kimmel Live!, and House of Cards).

19Aug capture10

19Aug capture11

19Aug capture13

19Aug capture12

What inferences can we make from this graph?

  • The network shows (from the 5 major broadcast networks CBS, ABC, NBC, FOX and the CW) had the lowest chances at renewal. At least 11 seasons would be needed for a minimum 50% chance of renewal.
    • Some shows would include The Simpsons (29 seasons), Family Guy (16 seasons), The Big Bang Theory (11 seasons), and NCIS (15 seasons), all of which were renewed.
  • The cable shows (from channels such as Comedy Central, HBO, and Bravo) have a minimum 58% (approximately) chance of renewal, but at least 15 seasons would be needed for a minimum 70% chance of renewal.
    • Some shows would include South Park (21 seasons) and Real Time (16 seasons), both of which were renewed.
  • The streaming shows (from services such as Netflix, Hulu, or CBS All Access) had the best odds for renewal (approximately 76% minimum chance at renewal). At least 30 seasons would be needed for a 90% chance at renewal.
    • This projection doesn’t mean much yet, though, as streaming shows have only been around since the early 2010s, so no streaming show is anywhere close to 30 seasons.

Thanks for reading, and I’ll be sure to have the MySQL database ready so you can start learning about querying.

Michael

 

MySQL Lesson 2: Launching the Database & Inserting Records

Hello everybody,

It’s Michael, and I thought the perfect place to continue from the last post would be to show you guys how to launch the database as well as insert records into it.

But first, I have some corrections to make. This is the diagram we will use:

5Aug capture1

It’s similar to the one in the previous post, except that one foreign key that should not have been there was removed from the Awards table, along with some other added and modified attributes:

  • The release year for album has been changed to release date.
  • The featured artist and singer tables have new and/or modified attributes, which include
    • Gender-the gender of the singer/featured artists (it can be null if we are analyzing a group)
      • This is the only addition to the singer table.
    • Age-the age of the featured artist as of August 1, 2018.
    • Birthplace-the birthplace of the featured artist (or where a group was formed)
    • Death-the date of death of the singer/featured artist
      • This was a new column for the featured artist table, but it was already on the singer table (I just changed the name from “Date of Death” to “Death”)

Now that I’ve got that clarified, the next question would be “How do we launch the database?” We do so with a process called forward engineering (under the Database drop-down menu, click Forward Engineer). Forward engineering allows us to export our diagram to an SQL server.

  • You’ll also see an option for reverse engineering in the drop-down menu. You won’t need it to launch the database, but in case you’re wondering: reverse engineering is essentially the opposite of forward engineering, where you extract the ER diagram from a launched database (this process can come in handy if you want to modify attributes or relationships in the diagram, but remember to forward engineer again afterward)

 

Alright, now here’s how to forward engineer your database.

The first step is connection options. Choose “new connection” for the stored connection and “Standard (TCP/IP)” for the connection method. Keep everything else as is.

30Jul capture1

The next step is setting the options for the database to be created. Personally, the only two boxes I would check are “Generate INSERT statements for tables” and “Include model attached scripts”.

30Jul capture2

Next, we have to select the objects to forward engineer. Since there are only table objects in this diagram so far, tables are the only thing we will forward engineer. If you’re wondering what the show filter button does, it just allows you to select which tables you want (or don’t want) to include in the final database. Since all five tables are relevant to the database, ignore the show filter column.

30Jul capture4

If your forward engineering process succeeds, then you will see green checkmarks by each item and the message “Forward Engineer Finished Successfully”.

  • However, if your forward engineering process had errors, then you will be notified. There will be a white box showing you what exactly your error is. This happened to me the first time I tried to forward engineer: some of my fields of data type TIME() had 10 as their length, while 6 is the maximum length allowed for TIME() fields. I fixed the error, ran the forward engineering process again, and it worked, as shown below.

30Jul capture5

Now let’s check to see if our database successfully launched. To do so, click on the Schemas tab, then click the loading icon. If you see something called “mydb”, then the database successfully loaded onto the MySQL server.

1Aug capture1

Now our database is active, but it’s also empty. So let’s fill it up (and we’re gonna need to fill all 5 tables separately). We use SELECT * FROM mydb.(whatever table you want to insert data into) to first check out the table. The output, shown in the bottom half of the screenshot, confirms that the table is empty.

3Aug capture1

Now let’s add a record to the table and see what happens.

  • Note-this was before I decided to add a gender field. But the procedure is basically the same.

3Aug capture2

3Aug capture3
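Written out, the statement in the screenshot is an INSERT. The column names below are my guesses from the diagram (the screenshots show the exact ones), so treat this as a sketch:

```sql
-- Hypothetical example: adding one record to the Singer table
INSERT INTO mydb.Singer (idSinger, Name, Age, Birthplace, Death)
VALUES (1, 'Linkin Park', NULL, 'Agoura Hills, California', NULL);

-- Running the SELECT again should now show the new row
SELECT * FROM mydb.Singer;
```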

If the “Apply Script” process is successful, then the next time you run the SELECT * FROM mydb.Singer query, you should see the record added to the database.

3Aug capture4

The same procedure applies to fill in the other records for this table.

3Aug capture6

  • The process wasn’t successful for me at first, but this was only because my “Date of Death” values should have been formatted like Year-Month-Day, not Month/Day/Year.

Here’s a screenshot of the database with the gender field filled out.

5Aug capture2

Now let’s fill out the album table (because it connects back to the Singer table through the idSinger column).

  • And if you’re wondering what to put for Singer_idSinger, refer back to the Singer table to figure out which primary key in that table corresponds to the album.
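As a sketch, the album INSERT looks something like the statement below (hypothetical column names again; note the Year-Month-Day date format MySQL expects, and that the last value is the foreign key pointing at the singer’s primary key):

```sql
-- Hypothetical example: an album row that points back at singer 1
INSERT INTO mydb.Album (Name, ReleaseDate, Duration, AlbumNumber, Genre, Singer_idSinger)
VALUES ('Hybrid Theory', '2000-10-24', '00:37:45', 1, 'Nu metal', 1);
```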

5Aug capture3

And if the apply-script process succeeds, then this record should pop up the next time you run SELECT * FROM mydb.Album.

5Aug capture4

Let’s add two more records to see what happens.

5Aug capture5

Here’s the output, and in case you’re wondering, I set the idAlbum primary key field to auto-increment, so all I had to do was type 1 as the Hybrid Theory primary key, then the primary keys for the rest of the albums were automatically generated.

5Aug capture6

If you know how to fill in one table, then you can figure out how to fill out the rest. I’ll actually get into querying with my next post.

Thanks for reading,

Michael

MySQL Lesson 1: Building an ER Diagram

Hello everybody,

This is Michael, and as I mentioned in the last post, I will start building the database that I will be using for this series of posts.

The database will store information about 54 albums (3 for each year from 2000 to 2017) such as track listings, artists, featured artists on certain tracks, genre, release year, duration (of the album and of individual tracks), etc.

In the previous post, I did mention that MySQL is meant for query-based analysis. However, before beginning to do queries, we must create our database. As the title explains, this post will focus on the creation of an ER (entity-relationship) diagram. An ER diagram is a graphical representation of items in a database (in this case, albums, track listings, artists, etc.) and how they are related to each other (like how albums can have several track listings).

So, without further hesitation, here is the ER diagram for the database I will be using.

28Jul capture1

Now you may be confused by all of the arrows and tables in the diagram. Here’s an explanation.

  • This ER diagram represents the relationships between albums and singers, songs on the album, featured artists, as well as any awards the album either won or was nominated for.
  • How do all of these tables relate to each other? Here’s how.
    • Each album must have several tracks, while each track belongs to one and only one album (the three-pronged “crow’s foot” symbol means “many,” while the two vertical lines mean “one”)
    • Each track can have several, one, or no featured artists (that’s why you see a circle), but each featured artist must belong to one and only one track.
    • Each singer can appear more than once (if they have several albums in the database) or just once, but each album must correspond to one and only one singer.
    • Each album can be nominated for one or several awards (e.g. Grammys, MTV VMAs, etc.), but each award must correspond to only one album.
      • You’ll notice that this is the only dotted line in the diagram. This is because the relationship between album and awards is non-identifying, meaning that you can identify an award based on the idAwards field alone, without needing the album field for identification.
      • As for the rest of the relationships (known as identifying relationships), each table is dependent on the other table for identification.
        • For example, you can’t identify a featured artist without knowing what track they appear on. Likewise, you can’t identify a song without knowing what album it is a part of. Nor can you identify an album without knowing which singer/group created it.

Now what about the attributes in each table (those are the things with diamonds right by them)? Here’s what each of them means.

  • The album table contains the attributes
    • Name-the name of the album
    • Duration-how long the album is (given in hours:minutes:seconds)
    • Release Year-the year the album came out
    • Album number-how many albums has the artist made up through that point; in other words, is this the artist’s 1st album? 4th? 5th?
    • Genre-the genre of the album
  • The singer table contains the attributes
    • Singer Name-the singer’s (or group’s) name
    • Age-the singer’s age as of August 1, 2018 (if they are still living)
    • Birthplace-the singer’s birthplace (or where the group was formed)
    • Date of Death-the date the artist died
      • You’ll notice the attributes age and date of death have white diamonds right by them; this is because each of them can be null (have no value). For instance, the date of death field can stay blank for living artists. The other attributes, which have blue diamonds beside them, have to have some sort of value (they can’t be null, in other words).
  • The tracks table contains the attributes
    • Track Name-the song’s name
    • Track Duration-the length of the song (given in hours:minutes:seconds)
  • The featured artist table contains the attribute
    • Featured artist name-the name of any artist who appears on a particular track
  • The awards table contains the attributes
    • Ceremony-the ceremony where the album either was nominated for or won an award (Grammys, Billboard Music Awards, etc.)
    • Ceremony Year-the year of the ceremony where the album either got nominated for or won an award
    • Won/Nominated-whether an album won or was nominated for a particular award

But wait, what are those keys right by some of the attributes? Those are called primary keys: one or more columns used to uniquely identify each row in a table. Primary keys are usually auto-incrementing integers (starting with 1, then 2, then 3, and so on) to ensure uniqueness. For example, in the album table, 1 would be the primary key for the first album in the database, then 2, then 3, all the way to 54.

Take this part of the diagram:

28Jul capture2

 

idSinger and idAlbum are both primary keys in their respective tables. But wait, why does Singer_idSinger1 appear in the album table? That is because Singer_idSinger1 is a foreign key: a column (or set of columns) in one table that refers to the primary key of another table; here, it refers to the primary key of the singer table. Foreign keys basically serve as a means to connect the referencing table (album) with the referenced table (singer).
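In the generated SQL, the two kinds of key look like this (a simplified sketch of what forward engineering produces; the real script has every attribute plus index and engine options):

```sql
CREATE TABLE Singer (
  idSinger INT NOT NULL AUTO_INCREMENT,
  Name     VARCHAR(100) NOT NULL,
  PRIMARY KEY (idSinger)
);

CREATE TABLE Album (
  idAlbum         INT NOT NULL AUTO_INCREMENT,
  Name            VARCHAR(100) NOT NULL,
  Singer_idSinger INT NOT NULL,
  PRIMARY KEY (idAlbum),
  -- the foreign key ties each album row to a row in Singer
  FOREIGN KEY (Singer_idSinger) REFERENCES Singer (idSinger)
);
```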

If you want to know how primary keys and foreign keys differ from each other, here’s a handy table (Source-https://www.essentialsql.com/what-is-the-difference-between-a-primary-key-and-a-foreign-key/)

Comparison of Primary to Foreign Key Attributes

That’s all for now. Thanks for reading,

Michael

What is MySQL?

Hello everybody,

This is Michael, and as I mentioned in the welcome post, I will include other programming languages on this blog. Don’t worry, I’ll still post plenty of R lessons and analyses, but I thought it was time to include other programming languages. The next one I will introduce is MySQL, which is an open-source (meaning free to use) relational database system.

  • Relational databases are created to recognize relations among items in a database. Let’s say you wanted to make a database of NFL teams and include team name, quarterback, running back, center, safety, linebacker, wide receiver, and any other football positions I missed here, along with season record. Team name would be related to any of the positions I just mentioned, as teams have someone for each of the positions. Team name would also be related to season record, as each team has a win-loss-or-sometimes-tie record each year.

To clarify, MySQL and SQL are two different things: MySQL is database management software, whereas SQL (short for Structured Query Language) is the programming language used to manage relational databases.

Another thing I wanted to point out is that MySQL and R-although they are both great analytical tools-serve two different purposes. Personally, I would use R to analyze data from a statistical standpoint (as seen by my logistic regression posts) while I would use MySQL for query-based analysis. Each tool has its pros and cons, as R is better for analysis and visualization of data yet the syntax is more complicated than MySQL (that’s just my opinion). Likewise, MySQL is great for query-based analysis, which is more difficult to do in R, but isn’t the best for performing advanced analyses or creating data visualizations. MySQL is also restricted to relational databases, while R is not.

For this series of posts, I’ll build a database (I’ll be using the same database throughout this series of MySQL posts) using MySQL Workbench, which I’d recommend for anyone wanting to make their own MySQL databases. If you want to install it, here’s a handy link-http://www.ccs.neu.edu/home/kathleen/classes/cs3200/MySQLWorkbenchMAC10.pdf

Don’t worry everybody, I’ll actually start building the database with my next MySQL post. This post was just meant to explain the basics of MySQL.

Thank you for reading,

Michael

 

R Lesson 5: Graphing Logistic Regression Models

Hello everybody,

It’s Michael, and today I’ll be discussing graphing with logistic regression. This will serve as a continuation of R Lesson 4: Logistic Regression Models (I’ll be using the dataset and the models from that post).

Let’s start by graphing the second model from R Lesson 4. That’s the one that includes season count and premiere year (I feel this would be more appropriate to graph as it is the more quantitative of the two models).

Here’s the formula for the model if you’re interested (as well as the output):

20Jul capture1

Now let’s plot the model (but first, let’s remember to install the ggplot2 package).

20Jul capture5

Next we have to figure out the probabilities that each show will be renewed (or not).

20Jul capture6

And finally, let’s plot the model.

20Jul capture7

20Jul capture4
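Pieced together, the three steps above look roughly like this (a sketch with a toy data frame and assumed column names; the screenshots show the real code):

```r
library(ggplot2)

# Toy stand-in for the TV-show data
file <- data.frame(
  renewed  = c(1, 0, 1, 0, 1, 0, 1, 0),
  seasons  = c(29, 10, 43, 2, 16, 1, 5, 3),
  premiere = c(1989, 1988, 1975, 2016, 2002, 2017, 2013, 2014)
)

m <- glm(renewed ~ seasons + premiere, data = file, family = binomial)
file$prob <- predict(m, type = "response")

# Each show is a point, colored by its predicted renewal probability
ggplot(file, aes(x = premiere, y = seasons, color = prob)) +
  geom_point(size = 3) +
  labs(x = "Premiere year", y = "Season count", color = "P(renewal)")
```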

What are some conclusions we can draw from the model?

  • The shows with fewer than 25 seasons that premiered between 1975 and the early 90s (such as Roseanne, which had 10 seasons and premiered in 1988) had essentially no chance at renewal.
  • For shows with fewer than 25 seasons, the more recently the show premiered, the more likely it was renewed (as shown by the progressively brighter colors).
  • The few outlier shows with more than 25 seasons had essentially a 100% chance at renewal, regardless of when they premiered.
    • The two notable examples would be The Simpsons (at 29 seasons) and SNL (at 43 seasons)

Thanks for reading,

Michael


R Lesson 4: Logistic Regression Models

Hello everybody,

It’s Michael, and today’s post will be the first to cover data modeling in R. The model I will be discussing is the logistic regression model. For those that don’t know, logistic regression models explore the relationship between a binary* dependent variable and one or more independent variables.

*refers to a variable with only 2 possible values, like yes/no, wrong/right, healthy/sick, etc.

The dataset I will be using-TV shows-lists 85 random TV shows of various genres that were airing during the 2017-18 TV season, along with whether or not each show was renewed for the 2018-19 TV season. So, like any good data scientist, let’s first load the file and read (as well as understand) the data.

9Jul capture1

The variables include

  • TV Show-the name of the TV show
  • Genre-the genre of the TV show
  • Premiere Year-the year the TV show premiered (for reboots like Roseanne, I included the premiere date of the original, not the revival)
  • X..of.seasons..17.18. (I’ll refer to it as season count)-how many seasons the show had aired at the conclusion of the 2017-18 TV season (in the case of revived shows like American Idol, I counted both the original run and revival, which added up to 16 seasons)
  • Network-the network the show was airing on at the end of the 2017-18 TV season
  • X2018.19.renewal. (my binary variable)-Whether or not the show was renewed for the 2018-19 TV season
    • You’ll notice I used 0 and 1 for this variable; this is because it is a good idea to use dummy variables (the 0 and 1) for your binary dependent variable to help quantify qualitative data.
      • The qualitative data in this case being whether a show was renewed for the 2018-19 TV season (shown by 1) or not (shown by 0)

 

Now that we know the variables in our data set, let’s figure out what we want to analyze.

  • Let’s analyze the factors (e.g. network, genre) that affected a certain TV show’s renewal or cancellation (the binary variable represented by 0/1)

So here’s the code to build the model, using the binary dependent variable and two of the independent variables (I’ll use genre and premiere year).

9Jul capture2

9Jul capture3

9Jul capture4
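For reference, the model in the screenshots boils down to one glm() call with family = binomial. A hedged sketch (the data frame here is a small made-up stand-in; the real file has 85 shows):

```r
# Toy stand-in with the same kinds of columns as the TV-show file
file <- data.frame(
  X2018.19.renewal. = c(1, 0, 1, 0, 0, 1, 1, 0),
  Genre             = factor(c("Comedy", "Drama", "Comedy", "Comedy",
                               "Drama", "Drama", "Reality", "Reality")),
  Premiere.Year     = c(1989, 2017, 1975, 2013, 2005, 2016, 2003, 2014)
)

# family = binomial is what makes this logistic (not linear) regression
model1 <- glm(X2018.19.renewal. ~ Genre + Premiere.Year,
              data = file, family = binomial)
summary(model1)
```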

What does all of this output mean?

  • The call just reprints the model we created.
  • The estimate represents the change in the log odds (logarithm of the odds) of the dependent variable when a given independent variable increases by 1.
    • Log odds function–>log(p/(1-p))
    • For instance, if the premiere year increases by 1 (let’s say from 2009 to 2010), the log odds of renewal for the 18-19 TV season decrease by 0.068 (as evidenced by the -0.06763 premiere year estimate), which works out to roughly a 6.5% decrease in the odds (1 – exp(-0.06763)).
  • Standard error represents the uncertainty in each estimate (roughly, how much the estimate would vary across repeated samples). In the case of premiere year, the standard error is small; in the case of genre, the standard errors are mostly large (then again, genre isn’t numerical).
  • Z-value is the ratio of the estimate to the standard error
  • P-value (denoted by Pr(>|z|)) helps you determine the significance of your results by giving you a number between 0 and 1
    • P-values are used to test your null hypothesis (the default claim that a variable has no effect)
      • Here, the null hypothesis would be that a show’s genre and premiere year had no effect on its chances of renewal.
      • Your alternative hypothesis is the opposite of your null hypothesis; that is, genre and premiere year do affect a show’s chances of renewal.
    • Small p-values (those <=0.05) indicate strong evidence against the null hypothesis, so in those cases, you can reject the null hypothesis. For p-values larger than 0.05, you fail to reject the null hypothesis (the data don’t show evidence of an effect).
      • Since all the p-values here are well above 0.05, we fail to reject the null hypothesis: there is no evidence that genre or premiere year affected renewal.
  • Null deviance shows how well our dependent variable (whether or not a show got renewed) is predicted by a model that includes only the intercept
  • Residual deviance shows how well our dependent variable (whether or not a show got renewed) is predicted by a model that includes the intercept as well as any independent variables
    • As you can see here, the residual deviance is 89.496 on 71 degrees of freedom, a decrease of 20.876 from null deviance (as well as a decrease of 13 degrees of freedom).
  • AIC (or Akaike Information Criterion) is a way to gauge the quality of your model through comparison of related models; the point of the AIC is to prevent you from using irrelevant independent variables.
    • The AIC itself is meaningless unless we have another model to compare it to, which I will include in this post.
  • The number of Fisher scoring iterations shows how many times the fitting algorithm ran to converge on the maximum likelihood estimates, 17 in this case. This number isn’t too significant.

Now let’s create another model, this time including season count in place of genre.

10Jul capture1

How does this compare to the previous model?

  • There is a smaller difference between null & residual deviance (12.753 and 2 degrees of freedom, as opposed to 20.876 and 13 degrees of freedom)
  • The AIC is 13.88 smaller than that of the previous model, which indicates a better quality of the model
  • The number of Fisher scoring iterations is also lower than the previous model’s (5 as opposed to 17), which means the model needed fewer iterations to converge on the maximum likelihood estimates
  • The estimate for premiere year also increased
    • This time, if premiere year increases by 1, the log odds of renewal for the 2018-19 TV season increase by 0.148 (roughly a 16% increase in the odds), rather than decrease.
    • If season count increases by 1 (say from 4 to 5 seasons), the log odds of renewal increase by 0.314 (roughly a 37% increase in the odds)
  • The asterisk by season count just flags the range its p-value falls in (denoted by Pr(>|z|))
    • The p-value of season count is >0.01 but <0.05 (which makes perfect sense, as Pr(>|z|) is 0.027)
  • Let’s create two null hypotheses-premiere year has no effect on a show’s chances of renewal, and season count has no effect (we are treating these as separate hypotheses).
    • Premiere year’s p-value is greater than 0.05, so we fail to reject its null hypothesis.
      • In other words, there is no evidence that premiere year affected a show’s chances for renewal.
    • Season count’s p-value is less than 0.05, so we reject its null hypothesis.
      • In other words, season count did affect a show’s chances for renewal.
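One detail worth restating: the estimates are on the log-odds scale, so converting to a percent change in odds takes exp(estimate) – 1. A quick sketch, assuming the two estimates in the output were 0.1481 and 0.3145 (which is how the percentages originally read):

```r
# Estimates from the model output (log-odds scale; assumed values)
est_premiere_year <- 0.1481
est_season_count  <- 0.3145

# Percent change in the odds of renewal per one-unit increase
pct_year    <- (exp(est_premiere_year) - 1) * 100
pct_seasons <- (exp(est_season_count)  - 1) * 100
round(c(pct_year, pct_seasons), 1)  # roughly 16.0 and 37.0
```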

That’s all for now. Thanks for reading.

Michael

 

R Lesson 3: Basic graphing with R

Hello everybody,

This is Michael, and today’s post will be on basic graphing with R. I’ll be using a different dataset for this post-murder_2015_final-which details the change in homicide rates from 2014 to 2015, as well as the individual homicide rates for 2014 and 2015, in 83 US cities (I felt this one was more quantitative than the dataset I used in my last two posts).

So let’s begin with a scatter plot.

29Jun capture2

  • If you can’t read this, here’s the code
    • plot(file$X2015_murders, file$change, pch=20, col="red", main="2014-2015 murder rate changes", xlab="2015 murders", ylab="Change from 2014 homicide rate")

29Jun capture1

As you can see, there are two outliers in the upper right-hand corner of the graph. If you want to find out which cities those are, here’s how you would add labels to each of the points.

1Jul capture2

  • Remember not to close the window with the graph when typing this command!

1Jul capture

From this graph, we can see that the two outliers (or cities with the largest 2014-to-2015 rise in murder rates) are Chicago and Baltimore.
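The labeling command in the screenshot is base R’s text() function. A self-contained sketch (toy data, hypothetical column names, and a null graphics device so no window is needed):

```r
# Toy stand-in for murder_2015_final
file <- data.frame(
  city          = c("Chicago", "Baltimore", "Boston"),
  X2015_murders = c(478, 344, 38),
  change        = c(73, 133, -15)
)

pdf(NULL)  # draw to a null device instead of a window
plot(file$X2015_murders, file$change, pch = 20, col = "red",
     xlab = "2015 murders", ylab = "Change from 2014 homicide rate")
# text() writes each city name just above (pos = 3) its point
text(file$X2015_murders, file$change, labels = file$city, pos = 3, cex = 0.7)
dev.off()
```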

Let’s try a bar graph now. Here’s the command to make a basic bar chart.

1Jul capture4

1Jul capture3

As you can see, 53 of the cities had a year-to-year rise in murder rates, 4 had no change in murder rates, and 26 had a year-to-year drop in murder rates (if you’re wondering what those cities are, check the spreadsheet attached to this post).
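The bar chart boils down to table() plus barplot(). A sketch with toy numbers (the real 53/4/26 split comes from the full dataset):

```r
# Toy stand-in for file$change
change <- c(5, -3, 0, 12, -1, 0, 7, 4)

# sign() maps each change to -1 (drop), 0 (no change), or 1 (rise)
counts <- table(sign(change))

pdf(NULL)
barplot(counts, names.arg = c("Drop", "No change", "Rise"),
        main = "Direction of murder rate changes")
dev.off()
counts
```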

Let’s make another graph-the box plot. Here is the command.

1Jul capture6

1Jul capture5

Some things to know when reading a box plot:

  • The bold dashes represent the median value for the murders in a certain state (or the only value if a state appears just once)
  • The yellow boxes represent the interquartile range, i.e. the middle 50% of the values for a certain state
  • The dashed lines (whiskers) extending above and below each box reach the highest and lowest values that fall within 1.5 times the interquartile range
    • If there aren’t any whiskers, then the yellow box covers the full range of values
  • Any circles you see are outliers (values beyond the whiskers) corresponding to a particular state.
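A minimal version of the boxplot() call (toy data; the screenshot uses the real murders-by-state columns):

```r
# Toy murders-by-state data
df <- data.frame(
  state   = c("CA", "CA", "CA", "CA", "TX", "TX", "NY"),
  murders = c(280, 17, 44, 130, 303, 9, 352)
)

pdf(NULL)
# One box per state; the heavy line is the median, the box is the middle 50%
boxplot(murders ~ state, data = df, col = "yellow")
dev.off()
```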

 

One more thing: if you’re wondering where I got this data, here’s the link-https://github.com/fivethirtyeight/data/blob/master/murder_2016/murder_2015_final.csv. The source is FiveThirtyEight.com, which writes interesting data-driven articles, such as The Lebron James Decision-Making Machine. FiveThirtyEight then posts the code and data used in these articles on GitHub so anyone can perform statistical analyses on the data (a good place to look for free datasets for your own data analysis projects, and much more interesting than the free datasets that come with R, some of which are 40+ years old).

Thank you,

Michael

R Lesson 2: Basic summarization of R Data

Hello everybody,

This is Michael, and today’s post will be about basic summarization of R Data. I thought this would be an appropriate place to continue from R Lesson 1: Basic R commands (I’ll be using the dataset from that post).

Let’s start off simple by using the summary() command to display a summary of the age field.

27Jun capture1

As you can see, the output shows the minimum age for any congressperson (25), the 1st quartile (45.4), the median age (53), the mean age (53.31), the 3rd quartile (60.55), and the maximum age (98.1). But what does this all mean?

  • The minimum is simply the lowest age amongst the congresspeople-25 years
  • The 1st quartile (45.4) is the age that separates the youngest 25% from everyone else
    • In other words, the youngest 25% of congresspeople were between 25 and 45.4 years old (at the start of their terms)
  • The median is the middle value of all the ages amongst the congresspeople-53 years in this case
    • The middle 50% of congresspeople were between 45.4 and 60.55 years old (at the start of their terms)
  • The mean is the average of all the ages
  • The 3rd quartile (60.55) is the age that separates the oldest 25% from everyone else
    • In other words, the oldest 25% of congresspeople were between 60.55 and 98.1 years old (at the start of their terms)
  • The maximum is simply the highest age amongst the congresspeople-98.1 years
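If you want to pull those same numbers out yourself, quantile() computes them directly (this assumes dataFile is the congress-terms dataset loaded in R Lesson 1):

```r
# The same five-number summary (plus the mean) that summary() reports
summary(dataFile$age)

# Or compute the quartiles directly: 0% = minimum, 25% = 1st quartile,
# 50% = median, 75% = 3rd quartile, 100% = maximum
quantile(dataFile$age, probs = c(0, 0.25, 0.5, 0.75, 1))
```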

However, if you use summary() on a non-numeric field, such as state, it instead displays the counts for each value (in this case, how many times each state appears in the dataset).

27Jun capture2

Another summary command I will discuss is table(), which shows all values of a variable along with each value’s frequency (how many times that value appears in the dataset).

Below is a table displaying the number of congresspeople who are representatives, as well as those who are senators.

27Jun capture3

Now here’s what the table would look like if we add another variable (I’ll use congress).

27Jun capture4

Like the last table, this table shows the number of representatives and senators, except divided up by congress (80th to 113th specifically).
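For reference, here are the commands behind those two tables (again assuming dataFile is the congress-terms dataset from R Lesson 1):

```r
# One-way table: how many representatives vs. senators
table(dataFile$chamber)

# Two-way table: the same counts, broken down by congress (80th-113th)
table(dataFile$chamber, dataFile$congress)
```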

Thanks for reading,

Michael

 

 

R Lesson 1: Basic R commands

Hello everyone,

It’s Michael, and I thought a perfect first post (aside from my welcome post) would be an intro to the wonderful, statistical, and completely free software known as R. The dataset I’ll be using is congress-terms.csv, which I have attached to this post.

To start we will first upload the file onto R. If you are wondering how to do that, here’s the command:

  • dataFile <- read.csv("/Users/michaelorozco-fletcher/Downloads/congress-terms.csv")

You may choose a different variable name, and your file path will be different too. If you don’t know your file path, one way to find it is to open the file in Excel, then click File > Properties. This window will pop up.

screengrab2

The location field would be your file path (along with a slash and the file name, congress-terms.csv in this case, after “Downloads”).
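If hunting down the file path sounds tedious, base R’s file.choose() opens a file-picker dialog and returns the path of whatever file you select, so you can skip the Excel step entirely:

```r
# Opens a dialog window; read.csv() then reads whichever file you pick
dataFile <- read.csv(file.choose())
```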

Alright, now that I’ve explained how to read a CSV file into R, here are some basic R commands.

screengrab1

str(dataFile) displays the structure of all the data fields in the file, which is important for understanding the data you are working with. As shown above, there are 18635 observations of 13 variables, which include

  • congress-which term of Congress does a particular congressperson serve in (anywhere from the 80th-lasting from 1947 to 1949-to the 113th-lasting from 2013 to 2015)
  • chamber-whether a particular congressperson is a part of the House or Senate
  • bioguide-each congressperson’s ID number within the Biographical Directory of the United States Congress
  • firstname, middlename, lastname-these are self-explanatory
  • suffix-a “Jr.” or “III” or something like that at the end of a particular congressperson’s name
  • birthday-again, self-explanatory
  • state-the state the congressperson represents
  • party-a congressperson’s party affiliation, whether D for Democrat, R for Republican, I for independent, among others
  • incumbent-whether a congressperson was in office at the beginning of a particular term (such as the 110th Congress) or came into office after another congressperson left
  • termstart-when a term of Congress began
  • age-how old a congressperson was when a term began

 

Now let’s check out some other basic commands. I used the age field because it’s the most naturally numeric field in the dataset.

screengrab3

Above you will find the mean, sd (standard deviation-the square root of the variance), var (variance-the standard deviation squared), max, and min for the age field. Some inferences we can make include

  • There is a fair spread among the ages (a standard deviation of 10.67 years)
  • Put another way, the variance of 114.03 tells us the ages are quite spread out around the 53.31 mean
  • The oldest congressperson was almost 100 when his term began (Strom Thurmond, 1902-2003)
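The commands behind that screenshot look like this (assuming dataFile is loaded as above); the values in the comments are the ones reported in this post:

```r
mean(dataFile$age)  # average age at term start, about 53.31
sd(dataFile$age)    # standard deviation, about 10.67 years
var(dataFile$age)   # variance (sd squared), about 114.03
max(dataFile$age)   # 98.1 (Strom Thurmond)
min(dataFile$age)   # 25
```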

These are just a few of the basic commands. For more, check out https://www.calvin.edu/~scofield/courses/m143/materials/RcmdsFromClass.pdf

Here’s the spreadsheet: congress-terms

Thank you,

Michael

Welcome

Hello readers,

My name is Michael, and this is my data science blog. Here you will find plenty of information about data science-ranging from how-to posts to analyses with actual datasets. I’ll mostly focus on MySQL, R, and Excel (with Java lessons from time to time) for now, though I may eventually add other analytical tools like Python.

Thank you,

Michael
