Instructions

Fill in this lab worksheet at your own pace. Knit it periodically to check that the knitted output matches what you see when working interactively in RStudio. Ask questions, consult with others, use Google, etc. At the end of the class session, email what you have to yourself so you don’t lose progress, and finish the worksheet by the last class session on June 1. You will submit it as Homework 7 on Canvas (both the Rmd and HTML files). These will be evaluated by Rebecca rather than peer reviewed.

You will want to have the following libraries loaded (you can add more if needed):

library(stringr)
library(readr)
library(dplyr)
## 
## Attaching package: 'dplyr'
## The following objects are masked from 'package:stats':
## 
##     filter, lag
## The following objects are masked from 'package:base':
## 
##     intersect, setdiff, setequal, union
library(ggplot2)
library(ggmap)

Background

Last week we saw data from health inspections of restaurants in Seattle since 2012 and used them to practice working with character/string data and regular expressions. Load in the data directly from the URL (this will work because a CSV is just a text file) and use cache=TRUE so that we don’t have to repeat this each time we re-knit:

restaurants <- read_csv("https://www.dropbox.com/s/lu4tom4wldh76o5/seattle_restaurant_inspections.csv?raw=1",
                        col_types = "cDcccccnnciclccic")

As a reminder of what these data look like:

str(restaurants)
## Classes 'tbl_df', 'tbl' and 'data.frame':    68456 obs. of  17 variables:
##  $ Name                 : chr  "WALLINGFORD 50TH STREET MARKET" "WALLINGFORD 50TH STREET MARKET" "WALLINGFORD 50TH STREET MARKET" "WALLINGFORD 50TH STREET MARKET" ...
##  $ Date                 : Date, format: "2012-01-04" "2012-05-14" ...
##  $ Description          : chr  "Grocery Store-no seating - Risk Category I" "Grocery Store-no seating - Risk Category I" "Grocery Store-no seating - Risk Category I" "Grocery Store-no seating - Risk Category I" ...
##  $ Address              : chr  "2508 N 50TH ST" "2508 N 50TH ST" "2508 N 50TH ST" "2508 N 50TH ST" ...
##  $ City                 : chr  "Seattle" "Seattle" "Seattle" "Seattle" ...
##  $ ZIP                  : chr  "98103" "98103" "98103" "98103" ...
##  $ Phone                : chr  "(206) 633-4040" "(206) 633-4040" "(206) 633-4040" "(206) 633-4040" ...
##  $ Longitude            : num  122 122 122 122 122 ...
##  $ Latitude             : num  47.7 47.7 47.7 47.7 47.7 ...
##  $ Type                 : chr  "Routine Inspection/Field Review" "Routine Inspection/Field Review" "Consultation/Education - Field" "Consultation/Education - Field" ...
##  $ Score                : int  0 0 0 0 5 0 20 20 20 0 ...
##  $ Result               : chr  "Satisfactory" "Satisfactory" "Complete" "Complete" ...
##  $ Closure              : logi  FALSE FALSE FALSE FALSE FALSE FALSE ...
##  $ Violation_type       : chr  NA NA NA NA ...
##  $ Violation_description: chr  NA NA NA NA ...
##  $ Violation_points     : int  0 0 0 0 5 0 10 5 5 0 ...
##  $ Business_ID          : chr  "PR0001007" "PR0001007" "PR0001007" "PR0001007" ...

There are often multiple rows per Business_ID per Date, such as when an establishment is given violation points for multiple problems. The Result and Score columns have the same values on all rows for a given restaurant and date, but the violation type (“red” or “blue”), violation description, and violation points differ from row to row, with each violation on its own row. Keep this duplication in mind as you work: you will need to drop extra rows to get one row per business per date, or even one row per business. You can do this with the dplyr tools we’ve studied, such as the distinct function, or group_by with summarize or filter, to collapse multiple rows.
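
For instance, a minimal illustration of collapsing with distinct (a sketch only, not one of the worksheet answers):

# keep one row per business per date; dropping the per-violation
# columns lets distinct collapse the repeated rows
one_row_per_date <- restaurants %>%
    distinct(Business_ID, Date, Score, Result)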

Preparing the data

Fixing longitude

For some reason, negative signs weren’t stored with the Longitude variable (Seattle’s longitude should be around -122 degrees, not 122 degrees). Overwrite the Longitude column by taking Longitude and multiplying it by -1.
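
As a hint, a minimal sketch using mutate:

restaurants <- restaurants %>%
    mutate(Longitude = -1 * Longitude) # flip sign: Seattle is near -122 degrees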

[YOUR CODE HERE]

Restaurants only

There are grocery stores without seating, school cafeterias, and other less relevant businesses in the data. We only want to look at restaurants. Identify and keep only businesses whose Description starts with Seating, e.g. "Seating 51-150 - Risk Category III". Call the new data frame with the filtered rows restaurants_only.
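
One possible approach, using str_detect with a pattern anchored to the start of the string:

restaurants_only <- restaurants %>%
    filter(str_detect(Description, "^Seating")) # ^ anchors the match to the start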

[YOUR CODE HERE]

Scores over time

Now make a data frame using restaurants_only called scores_over_time with exactly one row per Business_ID per inspection Date, containing the business Name, its Address and ZIP, its Longitude and Latitude, and the value of Score on each inspection date. With the data structured this way, you will be able to analyze trends over time for each establishment. There should no longer be duplicate rows when an establishment has multiple violations on a single date.
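
One possible approach: keep just the columns named above, then use distinct to drop the duplicate rows:

scores_over_time <- restaurants_only %>%
    select(Business_ID, Date, Name, Address, ZIP,
           Longitude, Latitude, Score) %>%
    distinct() # violation details are gone, so repeated rows collapse to one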

[YOUR CODE HERE]

Preparing to label bad scores

In order to label restaurants with bad scores (say, 40 and above), you’ll want to make a column called Label_40 on scores_over_time. It should have the Name if the Score is greater than or equal to 40, and be blank (i.e. "") if the Score is below that. Use mutate and ifelse to make this Label_40 column.
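
A minimal sketch:

scores_over_time <- scores_over_time %>%
    mutate(Label_40 = ifelse(Score >= 40, Name, "")) # blank label below 40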

[YOUR CODE HERE]

Most recent scores

We’ll also want to look at just the most recent score for each restaurant. Make a data frame called recent_scores from scores_over_time that has one row per Business_ID, with the business Name, its Address and ZIP, Longitude and Latitude, the most recent value of Score, the Date of that score, and Label_40. The slides from last week on finding the most recent inspections of coffee shops have code that might help.
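
One way to do this, assuming scores_over_time now has exactly one row per business per date: group by Business_ID and keep each group’s latest row:

recent_scores <- scores_over_time %>%
    group_by(Business_ID) %>%
    filter(Date == max(Date)) %>% # keep only the most recent inspection
    ungroup()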

[YOUR CODE HERE]

Map-making

Mapping the recent scores

Now, use the ggmap package and the longitude and latitude information to plot the most recent inspection scores for restaurants on top of a map of Seattle. Experiment with zoom levels to get the right region bounds. Try coloring and/or sizing the points according to their most recent inspection score (bigger points = higher score). You can use scale_color_gradient to set the colors so that establishments with lower scores are white or gray and establishments with higher scores are red, and scale_size to set the point sizes. Play with these options and map settings until you get something you think looks good.
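
A sketch of one starting point (the location string and zoom level here are guesses to experiment with; recent versions of ggmap also require registering a Google API key with register_google before get_map can fetch Google tiles):

seattle_map <- get_map(location = "Seattle, WA", zoom = 11)
ggmap(seattle_map) +
    geom_point(data = recent_scores,
               aes(x = Longitude, y = Latitude,
                   color = Score, size = Score),
               alpha = 0.5) +
    scale_color_gradient(low = "gray80", high = "red") +
    scale_size(range = c(0.5, 3))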

[YOUR CODE AND PLOT HERE]

The U District

Now repeat the plot, but zoomed in on the U District area. Add some text labels using Label_40 for businesses whose scores were 40 or higher on their most recent inspection. See the ggplot2 docs on geom_text and geom_label for how you can get these to look good, perhaps trying out the ggrepel package to avoid overlaps.
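
A sketch along the same lines, assuming get_map can geocode the neighborhood name (you could instead pass a longitude/latitude pair):

udistrict_map <- get_map(location = "University District, Seattle", zoom = 15)
ggmap(udistrict_map) +
    geom_point(data = recent_scores,
               aes(x = Longitude, y = Latitude, color = Score)) +
    geom_text(data = recent_scores,
              aes(x = Longitude, y = Latitude, label = Label_40),
              size = 3, vjust = -0.5, check_overlap = TRUE) +
    scale_color_gradient(low = "gray80", high = "red")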

[YOUR CODE AND PLOT HERE]

Capitol Hill

Repeat the above, but for Capitol Hill instead.

[YOUR CODE AND PLOT HERE]

Scores over time

Sub-sampling the data

Now we want to look at inspection scores over time for restaurants, but there are far too many establishments to visualize at once. Pick something more limited to investigate, and subset the scores_over_time data to include roughly 5 to 25 establishments. To do this, make a vector containing just the Business_ID or Name values of the restaurants of interest, and then filter the scores_over_time data against it. Some example angles you could take for this subsetting:

  • Restaurants in your ZIP code
  • Your favorite chain restaurant
  • Diners
  • Coffee shops in a part of the city
  • A cuisine based on words in restaurant names (e.g. “Pho”)
  • Restaurants that have had a really bad score at some time – did they have previous bad scores, or were they mostly without problems before?

The string pattern matching tools from last week could be helpful depending on the criteria you choose.
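
For example, a hypothetical subset of pho restaurants (Name values in these data are uppercase, and the \b word boundaries avoid matching names that merely contain the letters “PHO”):

pho_ids <- restaurants_only %>%
    filter(str_detect(Name, "\\bPHO\\b")) %>% # hypothetical criterion
    distinct(Business_ID) %>%
    pull(Business_ID)
scores_over_time_pho <- scores_over_time %>%
    filter(Business_ID %in% pho_ids)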

[YOUR CODE TO SAMPLE HERE]

Mapping your subsample

Make a plot, appropriately cropped, showing the locations of the restaurants you’ve chosen, with a dot and a text label for each restaurant.
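
A sketch, assuming the hypothetical scores_over_time_pho subset from the previous step and using ggrepel to keep the labels from overlapping:

library(ggrepel)
pho_locations <- scores_over_time_pho %>%
    distinct(Business_ID, Name, Longitude, Latitude) # one point per location
pho_map <- get_map(location = "Seattle, WA", zoom = 12)
ggmap(pho_map) +
    geom_point(data = pho_locations,
               aes(x = Longitude, y = Latitude)) +
    geom_label_repel(data = pho_locations,
                     aes(x = Longitude, y = Latitude, label = Name),
                     size = 2)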

[YOUR CODE AND PLOT HERE]