Welcome!

I am a physical oceanographer interested in how ocean water is mixed and transformed. I am currently working as Research Scientist at the Bedford Institute of Oceanography in Halifax, Nova Scotia.

Recent Posts:

A Shiny app that uses 'reactive' data

Introduction

A powerful and increasingly useful tool available to users of R is the interactive app package known as Shiny. Shiny provides a framework for building interactive web apps that can be used for visualization, exploratory data analysis, data quality control – really anything that could benefit from an interactive environment.

This post will not go into the basics of Shiny. There are many resources on the web for getting started, and of particular value is the RStudio Shiny gallery.

The purpose of this post is two-fold: to test the embedding of a Shiny app within a jekyll blog post, and to document a test app that I made that uses “reactive data”.

Using Shiny apps with jekyll

In order to run a Shiny app, a “shiny server” is required. The easiest way to do this is to install the necessary R packages (e.g. install.packages('shiny')) on your computer and run a “local” server. This is very easy to do within RStudio, but it can also be done from the system command line, e.g. with a command like:

$ Rscript -e "library(shiny); runApp(port=4321)"

Then, opening a web browser to the url 127.0.0.1:4321 will display and run the app. However, the app is only available on the machine on which it is being run, not as a distributed web app that other users can connect to.

For apps that are hosted and made available to others, a proper Shiny server environment must be installed on a properly configured web server. Thankfully, Shiny Server is free (and open source); however, configuring such a server is not a task for the uninitiated.

Another option is to use the ShinyApps.io service that is provided by RStudio. The free option allows for 5 applications, with a limited amount of server time available (25 hours per month at the time of writing).

For a GitHub Pages-hosted site (like this one), the lack of an R environment and a Shiny server on GitHub means that any apps to be included in posts must be hosted somewhere else. For this example, I will use my shinyapps.io account.

Reactive data

The premise of “reactive data” (in fact, “reactivity” is a fundamental principle of Shiny apps generally) is that the data being used in the app may change from time to time. In a regular R script, loading data using commands like load(), read.csv(), etc., is easy and will occur every time the script is run. In a Shiny app, however, such commands may load the data only once, when the server is started, so the app will never pick up subsequent changes to the underlying data (e.g. if more data is downloaded).

To get around this, we use the shiny function reactiveFileReader(), which wraps the data reading in a “reactive” object, meaning that it can be updated (at prescribed intervals) even while the app is running.

The example code below is a simple app I made to demonstrate the principle. In it, the user clicks the action button to generate new data, which is saved as a .rds object in the server file system. That .rds object is then read by the file reader, which checks for changes every 1000 milliseconds (1 second) and re-reads the file only if it has changed. It’s a silly way to make an interactive plot that updates with new data, but it demonstrates the principle of reactive data well.

library(shiny)

## Generate a new random dataset and save it to 'data.rds'
new_data <- function() {
    d <- list(a=1:100, b=rnorm(100))
    saveRDS(d, file='data.rds')
}
new_data() # create the file before the app starts

server <- function(input, output, session) {
    ## Re-read 'data.rds' whenever it changes on disk, checking every 1000 ms
    data <- reactiveFileReader(1000,
                               session,
                               'data.rds',
                               readRDS)

    ## Overwrite the rds file with new random data when the button is clicked
    observeEvent(input$newdata, {
        new_data()
    })

    ## The histogram re-renders automatically whenever data() updates
    output$plot1 <- renderPlot({
        hist(data()[['b']])
    })
}

ui <- fluidPage(    
    
    ## Give the page a title
    titlePanel("Title"),
    
    ## Generate a row with a sidebar
    sidebarLayout(      
        
        ## Define the sidebar with one input
        sidebarPanel(
            actionButton(inputId = 'newdata',
                         label = 'Generate new data')
        ),
        
## Create a spot for the histogram
        mainPanel(
            plotOutput("plot1")  
        )
        
    )
)

shinyApp(ui = ui, server = server)

The app

To include the app in the post, we upload it to shinyapps.io and then include it as an iframe, as follows (note the HTML comment tags around the iframe code chunk, which keep it from being evaluated in the rendered HTML):

<!-- <iframe id="example1" src="https://clarkrichards.shinyapps.io/reactiveData/" 
style="border: non; width: 100%; height: 500px" 
frameborder="0">
</iframe> -->



Using the oce package to make nice maps

Introduction

Making maps is a pretty important part of doing, and presenting, ocean data analyses. Except for very small domains, using map projections is crucial to ensure that the map is not distorted. This is particularly true for polar and high latitude regions, such as the Arctic (where I do much of my work).

In this post I will give a brief introduction to making projected maps with the oce package, including not just the land/coastline but also various ways of plotting the bathymetry. For the latter I will discuss the handy marmap package.

Making a polar stereographic map of the Canadian Arctic

To make “projected” maps with oce, you can use the mapPlot() function. The easiest way is to call mapPlot() with a coastline object to set the projection you want, and then add other elements (bathymetry, etc.) using functions like mapLines(), mapPoints(), mapImage(), etc. You may need to redraw the coastline at the end to clean up anything that plotted over land that you don’t want.

The coastlineWorldFine dataset from the ocedata package is pretty good for sub-regions such as the Canadian Arctic.

library(oce)
## Loading required package: testthat
## Loading required package: gsw
library(ocedata) #for the coastlineWorldFine data
data(coastlineWorldFine)

To get the projection you want, it must be passed in the projection= argument using “proj4” syntax as a character string. You can read up on the syntax and available projections in the help – i.e. ?mapPlot – or have a look at:

[PROJ4](https://proj.org/operations/projections/index.html)

For polar maps, the most commonly used is the stereographic projection:

## Save it to a function to make it easy to re-run
mp <- function() {
    mapPlot(coastlineWorldFine, projection="+proj=stere +lon_0=-90 +lat_0=90",
            longitudelim = c(-120, -60),
            latitudelim = c(60, 85), col='grey')
}
mp()

[Figure: polar stereographic map of the Canadian Arctic]

In this example, the +lon_0 parameter defines the longitude at the center of the projection, and the +lat_0 is the latitude at the center of the projection. The mapPlot() arguments longitudelim and latitudelim are used to control the extent of the map.

As for adding bathymetry, there are a few options:

  1. oce includes a low-res version of the etopo dataset, called topoWorld, that can be used either as a contour plot or as an image plot.

  2. The marmap package allows easy downloading of subsets of the full-resolution etopo data, which can be used to make nicer plots. One downside is that because of the “triangle” nature of a polar stereographic projection you have to download a lot more data than you really need.

Using topoWorld

A quick way of adding bathymetry is to simply use the mapContour() function (analogous to the base-R contour()).

data(topoWorld)
mp()
mapContour(topoWorld, levels=-c(500, 1000, 2000, 4000),
           drawlabels=FALSE)

[Figure: map with topoWorld bathymetry contours]

Contour plots can be tricky, because of the labeling, choosing which contours to plot, etc. Another option is to use the mapImage() function:

mp()
mapImage(topoWorld, col=oceColorsGebco, breaks=seq(-4000, 0, 500))
mapPolygon(coastlineWorldFine, col='grey')
mapGrid()

[Figure: map with topoWorld bathymetry plotted via mapImage()]

But sometimes using mapImage() shows off the “blockiness” of the low-res topoWorld dataset – especially near the poles, where the cells (evenly divided in lon/lat space) get really skinny.

One fix is to use a higher-res dataset (so the “boxes” are smaller – see the next section). Another is to use the filledContour argument of mapImage() to plot filled contours rather than the individual grid cells.

mp()
mapImage(topoWorld, col=oceColorsGebco, breaks=seq(-4000, 0, 500), filledContour = TRUE)
mapPolygon(coastlineWorldFine, col='grey')
mapGrid()

[Figure: map with filled-contour bathymetry]

Using marmap

Here we load the marmap package and download the bathymetry. Note that because we have the North Pole in view, we basically have to download an entire half-hemisphere. It takes a little while to download, but if you use the keep=TRUE argument the data are saved to a file and simply reloaded from there on subsequent runs.

However, because there is so much more data, the plotting can be quite a bit slower.

library(marmap)
## 
## Attaching package: 'marmap'
## The following object is masked from 'package:oce':
## 
##     plotProfile
## The following object is masked from 'package:grDevices':
## 
##     as.raster
b <- as.topo(getNOAA.bathy(-180, 0, 55, 90, keep=TRUE))
## File already exists ; loading 'marmap_coord_-180;55;0;90_res_4.csv'
mp()
mapImage(b, col=oceColorsGebco, breaks=seq(-4000, 0, 500))
mapPolygon(coastlineWorldFine, col='grey')
mapGrid()

[Figure: map with high-resolution marmap bathymetry]

Looks pretty good, though!

The (ocean) physics of The Ocean Cleanup's System 001

Introduction

The Ocean Cleanup, brainchild of Dutch inventor Boyan Slat, was in the news again this past week after announcing that, in addition to their system being unable to collect plastic as intended, it had suffered a mechanical failure. “Wilson” is currently being towed to Hawaii, where it will undergo repairs and upgrades, presumably to be towed back out to the garbage patch for a second trial.

I am not a mechanical engineer, so I don’t intend to comment on the details of their mechanical failure. I am, however, a sea-going oceanographer, which means that I am used to the sorts of situations with scientific research equipment that were so succinctly paraphrased by Dr. Miriam Goldstein:

“The ocean is strong and powerful, and likes to rip things up.”

– Dr. Miriam Goldstein, prescient oceanographer

In short – the ocean is a difficult place to work. There are literally CONFERENCES dedicated to the engineering of putting things out to sea and having them survive (see the MTS Buoy Workshop, which I have participated in). There is a saying in oceanographic fieldwork: if you get your gear back, it was a successful program. If it recorded data – that’s icing on the cake.

Designing for physics

But beyond the engineering, there is the question of what physics TOC are relying on for their system to be successful. Some of you may recall that the original design was to moor (i.e. anchor) their device in 6000m (20000 feet) of water, and let existing ocean currents sweep garbage into the U-shaped structure. Thankfully, they realized the challenges associated with deep-ocean moorings, and abandoned that idea.

The latest design iteration (misleadingly called “System 001”, as though they hadn’t built and tested any others before it) is a freely-drifting system, avoiding the use of anchors. TOC claim that under the influence of current, wind, and waves, their design will drift faster than the plastic – causing it to accumulate in the U, making for easy pickup. They summarize the concept with a little explainer video on their website, with a representative screenshot below:

Nice how the wind, waves, and current all are going in the same direction!!!

Based on a quick Twitter rant that I had after thinking about all this for a few minutes (see here), I wanted to lay out the various points that have either a) been missed by the TOC design team, or b) been deliberately excluded from their rosy assessment of how they expect their system to actually collect garbage. What follows is a “first stab” at a physical-oceanographic assessment of the basic idea behind “System001”, and what TOC would need to address to convince the community (i.e. scientists, conservationists, etc) that their system is actually worth the millions of dollars going into development and testing.

The premise

As outlined in the video, the premise of System001 as a garbage collection system is that through the combined action of wind, waves, and currents, the U-shaped boom will travel faster through the water than the floating plastic, thereby collecting and concentrating it for eventual removal. This appears to be based on the idea that while both the boom and the plastic will drift with the current, because the boom protrudes from the water (like a sail), it will actually move faster than the surface water by catching wind.

There are some issues with this premise. Or, at least, there are some real aspects of oceanography that have either been ignored or missed in thinking that such a system will behave in the predictable way described by TOC. I’ll try and outline them here.

Stokes drift

Any of you who have had an introduction to ocean waves may have heard that during the passage of a wave, the water particles move in little circles (often called wave orbital motion). While not a bad “first-order” description, it turns out that for real ocean waves there is also some drift in the direction of wave propagation. This drift is named after George Gabriel Stokes, who first described it mathematically in 1847 (see wikipedia article here).

Stokes drift, from https://www.researchgate.net/publication/315739116_Breaching_of_Coastal_Barriers_under_Extreme_Storm_Surges_and_Implications_for_Groundwater_Contamination_Application_of_XBeach_in_Coastal_Flood_Propagation/figures?lo=1

The amount of drift depends nonlinearly on both the amplitude and the wavelength of the wave. For example, for a 0.5m amplitude wave with a wavelength of 10m and period of 10s (something like typical ocean swell), the drift velocity is about 10 cm/s right at the surface.
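To make that concrete, here is a quick back-of-the-envelope version in R, using the standard surface expression for deep-water Stokes drift, us = (ak)^2 c, and taking the wave parameters quoted above at face value (this is my own sketch, not a calculation from TOC):

a <- 0.5                # wave amplitude [m]
lambda <- 10            # wavelength [m]
period <- 10            # wave period [s]
k <- 2 * pi / lambda    # wavenumber [1/m]
cp <- lambda / period   # phase speed [m/s]
us <- (a * k)^2 * cp    # Stokes drift right at the surface [m/s]
us                      # ~0.1 m/s, i.e. about 10 cm/s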

Of course, Stokes’ solution describes the motion of the water parcels being moved by the wave. For those water parcels to then have an effect on anything in the water, one would need to consider the various components of force/impulse/momentum (i.e. our buddy Sir Isaac Newton). Needless to say, it seems obvious that a smallish piece of neutrally buoyant plastic will respond to the Stokes drift much more readily than a 600m long floating cylinder with a large mass (and therefore large inertia).

This alone could be enough to quash the idea of a passive propagating collection system.

Ekman currents

While we’re talking about long-dead European fluid mechanics pioneers, any study of the effect of wind on currents wouldn’t be complete without a foray into the theory proposed by Swedish oceanographer Vagn Walfrid Ekman in 1905. What Ekman found was that when the wind blew over the surface of the ocean, the resulting current (forced by friction between the air and the water) didn’t actually move in the same direction as the wind. The reason for this is the so-called “Coriolis effect”, whereby objects moving on the surface of the Earth experience an “acceleration” orthogonal to their direction of motion that appears to make them follow a curved path (for those who want to go down the rabbit hole, the Coriolis acceleration is essentially a “fix” for the fact that the surface of the Earth is a non-inertial reference frame, and therefore doesn’t satisfy the conditions for Newton’s laws to apply without modification).

Anyway – the consequence is that in an ideal ocean, with a steady wind blowing over the surface, the surface currents actually move at an angle of 45 degrees to the wind direction! Whether it’s to the left or right of the wind depends on which hemisphere you are in – I’ll leave it as an exercise to determine which is which. And what’s cooler is that the surface current then acts like a frictional layer on the water just below it, causing it to move at an angle too, and so on, with the effect that the wind-forced flow actually makes a SPIRAL that gets smaller with depth. This is known as the Ekman spiral.

Ekman spiral, from http://oceanmotion.org/html/background/ocean-in-motion.htm

The actual depth that the spiral penetrates to depends on a mysterious ocean parameter called Az, which describes the vertical mixing of momentum between the layers – kind of like the friction between them. What is clear, though, is that a small particle of plastic floating close to the surface and a 3m deep floating structure will likely not experience the same wind-forced current, and therefore won’t move in the same direction. Hmmm … that’s going to make it hard to pick up pieces of plastic.
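For the curious, here is a minimal sketch of the classic Ekman spiral solution (Northern hemisphere, steady wind blowing toward +y); the values for the surface speed V0 and the eddy viscosity Az are assumptions I picked purely for illustration:

V0 <- 0.1                           # surface current speed [m/s] (assumed)
f <- 1e-4                           # mid-latitude Coriolis parameter [1/s]
Az <- 0.01                          # vertical eddy viscosity [m^2/s] (assumed)
a <- sqrt(f / (2 * Az))             # inverse decay scale [1/m]
z <- seq(0, -100, length.out=100)   # depth [m], negative downward
u <- V0 * exp(a * z) * cos(pi/4 + a * z)
v <- V0 * exp(a * z) * sin(pi/4 + a * z)
plot(u, v, type='l', asp=1, xlab='u [m/s]', ylab='v [m/s]') # hodograph traces the spiral

At the surface the current is 45 degrees to the right of the wind, and it rotates further to the right (while decaying) with depth.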

What is a “Gyre” anyway?

The final point I wanted to make in this article (I have more, which I’ll summarize at the end for a possible future article) is to try and give a sense of what currents in the ocean (including in the “gyre”, or the region often referred to as the “Great Pacific Garbage Patch”) actually look like. The conception that there is a great swirling current thousands of km across is true only when the currents are averaged over a very long time. At any given instant, however, the ocean current field is a mess of flows at various space and time scales. An appropriate term for describing typical ocean flow fields is “turbulent”, as in an oft-viewed video made by NASA from satellite ocean current data.

To illustrate this, I took some screenshots of current conditions from the wonderful atmosphere/ocean visualization tool at earth.nullschool.net showing: ocean currents, surface waves, and wind.

Ocean currents

Ocean waves

Wind

These images illustrate the potential problem with TOC’s idea, by highlighting the fact that the wind, wave, and current fields of the ocean (even in the “quiet” garbage patch) are highly variable in space and time, and are almost never aligned at the same moment. What’s more, the currents and waves at a given time and location are not always a result of the wind at that location. Eddies in the ocean are generated through all kinds of different processes, and can propagate across ocean basins before finally dissipating.

Similarly, surface waves have been measured to cross oceans (i.e. the famous “Waves across the Pacific” study pioneered by the transformative oceanographer Walter Munk).

Other issues

Following the “rule of three”, I tried to hit what I consider to be the biggest concerns with the TOC system design and principle, from my perspective as a physical oceanographer. However, there are other issues that should be addressed if the TOC team really believes the system as designed is capable of doing what they say. And really, it seems like a crazy waste of effort on behalf of everyone involved to have spent this much time on something if they aren’t sure it will even work theoretically … not to mention the money spent thus far. So, part of me has to believe that the dozens of people involved care deeply about making something that might actually work, and that they have studied and considered all the effects and potential issues I (and others) have raised.

Anyway, the other issues are:

  • What is the actual response of the system to a rapid change in wind/wave direction? Wind can change direction pretty quickly, especially compared to ocean currents. What’s to prevent a bunch of accumulated plastic getting blown out the open end of the U after a 180 degree shift in wind but before the system can re-orient?

  • What about wave reflection from the boom structure itself? It is a well-known fact that objects (even floating ones) can reflect and “scatter” waves (scattering is when the reflected waves have a shorter wavelength than the original ones), and it seems like this could create a wave field in the U that might actually cause drift out of the system.

  • The idea that all wildlife can just “swim under” the skirt (because it’s impermeable) is not supported by anything that I consider to be rigorous fluid mechanics, aside from the fact that much of what actually lives in the open ocean are non-motile or “planktonic” species. There are a lot of communities in the open ocean that float and drift at the surface, and I see no way that if the System collects floating plastic as it is designed that it won’t just sweep up all those species too. The latest EIA brushed off the effect of the System on planktonic organisms by stating that they “are ubiquitous in the world’s oceans and any deaths that occur as a result of the plastic extraction process will not have any population level effects”. But that doesn’t take into account that the stated mission is to deploy 60 such systems, which are estimated to clean the garbage patch of surface material at a rate of 50% reduction every 5 years. It stands to reason that they would also clean the Pacific of its planktonic communities by the same amount.

Recording and replaying plots with the `recordPlot()` function

This post is not going to focus on anything oceanographic, but on a little trick that I just learned about using base graphics in R – the recordPlot() function.

R plot systems

First, for those who either don’t use R or who have been living under a rock, there are (in my opinion) two major paradigms for producing plots from data in R. The first is the original “base graphics” system – the sequence of functions bundled with R that are part of the graphics package which is installed and loaded by default.

The second is the ggplot2 package, written by Hadley Wickham, which uses the “grammar of graphics” approach to plotting data and is definitely not your standard plot(x, y) approach to making nice-looking plots. To be fair, I think ggplot is quite powerful, and I never discourage anyone from using it, but because I don’t use it in my own work (for many reasons too complicated to get into here) I don’t tend to actively encourage it, either.

Storing ggplots in objects

Anyway, one thing that I’ve always liked about the ggplot approach is that the components of the plot can be saved in objects, and built up in pieces by simply adding new plot commands to the object. A typical use case might be like this (using the built-in iris dataset):

library(ggplot2)
ggplot(iris, aes(x=Sepal.Length, y=Petal.Length)) + geom_point()

[Figure: ggplot scatterplot of iris Sepal.Length vs Petal.Length]

Note how the foundation of the plot is created with the ggplot() function, but the points are added through the + of the geom_point() function.

However, for more complicated plots, the components are often saved into an object which can have other geom_* bits added to it later on. Then the final plot is rendered by “printing” the object:

pl <- ggplot(iris, aes(x=Sepal.Length, y=Petal.Length))
pl <- pl + geom_point() # add the points
pl <- pl + geom_density2d() # add a 2d density overlay
pl # this "prints" the plot and renders it on the screen

[Figure: ggplot with points and 2d density overlay]

Admittedly, this is pretty cool, not least because you can always re-render the plot just by printing the object.

Recording a base graphics plot

While not quite the same, I recently discovered that it’s possible to “record” a base graphics plot and save it in an object, allowing you to re-render the same plot later. The case that led me to stumble onto this was one where I had a complicated bit of code that fit a model to data and plotted the result, and I wanted to step through various iterations of removing certain points and adding new ones, to see what the effect on the fit would be.

I often do this using the pdf() function in R, so that each new plot becomes another page in a pdf file that can be stepped through. However, another use case that I thought of afterwards is in writing Rmarkdown documents (like this one!), where you’d like to keep showing a base plot but add different elements to it consecutively. Because of the way Rmarkdown works, the graphics from each code chunk are rendered independently, so it’s not possible to, say, generate a plot in one chunk (using plot()) and then add to it with points() or lines() in another chunk.

Let’s see an example. I’ll make a plot that has a bunch of pieces, and then save it to an object with recordPlot():

plot(iris$Sepal.Length, iris$Petal.Length,
     xlab='Sepal Length', ylab='Petal Length',
     pch=round(runif(nrow(iris), max=25)))
grid()
title('My base plot')

[Figure: base scatterplot of the iris data with grid and title]

pl <- recordPlot()

Now, if I want to redo the plot exactly as I already did, I just “print” the object:

pl

[Figure: the recorded plot, replayed]

So now if I want to redo the plot, but add different pieces, I can redo the plot as above, and add whatever I want with the normal base graphics functions:

pl # start with the original plot
m <- lm(Petal.Length ~ Sepal.Length, data=iris)
abline(m, lwd=2, col=2)
title(c('', '', 'with a subtitle!'), col.main=2)

[Figure: recorded plot with regression line and subtitle added]

pl <- recordPlot()

And to keep going I just keep recording the plot and starting each chunk with it:

pl
II <- iris$Petal.Length > 4
points(iris$Sepal.Length[II], iris$Petal.Length[II], col=3, pch=19)
sl <- seq(4, 8, length.out=100)
pred <- predict(m, newdata=list(Sepal.Length=sl), interval='confidence') # note: don't call this 'pl', or it would clobber the recorded plot!
lines(sl, pred[,2], lty=2, col=2, lwd=2)
lines(sl, pred[,3], lty=2, col=2, lwd=2)

[Figure: recorded plot with highlighted points and confidence intervals]

Bootstrapping uncertainties with the boot package

People often ask me what I like about R compared to other popular numerical analysis software commonly used in the oceanographic sciences (coughMatlabcough). Usually the first thing I say is the package system (including the strict rules for package design and documentation), and how easy it makes taking advantage of work that others have contributed in a consistent and reproducible way. The second is usually how well-integrated statistics and statistical methods are into the language. R is fundamentally a data analysis language (by design), something that I’m often reminded of when I am doing statistical analysis or model fitting.

Fitting models, with uncertainties!

Recently I found myself needing to estimate both the slope and the x-intercept of a linear regression, with uncertainties. It turns out this is pretty easy for the slope, since it’s one of the directly estimated parameters from the regression (using the lm() function), but it wasn’t as clear how to get the uncertainty for the x-intercept. In steps the boot package, which provides a nice interface for doing bootstrap estimation in R. I won’t get into the fundamentals of what bootstrapping involves here (the linked wikipedia article is a great start).

Ok, first, a toy example (which isn’t all that different from my real research problem). We make some data following a linear relationship (with noise):

library(boot)
x <- 0:20
set.seed(123) # for reproducibility
y <- x + rnorm(x)
plot(x, y)

[Figure: scatterplot of the synthetic linear data]

We can easily fit a linear model to this using the lm() function, and display the results with the summary() function:

m <- lm(y ~ x)
plot(x, y)
abline(m)

[Figure: scatterplot with the fitted regression line]

summary(m)
## 
## Call:
## lm(formula = y ~ x)
## 
## Residuals:
##     Min      1Q  Median      3Q     Max 
## -1.8437 -0.5803 -0.1321  0.5912  1.8507 
## 
## Coefficients:
##             Estimate Std. Error t value Pr(>|t|)    
## (Intercept)  0.37967    0.41791   0.909    0.375    
## x            0.97044    0.03575  27.147   <2e-16 ***
## ---
## Signif. codes:  0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1
## 
## Residual standard error: 0.992 on 19 degrees of freedom
## Multiple R-squared:  0.9749,	Adjusted R-squared:  0.9735 
## F-statistic:   737 on 1 and 19 DF,  p-value: < 2.2e-16

I love lm(). It’s so easy to use, and there’s so much information attached to the result that it’s hard not to feel like you’re a real statistician (even if you’re an imposter like me). Check out the neat little table, showing the estimates of the y-intercept and slope, along with their standard errors, t values, and p values.

So how to get the error on the x-intercept? Well, one way might be to propagate the slope and y-intercept uncertainties through a rearrangement of the equation, but for anything even a little complicated this would be a pain. Let’s do it instead with the boot package.

We need to create a function that takes the data as its first argument and an index as its second, which the boot() function uses to run the function on a subset of the data. Let’s demonstrate first by writing a function to calculate the slope, and see how the bootstrapped statistics compare with what comes straight from lm():

slope <- function(d, ind) {
    m <- lm(y ~ x, data=d[ind,])
    coef(m)[[2]]
}
slope_bs <- boot(data.frame(x, y), slope, 999)
slope_bs
## 
## ORDINARY NONPARAMETRIC BOOTSTRAP
## 
## 
## Call:
## boot(data = data.frame(x, y), statistic = slope, R = 999)
## 
## 
## Bootstrap Statistics :
##      original        bias    std. error
## t1* 0.9704362 -0.0009770872   0.0391621

The bootstrap estimate decided (using 999 subsampled replicates) that the value of the slope should be 0.970436155978, while that straight from the linear regression gave 0.970436155978 – i.e. exactly the same, which makes sense, since the “original” value reported by boot() is just the statistic computed on the full dataset. Interestingly, the standard error is slightly higher than from lm(). My guess is that it would get closer to the real value with more replicates.
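An easy way to test that guess (a quick sketch – I haven’t run this to convergence) is to crank up the number of replicates:

slope_bs2 <- boot(data.frame(x, y), slope, 9999) # 10x the replicates
slope_bs2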

Ok, now to do it for the x-intercept, we just supply a new function:

xint <- function(d, ind) {
    m <- lm(y ~ x, data=d[ind,])
    -coef(m)[[1]]/coef(m)[[2]] # xint = -a/b
}
xint_bs <- boot(data.frame(x, y), xint, 999)
xint_bs
## 
## ORDINARY NONPARAMETRIC BOOTSTRAP
## 
## 
## Call:
## boot(data = data.frame(x, y), statistic = xint, R = 999)
## 
## 
## Bootstrap Statistics :
##       original      bias    std. error
## t1* -0.3912359 -0.02204408    0.412457

So the bootstrap estimate of the x-intercept is -0.3912359, with a standard error of 0.412457.
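As a cross-check on the bootstrap, here is what the error-propagation route mentioned earlier (the “delta method”) looks like for the x-intercept; this is a sketch I’m adding for comparison, using the coefficient covariance matrix from vcov():

ab <- coef(m)                    # (Intercept) and slope
V <- vcov(m)                     # covariance matrix of the coefficients
g <- c(-1/ab[2], ab[1]/ab[2]^2)  # gradient of -a/b with respect to (a, b)
sqrt(drop(t(g) %*% V %*% g))     # delta-method std. error of the x-intercept

The two approaches should give comparable standard errors, which is a nice sanity check on the bootstrap.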