
I want to run an event study to analyse how several events affected the stock returns of several companies, using the estudy2 package. Unfortunately, the companies have different event dates. I would like to know whether it is possible to modify the approach to handle this.

Furthermore, I would like to look not only at American but also at German companies. Is it sufficient to change the `same_regressor_for_all` argument from `TRUE` to `FALSE`?

I prepared a sample, shown below, with 3 American and 2 German banks.

Attention!!! The date index for Germany is different from the one for the US, because the 17th of January 2000 was a public holiday in the US (Martin Luther King Day). I chose this period on purpose, because I want to run the event study on exactly these date indices.

So, for example, for Deutsche Bank the event took place on 01.02.2000, and I use the 2 trading days before and the 2 days after. For Goldman Sachs the event occurred on 03.02.2000, so I observe the 2 days before and after that date.

Unfortunately I don't know how to modify the approach to do this.

The event dates are:

GS:  03.02.2000
MS:  02.02.2000
JPM: 04.02.2000

DBK: 01.02.2000
CBK: 03.02.2000

These are dummy values.
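Since every company has its own date, it might help to keep the event dates in a small lookup table instead of hard-coding a single date later on. This is just a sketch; the `events` data frame and its column names are my own, not part of estudy2:

```r
# Hypothetical lookup table for the per-company event dates listed above
events <- data.frame(
  ticker     = c("GS", "MS", "JPM", "DBK", "CBK"),
  market     = c("US", "US", "US", "GER", "GER"),
  event_date = as.Date(c("2000-02-03", "2000-02-02", "2000-02-04",
                         "2000-02-01", "2000-02-03"))
)

# Look up a single company's event date:
events$event_date[events$ticker == "DBK"]   # 2000-02-01
```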

Part 1. How I built the data:

library(dplyr)     # %>%, mutate(), filter() in Part 2
library(estudy2)   # apply_market_model(), parametric_tests(), nonparametric_tests()

GER_date_R <- c("2000-1-3","2000-1-4","2000-1-5","2000-1-6","2000-1-7",
       "2000-1-10","2000-1-11","2000-1-12","2000-1-13","2000-1-14",
       "2000-1-17","2000-1-18","2000-1-19","2000-1-20","2000-1-21",
       "2000-1-24","2000-1-25","2000-1-26","2000-1-27","2000-1-28",
       "2000-1-31","2000-2-1","2000-2-2","2000-2-3","2000-2-4")

# GER_date_R is already a character vector, so as.Date() can be applied directly
GER_date_R <- as.Date(GER_date_R, format = "%Y-%m-%d")
str(GER_date_R)

DBK.DE <- c(0.012340327,-0.022039749,-0.028251472,0.014225388,0.031966812,
        -0.002190911,-0.006050396,0.008241307,-0.012387681,0.008275408,
        0.00548048,-0.006303564,0.013112874,-0.02333781,0.000277109,
        -0.017083628,-0.044775292,-0.008604375,-0.001786219,-0.019593881,
        -0.018436795,0.015385628,0.004569298,0.0030374,0.002420763)

CBK.DE <- c(-0.001671374,-0.051582175,-0.013036244,0.049699503,0.029853558,
        -0.026102911,0.003496603,-0.030546206,0.035946042,0.006561143,
        0.026171256,-0.014937272,-0.002944216,-0.007694501,-0.019681667,
        -0.023420784,-0.027674042,0.013428735,-0.003278454,-0.008367162,
        -0.019802437,0.024375547,0.043283269,0.007737291,-0.001567129)

return_GER <- data.frame(date = GER_date_R, DBK = DBK.DE, CBK = CBK.DE)
return_GER
str(return_GER)

##############################################
US_date_R <- c("2000-1-3","2000-1-4","2000-1-5","2000-1-6","2000-1-7",
           "2000-1-10","2000-1-11","2000-1-12","2000-1-13","2000-1-14",
           "2000-1-18","2000-1-19","2000-1-20","2000-1-21","2000-1-24",
           "2000-1-25","2000-1-26","2000-1-27","2000-1-28","2000-1-31",
           "2000-2-1","2000-2-2","2000-2-3","2000-2-4","2000-2-7")

# US_date_R is already a character vector, so as.Date() can be applied directly
US_date_R <- as.Date(US_date_R, format = "%Y-%m-%d")
str(US_date_R)

GS<- c(-0.064406221,-0.065057545,-0.047960041,0.041898988,0.003792431,
   0.021715491,0.00517224,-0.00517224,0.001480953,0.013225728,
   0.014492779,0.007882152,-0.017361031,-0.0029133,-0.047041082,
   0.025661536,0.050846454,0.008462544,-0.022728349,0.051796241,
   -0.004101312,-0.001370574,-0.065179089,-0.008085251,0.01246806)

JPM <- c(-0.063948729,-0.019354503,-0.006191619,0.01409699,0.018206315,
     -0.017331741,-0.023883937,0.006246913,0.015011815,0.035302668,
     -0.039694822,0.043073244,-0.000843589,-0.015306642,0.000856596,
     0.022016324,0.049826047,0.02828156,-0.050844873,0.050844873,
     0.037252783,-0.016554029,0.009814604,-0.017430814,0.005336916)

MS <- c(-0.055820479,-0.076961086,-0.03718282,0.019018842,0.030586737,
    0.037934983,0,0.00762686,0.006153419,0.036148567,
    -0.026286968,0.014382269,-0.006469604,-0.017302155,-0.034552748,
    0.011168079,0.026210827,0.016791021,-0.0649697,0.045352521,
    -0.034552748,0.017425449,0.013346586,-0.00474646,-0.008600126)

return_US <- data.frame(date = US_date_R, GS, JPM, MS)
return_US
str(return_US)

##############################################
DAX <- c(-0.03025716,-0.024564608,-0.012969888,-0.00418432,0.046182439,
     0.021094463,-0.004960651,0.00312373,0.006225498,0.030752959,
     0.011873611,-0.026067979,0.002671711,0.003044296,-0.017002419,
     -0.008726935,-0.017807688,0.023185575,0.022243444,-0.008388821,
     -0.033235209,0.030948593,0.017084754,0.025102094,0.012210557)

return_DAX <- data.frame(date = GER_date_R, DAX = DAX)
str(return_DAX)
return_DAX

##############################################
SP500<- c(-0.009594995,-0.039099176,0.001920338,0.000955222,0.02672995,
      0.011127825,-0.013148577,-0.00439602,0.012096245,0.010614763,
      -0.006855517,0.000522156,-0.007120613,-0.002916568,-0.028022584,
      0.006046484,-0.004221619,-0.003946204,-0.027840813,0.024904851,
      0.010571741,-0.000113564,0.011185348,-0.000421133,-9.12761E-05)

return_SP500 <- data.frame(date = US_date_R, SP500 = SP500)
str(return_SP500)
return_SP500

##############################################

This first part just shows how I created the data. The data is downloaded from Yahoo, but the principle is the same.

Part 2: The estimation

return_SP500

try_SP <- return_SP500[c(1:25),]
try_SP
try_SP <- try_SP %>%
  mutate(date_index  = 1:n(),
         event_index = max(ifelse(as.Date("2000-02-03") == date, date_index, 0)),
         event_time  = date_index - event_index)

try_SP

try_SP_2 <- try_SP %>%
  filter(event_time >= -2 & event_time <= 2)

try_SP_2

min(try_SP_2$date)
max(try_SP_2$date)
##############################################   
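The window logic above hard-codes one event date (2000-02-03). To reuse it for each company's own date, the same idea can be wrapped in a small base-R helper; `event_window` is my own name, not an estudy2 function:

```r
# Hypothetical helper: return the rows of a return data frame that fall
# within +/- `width` trading days around a given event date
event_window <- function(returns, event_date, width = 2) {
  idx <- which(returns$date == as.Date(event_date))
  stopifnot(length(idx) == 1)   # the event date must be a trading day in the data
  returns[(idx - width):(idx + width), , drop = FALSE]
}

# Tiny self-contained check (same idea as try_SP_2 above)
d <- data.frame(date = as.Date("2000-01-03") + 0:9, r = rnorm(10))
w <- event_window(d, "2000-01-07")
range(w$date)   # "2000-01-05" "2000-01-09"
```

With the real data this would be called once per company, e.g. `event_window(return_US, "2000-02-03")` for GS.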


try_return <- apply_market_model(rates = return_US,
                                 regressor = return_SP500,
                                 market_model = "sim",
                                 same_regressor_for_all = TRUE,
                                 estimation_method = "ols",
                                 estimation_start = min(try_SP$date),
                                 estimation_end = min(try_SP_2$date) - 2)

##########################################    

parametric_tests(list_of_returns = try_return,
                 event_start = min(try_SP_2$date),
                 event_end = max(try_SP_2$date))



###############################################################      
nonparametric_tests(list_of_returns = try_return,
                    event_start = min(try_SP_2$date),
                    event_end = max(try_SP_2$date))
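As a possible workaround for the different event dates (my own idea, not a documented estudy2 feature): run the study once per company, each time passing that company's own event window. The sketch below only computes the per-company window bounds in base R; inside such a loop these bounds would go into `event_start`/`event_end` (and analogously into `estimation_start`/`estimation_end`) exactly as in Part 2, with `rates` subset to that company's column:

```r
# Per-company event windows (US banks; dates are a stand-in subset of US_date_R)
event_dates <- c(GS = "2000-02-03", MS = "2000-02-02", JPM = "2000-02-04")
us_dates <- as.Date(c("2000-01-28", "2000-01-31", "2000-02-01", "2000-02-02",
                      "2000-02-03", "2000-02-04", "2000-02-07", "2000-02-08"))

windows <- lapply(names(event_dates), function(tk) {
  idx <- which(us_dates == as.Date(event_dates[[tk]]))
  list(ticker      = tk,
       event_start = us_dates[idx - 2],   # would become event_start
       event_end   = us_dates[idx + 2])   # would become event_end
})

windows[[1]]   # GS: window from 2000-02-01 to 2000-02-07
```

Note that with the sample data above, which ends on 2000-02-07, JPM's +2 day (event on 2000-02-04) would fall outside the data, so the real series would need one more trading day at the end.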
Greg
  • For german-vs-american, is there something that suggests the analysis is specific to american companies (and does not work/apply with german-company data)? It would help others help you if you provided a reproducible example, for instance showing how you formed `try_all` and `try_index`. – r2evans Jan 27 '19 at 18:02
  • I am analyzing how the company specific events affect their stock price. The idea is less about analyzing american vs. german companies but about sectors. I am interested if there is a bigger effect in lets say the financial sector than in the automotive sector. – Greg Jan 29 '19 at 19:06
  • Okay. Please try to make this reproducible by including sample data (e.g., `dput(head(x))`), and include if you are using non-base packages (other than an inferred `dplyr`). Refs: https://stackoverflow.com/questions/5963269, https://stackoverflow.com/help/mcve, and https://stackoverflow.com/tags/r/info. – r2evans Jan 29 '19 at 19:11
  • Hi, thanks for your support. Is it possible to upload data here in stackoverflow? I already downloaded some data from yahoo to prepare an example to test. Of course I have to modify it, but this shouldn't take too long. – Greg Jan 29 '19 at 19:20
  • "Upload data"? Yes, but not in bulk. If at all possible, please provide unambiguous data, *just enough* to demonstrate the issue/intent. If that's 2 rows and 3 columns or 20 rows and 6 columns, that's preferred, and "unambiguous" means something like the output from `dput(head(x,n=10))` (dput is important). I know it can be hard sometimes to reduce a problem in this way. Some questions "get away" with large-ish amounts of data, but the bottom lines: (1) unambiguous; (2) easy for us to use; (3) once it becomes onerous/confusing, people stop "playing" with the data/code. – r2evans Jan 29 '19 at 22:31
  • First of all, thanks a lot for your support. I spend about 4 hours to prepare a proper example. I really hope you understand what I want to do. I tested the formulas. They should work. – Greg Jan 30 '19 at 20:42
  • I know that developing a completely self-contained question can be time-consuming, so I appreciate your efforts on that. I've often found that while forming the question, I find my mistake and/or a good-enough work-around. – r2evans Jan 30 '19 at 22:14
  • Hey, did anybody find a solution how to modify the approach, so I can apply it on my data? – Greg Feb 03 '19 at 12:39
  • As I'm playing with the data, Greg, it becomes clear to me that I don't know what `estudy2` is doing under the hood, so I cannot comment as to the *logical* portion of your question. I was hoping it would become apparent to me due to an error, warning, or something else "obvious" during execution. I don't know if the [package](https://github.com/irudnyts/estudy2) maintainer is available, I suspect he could answer this much faster than I could. Further, since this is about the statistical methods employed, it is likely more appropriate for [Cross Validated](http://stats.stackexchange.com/). – r2evans Feb 03 '19 at 17:40
  • FYI: the first 82 lines of your code block for forming the data can be replaced with the output from four commands (after forming the variables in use): `dput(return_DAX); dput(return_GER); dput(return_US); dput(return_SP500)`: once formed, the forming variables are otherwise unnecessary. And please include your packages: `library(dplyr); library(estudy2)`. – r2evans Feb 03 '19 at 17:45
