35

Instead of starting to code in Matlab, I recently started learning R, mainly because it is open source. I currently work in the data mining and machine learning field. I have found many machine learning algorithms implemented in R, and I am still exploring the different packages.

I have a quick question: how do R and Matlab compare for data mining applications, in terms of popularity, pros and cons, and industry and academic acceptance? Which one would you choose, and why?

I have gone through various Matlab-vs-R comparisons on various metrics, but I am specifically interested in their applicability to data mining and ML. Since both languages are fairly new to me, I was wondering whether R would be a good choice.

I appreciate any suggestions.

smci
  • 32,567
  • 20
  • 113
  • 146
iinception
  • 1,945
  • 2
  • 21
  • 19
  • See: http://stackoverflow.com/questions/3125527/what-are-the-advantages-disadvantages-between-r-and-matlab-with-respect-to-machin/ – Matti Pastell Jan 27 '11 at 04:53

8 Answers

61

For the past three years or so, I have used R daily, and the largest portion of that daily use is spent on Machine Learning/Data Mining problems.

I was an exclusive Matlab user while at university; at the time I thought it was an excellent platform and set of tools. I am sure it is today as well.

The Neural Network Toolbox, the Optimization Toolbox, Statistics Toolbox, and Curve Fitting Toolbox are each highly desirable (if not essential) for someone using MATLAB for ML/Data Mining work, yet they are all separate from the base MATLAB environment--in other words, they have to be purchased separately.

My Top 5 list for Learning ML/Data Mining in R:

  • The arules package family

This refers to a couple of things. First, a group of R packages whose names all begin with arules (available from CRAN); you can find the complete list (arules, arulesViz, etc.) on the project homepage. Second, all of these packages are based on a data-mining technique known as Market Basket Analysis, or alternatively Association Rules. In many respects, this family of algorithms is the essence of data mining: exhaustively traverse large transaction databases and find above-average associations or correlations among the fields (variables or features) in those databases. In practice, you connect them to a data source and let them run overnight. The central R package in the set mentioned above is called arules; on the CRAN package page for arules, you will find links to a couple of excellent secondary sources (vignettes, in R's lexicon) on the arules package and on the Association Rules technique in general.
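
For a flavor of the API, here is a minimal sketch using the Groceries data set that ships with arules (the support and confidence thresholds are arbitrary illustrative values):

    # Minimal sketch: mine association rules from the bundled Groceries data
    library(arules)
    data("Groceries")                          # a 'transactions' object
    rules <- apriori(Groceries,
                     parameter = list(supp = 0.01, conf = 0.5))
    inspect(head(sort(rules, by = "lift"), 3)) # top 3 rules by lift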

  • The Elements of Statistical Learning (ESL)

The most current edition of this book is available in digital form for free. Likewise, at the book's website (linked just above), all data sets used in ESL are available for free download. (As an aside, I have the free digital version; I also purchased the hardback version from BN.com; all of the color plots in the digital version are reproduced in the hardbound version.) ESL contains thorough introductions to at least one exemplar from most of the major ML rubrics--e.g., neural networks, SVM, KNN; unsupervised techniques (LDA, PCA, MDS, SOM, clustering); numerous flavors of regression; CART; Bayesian techniques; as well as model aggregation techniques (Boosting, Bagging) and model tuning (regularization). Finally, get the R package that accompanies the book from CRAN, which will save you the trouble of having to download and enter the data sets.
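
As a hedged sketch (the companion package was published on CRAN under the name ElemStatLearn; if it has since been archived, install it from the CRAN archives), loading one of the book's data sets looks like this:

    # Sketch: pull a data set used in ESL from its companion package
    # (package name ElemStatLearn assumed; see the book's CRAN link)
    library(ElemStatLearn)
    data(prostate)                                  # ESL's regression example
    fit <- lm(lpsa ~ lcavol + lweight, data = prostate)
    summary(fit)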

  • CRAN Task View: Machine Learning

The 3,500+ packages available for R are divided up by domain into about 30 package families, or 'Task Views'. Machine Learning is one of these families. The Machine Learning Task View contains about 50 packages. Some of these packages are part of the core distribution, including e1071 (a sprawling ML package that includes working code for quite a few of the usual ML categories).
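
For instance, a minimal sketch of e1071's SVM on R's built-in iris data:

    # Sketch: one of e1071's workhorses, a support vector machine
    library(e1071)
    model <- svm(Species ~ ., data = iris)   # classification SVM
    table(predicted = predict(model, iris),
          actual    = iris$Species)          # in-sample confusion matrix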

With particular focus on the posts tagged with Predictive Analytics

A thorough study of the code would, by itself, be an excellent introduction to ML in R.

And one final resource that I think is excellent, but which didn't make the top 5:

posted at the blog A Beautiful WWW

doug
  • 69,080
  • 24
  • 165
  • 199
  • There's no question that MATLAB is not cheap, at least for most people. In my work, I try to avoid using the add-on toolboxes from the MathWorks. As you say, they present an additional cost, but they also limit portability. One of the great things about MATLAB is what comes in the base product: no special libraries are needed for image loading, for instance, so my code will run on anyone's MATLAB. – Predictor Jan 27 '11 at 11:10
  • Oh, my other point was going to be that there is a great deal of statistics and data mining software available free, from the sizable on-line MATLAB community. See MATLAB Central's File Exchange, for example. Universities are another good source of MATLAB code. – Predictor Jan 27 '11 at 11:12
  • Thanks for the answer. I really appreciate it. I will take a look at the book you mentioned. – iinception Jan 27 '11 at 16:21
  • 3
    In addition, the people writing the book not only have had their methods implemented in R, but are contributors themselves! – Jay Jan 27 '11 at 20:35
  • @Predictor, the great thing about R is that it will always run on every machine, no matter how many extension packages you install. – Paul Hiemstra Nov 17 '11 at 22:32
  • MATLAB will likewise run on any machine, and I believe that all toolboxes (from the MATLAB vendor) are supported on all platforms. The portability limitation of any add-on (regardless of the base software) is that the add-on will be needed by any programmer you want to pass your code to, since your code will rely on it. – Predictor Dec 19 '11 at 00:48
9

Please look at the CRAN Task Views and in particular at the CRAN Task View on Machine Learning and Statistical Learning which summarises this nicely.
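
For instance, the ctv package exists precisely to automate this; a minimal sketch:

    # Sketch: install every package in the Machine Learning Task View
    install.packages("ctv")
    library(ctv)
    install.views("MachineLearning")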

Dirk Eddelbuettel
  • 360,940
  • 56
  • 644
  • 725
2

Both Matlab and R are good if you are doing matrix-heavy operations, because they can use highly optimized low-level code (BLAS libraries and the like) for this.

However, there is more to data mining than just crunching matrices. A lot of people totally neglect the whole data organization aspect of data mining (as opposed to, say, plain machine learning).

And once you get to data organization, R and Matlab are a pain. Try implementing an R*-tree in R or Matlab to take an O(n^2) algorithm down to O(n log n) runtime. First of all, it goes totally against the way R and Matlab are designed (use bulk math operations wherever possible); secondly, it will kill your performance. Interpreted R code, for example, seems to run at around 50% of the speed of C code (try R's built-in k-means vs. flexclust's k-means), and the BLAS libraries are optimized to an insane level, exploiting cache sizes, data alignment, and advanced CPU features. If you are adventurous, try implementing a manual matrix multiplication in R or Matlab and benchmark it against the native one.
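
To make that last experiment concrete, here is a rough sketch of the benchmark (the helper name naive_mm is mine; absolute timings will vary with your machine and BLAS build):

    # Sketch: naive triple-loop multiply vs. R's BLAS-backed %*%
    n <- 200
    A <- matrix(rnorm(n * n), n)
    B <- matrix(rnorm(n * n), n)
    naive_mm <- function(A, B) {
      C <- matrix(0, nrow(A), ncol(B))
      for (i in seq_len(nrow(A)))
        for (j in seq_len(ncol(B)))
          C[i, j] <- sum(A[i, ] * B[, j])
      C
    }
    system.time(naive_mm(A, B))   # interpreted loops
    system.time(A %*% B)          # optimized native call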

Don't get me wrong. There is a lot of stuff where R and Matlab are just elegant and excellent for prototyping. You can solve a lot of problems in just 10 lines of code and get decent performance out of it. Writing the same thing by hand would take hundreds of lines and probably be 10x slower. But sometimes you can save a whole level of algorithmic complexity, which for large data sets does beat the optimized matrix operations of R and Matlab.

If you want to scale up to "Hadoop size" in the long run, you will have to think about data layout and organization, too, unless all you need is a linear scan over the data. But then again, you could also just sample!

Has QUIT--Anony-Mousse
  • 76,138
  • 12
  • 138
  • 194
1

Yesterday I found two new books about data mining. The progress of data mining technology and its broad popularity have created the need for a comprehensive text on the subject, and this book series, entitled "Data Mining", addresses that need by presenting in-depth descriptions of novel mining algorithms and many useful applications, along with useful hints and strategies for solving problems. The books are "New Fundamental Technologies in Data Mining" (http://www.intechopen.com/books/show/title/new-fundamental-technologies-in-data-mining) and "Knowledge-Oriented Applications in Data Mining" (http://www.intechopen.com/books/show/title/knowledge-oriented-applications-in-data-mining). These are open-access books, so you can download them for free or read them on an online reading platform, as I do. Cheers!

Jonny
  • 11
  • 1
1

We should not forget the origins of these two pieces of software: scientific computing and signal processing led to Matlab, while statistics led to R.

I used Matlab a lot at university, since we had it installed on Unix and open to all students. However, the price of Matlab is very high, especially compared to free R. If your main focus is not matrix computation and signal processing, R should serve your needs well.

Leo5188
  • 1,967
  • 2
  • 17
  • 21
1

I think it also depends on which field of study you are in. I know of people in coastal research who use a lot of Matlab. Using R in such a group would make your life more difficult: if a colleague has solved a problem in Matlab, you can't reuse the solution.

Paul Hiemstra
  • 59,984
  • 12
  • 142
  • 149
0

I would also look at the capabilities of each when dealing with large amounts of data. I know that R can have problems with this, which might be restrictive if you are used to an iterative data mining process, for example examining multiple models concurrently. I don't know whether MATLAB has a similar data limitation.
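
As one hedged illustration: base R keeps objects entirely in RAM, and file-backed structures are a common workaround; the ff package is one of several options (bigmemory is another), so treat this sketch as illustrative only:

    # Sketch: a file-backed vector via ff lives on disk, not in RAM
    library(ff)                               # assumes ff is installed
    x <- ff(vmode = "double", length = 1e8)   # ~800 MB, backed by a file
    x[1:5] <- rnorm(5)                        # indexed access reads/writes chunks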

Nightfirecat
  • 11,432
  • 6
  • 35
  • 51
-1

I admit to favoring MATLAB for data mining problems, and I give some of my reasoning here:

Why MATLAB for Data Mining?

I will admit to only a passing familiarity with R/S-Plus, but I'll make the following observations:

  1. R definitely has more of a statistical focus than MATLAB. I prefer building my own tools in MATLAB, so that I know exactly what they're doing, and I can customize them, but this is more of a necessity in MATLAB than it would be in R.

  2. Code for new statistical techniques (spatial statistics, robust statistics, etc.) often appears early in S-Plus (I assume that this carries over to R, at least somewhat).

  3. Some years ago, I found the commercial version of R, S-Plus to have an extremely limited capacity for data. I cannot say what the state of R/S-Plus is today, but you may want to check if your data will fit into such tools comfortably.

Predictor
  • 984
  • 6
  • 9
  • 3
    S-Plus is not "commercial version of R". – Marek Jan 27 '11 at 12:09
  • 7
    Typically new statistical techniques were written in R, and __then__ ported to S-Plus. – hadley Jan 27 '11 at 14:44
  • Marek, can you comment on the data capacity of R? – Predictor Jan 27 '11 at 17:53
  • 2
    R has a number of ways to deal with data and different data structures. The main methods are in-memory, but one can read lines/chunks, work directly with database interfaces, use a number of different file types, and use HPC structures for "big" data. – Jay Jan 27 '11 at 18:45
  • I like R because it comes with infinitely more "pre-baked" than Matlab does in my personal use case, and like Matlab, I can extend it in the ways I like with the language. – Jay Jan 27 '11 at 18:47
  • Could anyone give a more specific idea as to the data capacity of R? For instance, assuming one is building models, how many observations and candidate predictors might we be able to use? – Predictor Jan 29 '11 at 10:25
  • 1
    That depends a great deal on what kind of "columns" you have, what you are doing with the data, your hardware, etc. To give an extremely rough answer, I have not had problems working with 1e6 - 1e7 cases with 10's to 100+ variables... but I'm not sure how meaningful that answer is. R is free; give it a try. Being familiar with Matlab, it should be okay. There is even a Matlab emulation package for some of the more commonly used syntax. – Jay Jan 31 '11 at 19:17
  • R's data capacity is no less nor any greater than (well, okay, maybe greater than) any other language. As @Dirk said, just familiarize yourself with the large data packages. – Iterator Sep 08 '11 at 13:53
  • I worked with a 35 GB dataset, trying to estimate a covariance matrix from it. I ended up reading the data in chunks. So I think that the ability to work with large datasets is also a matter of programming correctly. And with regard to S-Plus, they are adopting the R packaging system; I think that says enough about who is leading in terms of statistical techniques :). – Paul Hiemstra Nov 17 '11 at 22:38