Apache Spark 2.0: What You Need to Know

Datetime: 2016-08-23 | Topic: Spark

Since its original launch in 2009, Apache Spark™ has made phenomenal strides, driven in large part by a passionate open-source community. That community has now come together to offer Apache Spark™ 2.0 — and with this latest release, the codebase takes a huge leap forward.

To give you a sense of the scope and scale of this release —

By our latest count, Apache Spark™ 2.0 comprises 2,590 JIRAs (new features and bug fixes) from 309 contributors worldwide. Here at the Spark Technology Center, we keep very detailed metrics on each release. For example, did you know that the average lifecycle of a JIRA (from creation to resolution) in the 2.0 release was 63 days? And do you know how that compares to the 1.6 release?

Christian Kadner of the Spark Tech Center team digs deep into the Git logs and JIRA metrics and will share his findings soon. In fact, over the next few weeks, the STC team will share their analysis of many of the significant features of Apache Spark™ 2.0, along with an outlook on each.

The Spark Technology Center focuses on expanding Spark's core technology to make it enterprise- and cloud-ready — with the aim of accelerating the business value of Spark and driving intelligence into business applications. With our growing pool of contributors (50 team members worldwide, including two committers), we've contributed over 422 commits to Spark 2.0 across Spark Core, SparkR, SQL, MLlib, Streaming, PySpark, and more. You can always see the latest at http://jiras.spark.tc.

All this amounts to over 18,600 lines of new code in the 2.0 release. Our largest contribution is in the area of Spark SQL with over 10,200 lines of new code, followed by Machine Learning (Spark ML and PySpark) with over 6,900 lines of new code.

Here are some of the Spark SQL features to which the Spark Technology Center made major contributions; a short code sketch follows the list, and detailed blog posts on each are coming.

  • Comprehensive native SQL parser
  • Native support for DDL commands
  • Native view support with SQL generation
  • Enhanced Catalyst Analyzer/Optimizer
  • Native support for bucketed tables
  • Enhanced error handling and test coverage
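
To make the native SQL work concrete, here is a minimal sketch against the Spark 2.0 API. The table and column names (events, user_id, action) are hypothetical, invented only for illustration:

```scala
import org.apache.spark.sql.SparkSession

// Illustrative sketch only: the names below are ours, not from the release.
val spark = SparkSession.builder()
  .appName("NativeSqlSketch")
  .enableHiveSupport() // persistent tables require Hive support on the classpath
  .getOrCreate()

// DDL statements are now parsed and executed natively by Spark's
// own SQL parser, with no fallback to the Hive parser.
spark.sql("CREATE TABLE events (user_id INT, action STRING) USING parquet")

// Views are natively supported; Spark generates the canonical SQL
// text that backs the view definition.
spark.sql("CREATE VIEW logins AS SELECT user_id FROM events WHERE action = 'login'")

// Bucketed tables can be produced through the DataFrameWriter API.
spark.read.table("events")
  .write
  .bucketBy(8, "user_id")
  .saveAsTable("events_bucketed")
```

The bucketBy call on DataFrameWriter is the programmatic route to the same bucketed-table layout listed above; bucketing lets joins and aggregations on the bucketing column avoid a full shuffle.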

We are also actively involved in the following features (a brief sketch follows the list):

  • New SparkSession replacing the SQLContext
  • Whole-stage code generation
  • DataFrame/Dataset API 
  • Additional SQL:2003 compliance support
  • Subquery enhancements
  • Vectorized Parquet decoder
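
As a brief illustration of the first three items, here is a hedged sketch (class and variable names are ours, not from the release) of the new unified SparkSession entry point and the typed Dataset API. Calling explain() also shows which operators were fused by whole-stage code generation:

```scala
import org.apache.spark.sql.SparkSession

// Illustrative sketch only. In 2.0 the single SparkSession entry
// point replaces the separate SQLContext and HiveContext.
val spark = SparkSession.builder().appName("Spark20Sketch").getOrCreate()
import spark.implicits._

// The Dataset API adds typed, compile-time-checked transformations
// on top of DataFrames (easiest to try in spark-shell).
case class Person(name: String, age: Int)
val people = Seq(Person("Ann", 34), Person("Bob", 28)).toDS()
val adults = people.filter(_.age >= 30)

// Operators fused by whole-stage code generation appear with a
// leading asterisk in the printed physical plan.
adults.explain()
```

Whole-stage code generation collapses a chain of operators into a single generated function, avoiding the per-row virtual calls of the old Volcano-style interpreter; it is a major source of the 2.0 performance gains.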

Check out the Apache Spark 2.0 release notes.

We are already hearing positive feedback on the performance improvements of 2.0, and excitement about the new capabilities. And here at the Spark Technology Center, we are continuing on our journey to make Apache Spark™ the Analytics Operating System.

Sincere acknowledgements to the extended Spark Technology Center team for pulling together the content, editing, and reviews of this blog.

Next: in-depth posts on Apache Spark 2.0, including Berni Schiefer on Spark SQL performance, and Nick Pentreath on what's next in Machine Learning.

Register for Spark Summit EU
