Apache Spark

Course Overview:

This five-day hands-on training course delivers the key concepts and expertise developers need to use Apache Spark to develop high-performance parallel applications. Participants will learn how to use Spark Core for general data processing, Spark SQL to query structured data, and Spark Streaming to perform real-time processing on streaming data from a variety of sources.

Developers will also practice writing applications that use core Spark to perform ETL processing and iterative algorithms. The course covers how to work with “big data” stored in a distributed file system and how to execute Spark applications on a Hadoop cluster. After taking this course, participants will be prepared to face real-world challenges and build applications that deliver faster, better decisions and interactive analysis across a wide variety of use cases, architectures, and industries.

Course Objectives:

  • How the Apache Hadoop ecosystem fits into the data processing lifecycle
  • How data is distributed, stored, and processed in a Hadoop cluster
  • How to write, configure, and deploy Apache Spark applications on a Hadoop cluster
  • How to use the Spark shell and Spark applications to explore, process, and analyze distributed data
  • How to query data using RDDs, Spark SQL, DataFrames, and Datasets
  • How to use Spark Streaming to process a live data stream

Target Audience:

  • Anyone who knows SQL

Pre-requisites:

  • Basic understanding of distributed frameworks and any object-oriented language

Course Duration:

  • 35 hours – 5 days

Course Content:

Scala primer

  • A quick introduction to Scala
  • Labs: Getting to know Scala
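
To illustrate the level of Scala assumed after the primer, here is a minimal sketch covering immutable collections, case classes, and pattern matching; the object name and data are invented for illustration.

    // Minimal Scala warm-up: case classes, collection transformations,
    // and pattern matching. All names and figures below are illustrative.
    object ScalaPrimer {
      case class Employee(name: String, dept: String, salary: Double)

      def main(args: Array[String]): Unit = {
        val employees = List(
          Employee("Asha", "Engineering", 95000),
          Employee("Ravi", "Sales", 60000),
          Employee("Mei", "Engineering", 105000)
        )

        // map / filter / sum on an immutable collection
        val engineeringPayroll = employees
          .filter(_.dept == "Engineering")
          .map(_.salary)
          .sum
        println(s"Engineering payroll: $engineeringPayroll")

        // pattern matching on case classes
        employees.foreach {
          case Employee(name, "Sales", _) => println(s"$name works in Sales")
          case Employee(name, dept, _)    => println(s"$name works in $dept")
        }
      }
    }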

Spark Basics

  • Background and history
  • Spark and Hadoop
  • Spark concepts and architecture
  • Spark ecosystem (Core, Spark SQL, MLlib, Streaming)
  • Labs: Installing and running Spark

First Look at Spark

  • Running Spark in local mode
  • Spark web UI
  • Spark shell
  • Analyzing a dataset – part 1
  • Inspecting RDDs
  • Labs: Spark shell exploration
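
A sketch of the kind of spark-shell session this module works through, run in local mode; the input path data/sample.txt is a placeholder, not a file shipped with the course.

    // Entered at the spark-shell prompt (local mode). "data/sample.txt" is a
    // placeholder path for whatever dataset the lab uses.
    val lines = sc.textFile("data/sample.txt")   // RDD[String]

    lines.count()                                // how many lines?
    lines.first()                                // peek at the first record
    lines.take(5).foreach(println)               // sample a few records

    // While the shell is running, the application's web UI is served at
    // http://localhost:4040 and shows the jobs and stages these actions trigger.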

RDDs

  • RDD concepts
  • Partitions
  • RDD Operations / transformations
  • RDD types
  • Key-Value pair RDDs
  • MapReduce on RDD
  • Caching and persistence
  • Labs: Creating and inspecting RDDs; caching RDDs
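
As a sketch of what the RDD labs involve, the classic word-count pattern below exercises transformations, key-value pair RDDs, MapReduce-style aggregation, and caching; the input path is again a placeholder.

    val lines = sc.textFile("data/sample.txt")

    val counts = lines
      .flatMap(_.split("\\s+"))            // transformation: split lines into words
      .map(word => (word, 1))              // key-value pair RDD
      .reduceByKey(_ + _)                  // MapReduce-style aggregation per key

    counts.cache()                         // persist in memory for repeated use

    counts.take(10).foreach(println)       // action: triggers the computation
    println(counts.getNumPartitions)       // inspect how the RDD is partitioned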

Spark API programming

  • Introduction to Spark API / RDD API
  • Submitting the first program to Spark
  • Debugging / logging
  • Configuration properties
  • Labs: Programming with the Spark API; submitting jobs
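
A minimal sketch of a standalone application and how it might be submitted; the object name, paths, and spark-submit options are illustrative, not part of the course materials.

    import org.apache.spark.{SparkConf, SparkContext}

    // Built into a jar and submitted with something like:
    //   spark-submit --class WordCountApp --master local[*] wordcount.jar
    // (class name, jar name, and paths are placeholders)
    object WordCountApp {
      def main(args: Array[String]): Unit = {
        val conf = new SparkConf().setAppName("WordCountApp")
        val sc   = new SparkContext(conf)

        val counts = sc.textFile("data/sample.txt")
          .flatMap(_.split("\\s+"))
          .map(word => (word, 1))
          .reduceByKey(_ + _)

        counts.saveAsTextFile("output/word-counts")   // one part file per partition

        sc.stop()
      }
    }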

Spark SQL

  • SQL support in Spark
  • DataFrames
  • Defining tables and importing datasets
  • Querying DataFrames using SQL
  • Storage formats: JSON / Parquet
  • Labs: Creating and querying DataFrames; evaluating data formats
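
A sketch of the DataFrame and SQL workflow covered here; the file paths, view name, and column names (name, dept, salary) are assumptions made for illustration.

    import org.apache.spark.sql.SparkSession

    object SparkSqlExample {
      def main(args: Array[String]): Unit = {
        val spark = SparkSession.builder().appName("SparkSqlExample").getOrCreate()

        // Read a JSON dataset into a DataFrame and expose it to SQL
        val employees = spark.read.json("data/employees.json")
        employees.printSchema()
        employees.createOrReplaceTempView("employees")

        val highEarners = spark.sql(
          """SELECT dept, name, salary
            |FROM employees
            |WHERE salary > 80000
            |ORDER BY salary DESC""".stripMargin)
        highEarners.show()

        // Persist the result in a columnar storage format
        highEarners.write.parquet("output/high_earners.parquet")

        spark.stop()
      }
    }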

MLlib

  • MLlib intro
  • MLlib algorithms
  • Labs: Writing MLlib applications
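
A sketch of a small clustering job using the DataFrame-based ML API; the tiny in-memory dataset is invented for illustration.

    import org.apache.spark.ml.clustering.KMeans
    import org.apache.spark.ml.linalg.Vectors
    import org.apache.spark.sql.SparkSession

    object KMeansExample {
      def main(args: Array[String]): Unit = {
        val spark = SparkSession.builder().appName("KMeansExample").getOrCreate()
        import spark.implicits._

        // Two obvious clusters of 2-D points, purely for illustration
        val data = Seq(
          Vectors.dense(0.0, 0.0), Vectors.dense(0.1, 0.1),
          Vectors.dense(9.0, 9.0), Vectors.dense(9.2, 9.1)
        ).map(Tuple1.apply).toDF("features")

        val model = new KMeans().setK(2).setSeed(1L).fit(data)
        model.clusterCenters.foreach(println)   // the two learned centres

        spark.stop()
      }
    }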

GraphX

  • GraphX library overview
  • GraphX APIs
  • Labs: Processing graph data using Spark
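
A sketch of building a small property graph and running PageRank with GraphX; the vertices and edges are invented for illustration.

    import org.apache.spark.{SparkConf, SparkContext}
    import org.apache.spark.graphx.{Edge, Graph}

    object GraphXExample {
      def main(args: Array[String]): Unit = {
        val sc = new SparkContext(new SparkConf().setAppName("GraphXExample"))

        val users = sc.parallelize(Seq(
          (1L, "alice"), (2L, "bob"), (3L, "carol")))

        val follows = sc.parallelize(Seq(
          Edge(1L, 2L, "follows"),
          Edge(2L, 3L, "follows"),
          Edge(3L, 1L, "follows")))

        val graph = Graph(users, follows)

        // Run PageRank until the scores converge to the given tolerance
        val ranks = graph.pageRank(0.001).vertices
        ranks.join(users).collect().foreach {
          case (_, (rank, name)) => println(f"$name%-6s $rank%.3f")
        }

        sc.stop()
      }
    }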

Spark Streaming

  • Streaming overview
  • Evaluating Streaming platforms
  • Streaming operations
  • Sliding window operations
  • Labs: Writing Spark Streaming applications
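
A sketch of a DStream job with a sliding window; the text source on localhost:9999 (e.g. a local "nc -lk 9999") and the batch and window sizes are assumptions for illustration.

    import org.apache.spark.SparkConf
    import org.apache.spark.streaming.{Seconds, StreamingContext}

    object StreamingWordCount {
      def main(args: Array[String]): Unit = {
        val conf = new SparkConf().setAppName("StreamingWordCount")
        val ssc  = new StreamingContext(conf, Seconds(5))          // 5-second micro-batches

        val lines = ssc.socketTextStream("localhost", 9999)

        // Count words over the last 30 seconds, recomputed every 10 seconds
        val windowedCounts = lines
          .flatMap(_.split("\\s+"))
          .map(word => (word, 1))
          .reduceByKeyAndWindow((a: Int, b: Int) => a + b, Seconds(30), Seconds(10))

        windowedCounts.print()

        ssc.start()
        ssc.awaitTermination()
      }
    }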

Spark and Hadoop

  • Hadoop Intro (HDFS / YARN)
  • Hadoop + Spark architecture
  • Running Spark on Hadoop YARN
  • Processing HDFS files using Spark
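
A sketch of reading HDFS data from a Spark job submitted to YARN; the paths, resource sizes, and spark-submit options are placeholders.

    import org.apache.spark.sql.SparkSession

    // Submitted to the cluster with something like:
    //   spark-submit --class HdfsExample --master yarn --deploy-mode cluster \
    //     --num-executors 4 --executor-memory 4g hdfs-example.jar
    object HdfsExample {
      def main(args: Array[String]): Unit = {
        val spark = SparkSession.builder().appName("HdfsExample").getOrCreate()

        // Relative paths resolve against the cluster's default filesystem (HDFS)
        // under YARN; a full URI such as hdfs://namenode:8020/... also works.
        val logs = spark.read.textFile("hdfs:///data/weblogs/*.log")

        val errorCount = logs.filter(_.contains("ERROR")).count()
        println(s"Error lines: $errorCount")

        spark.stop()
      }
    }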

Spark Performance and Tuning

  • Broadcast variables
  • Accumulators
  • Memory management & caching
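
A sketch of the two shared-variable mechanisms named above; the lookup table and input codes are invented for illustration.

    import org.apache.spark.{SparkConf, SparkContext}

    object SharedVariablesExample {
      def main(args: Array[String]): Unit = {
        val sc = new SparkContext(new SparkConf().setAppName("SharedVariablesExample"))

        // Broadcast: ship a read-only lookup table to each executor once,
        // instead of serialising it with every task.
        val countryNames = sc.broadcast(Map("IN" -> "India", "US" -> "United States"))

        // Accumulator: tasks add to it, the driver reads it after an action.
        val unknownCodes = sc.longAccumulator("unknown country codes")

        val codes = sc.parallelize(Seq("IN", "US", "XX", "IN"))

        val resolved = codes.map { code =>
          countryNames.value.get(code) match {
            case Some(name) => name
            case None       => unknownCodes.add(1); "unknown"
          }
        }

        resolved.collect().foreach(println)
        println(s"Unknown codes seen: ${unknownCodes.value}")

        sc.stop()
      }
    }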

Spark Operations

  • Deploying Spark in production
  • Sample deployment templates
  • Configurations
  • Monitoring
  • Troubleshooting
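
A sketch of setting commonly tuned properties and inspecting the effective configuration; in production these values normally live in spark-defaults.conf or on the spark-submit command line, and everything below is illustrative.

    import org.apache.spark.sql.SparkSession

    object ConfiguredApp {
      def main(args: Array[String]): Unit = {
        val spark = SparkSession.builder()
          .appName("ConfiguredApp")
          .config("spark.sql.shuffle.partitions", "200")    // shuffle parallelism
          .config("spark.eventLog.enabled", "true")         // feeds the history server
          .config("spark.serializer", "org.apache.spark.serializer.KryoSerializer")
          .getOrCreate()

        // Dump the effective configuration when troubleshooting a deployment
        spark.conf.getAll.toSeq.sortBy(_._1).foreach { case (k, v) => println(s"$k = $v") }

        spark.stop()
      }
    }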

 

Course Customization Options

To request customized training based on this course, please contact us to arrange it.
