Apache Spark and Scala Certification Training

[vc_row full_width=”stretch_row” equal_height=”yes” css=”.vc_custom_1561982597559{background-color: #f6f6f7 !important;}”][vc_column width=”1/2″][vc_tta_accordion color=”peacoc” active_section=”1″][vc_tta_section title=”Introduction to Big Data Hadoop and Spark” tab_id=”1559286383409-ab730398-6c03″][vc_column_text]

Learning Objectives:

     Understand Big Data and its components such as HDFS. You will learn about the Hadoop cluster architecture, get an introduction to Spark, and understand the difference between batch processing and real-time processing.
Topics:
  • What is Big Data?
  • Big Data Customer Scenarios
  • Limitations and Solutions of Existing Data Analytics Architecture with Uber Use Case
  • How Hadoop Solves the Big Data Problem?
  • What is Hadoop?
  • Hadoop’s Key Characteristics
  • Hadoop Ecosystem and HDFS
  • Hadoop Core Components
  • Rack Awareness and Block Replication
  • YARN and its Advantage
  • Hadoop Cluster and its Architecture
  • Hadoop: Different Cluster Modes

[/vc_column_text][/vc_tta_section][vc_tta_section title=”Introduction to Scala for Apache Spark” tab_id=”1559286522681-3bf94e12-e7b7″][vc_column_text]

Learning Objectives:

     Learn the basics of Scala required for programming Spark applications. You will also learn about basic Scala constructs such as variable types, control structures, and collections such as Array, ArrayBuffer, Map, and List.
Topics:
  • What is Scala?
  • Why Scala for Spark?
  • Scala in other Frameworks
  • Introduction to Scala REPL
  • Basic Scala Operations
  • Variable Types in Scala
  • Control Structures in Scala
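A minimal, hypothetical Scala sketch of the constructs listed above (the values and names are illustrative only, not part of the course material):

// Variable types: val is immutable, var is mutable
val courseName: String = "Apache Spark and Scala"
var attempts: Int = 0

// Control structure: if/else is an expression that returns a value
val status = if (attempts < 3) "allowed" else "locked"

// Common collections covered in this module
val langs  = Array("Scala", "Java", "Python")               // fixed-size array
val buffer = scala.collection.mutable.ArrayBuffer(1, 2, 3)  // growable buffer
val ports  = Map("http" -> 80, "https" -> 443)              // immutable key-value map
val topics = List("RDD", "DataFrame", "Dataset")            // immutable list

// Iterating over a collection
for (t <- topics) println(s"Topic: $t")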

[/vc_column_text][/vc_tta_section][vc_tta_section title=”Functional Programming and OOPs Concepts in Scala” tab_id=”1561382593569-b1979b66-b066″][vc_column_text]

Learning Objectives:

     In this module, you will learn about object-oriented programming and functional programming techniques in Scala.
Topics:
  • Functional Programming
  • Higher Order Functions
  • Anonymous Functions
  • Class in Scala
  • Getters and Setters
  • Custom Getters and Setters
  • Properties with only Getters
  • Auxiliary Constructor and Primary Constructor
  • Singletons
  • Extending a Class
  • Overriding Methods
  • Traits as Interfaces and Layered Traits
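As a rough illustration of these concepts, here is a small, hypothetical Scala sketch combining a higher-order function, an anonymous function, a class with a custom getter/setter and an auxiliary constructor, a singleton, and a trait (all names are assumptions made for the example):

// Higher-order function taking an anonymous function as an argument
def applyTwice(f: Int => Int, x: Int): Int = f(f(x))
val doubled = applyTwice(n => n * 2, 5)            // result: 20

// Class with a primary constructor, custom getter/setter, and auxiliary constructor
class Employee(val name: String, private var _salary: Double) {
  def this(name: String) = this(name, 0.0)         // auxiliary constructor
  def salary: Double = _salary                     // getter
  def salary_=(s: Double): Unit = {                // custom setter with validation
    require(s >= 0, "salary cannot be negative")
    _salary = s
  }
}

// Singleton object, trait used as an interface, and extending a class
object Payroll { def pay(e: Employee): String = s"Paid ${e.name}" }
trait Auditable { def audit(): Unit = println("audited") }
class Manager(name: String) extends Employee(name, 0.0) with Auditable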

[/vc_column_text][/vc_tta_section][vc_tta_section title=”Deep Dive into Apache Spark Framework” tab_id=”1561382595833-dd54d407-26c0″][vc_column_text]

Learning Objectives:

     Understand Apache Spark and learn how to develop Spark applications. At the end, you will learn how to perform data ingestion using Sqoop.
Topics:
  • Spark’s Place in Hadoop Ecosystem
  • Spark Components & its Architecture
  • Spark Deployment Modes
  • Introduction to Spark Shell
  • Writing your first Spark Job Using SBT
  • Submitting Spark Job
  • Spark Web UI
  • Data Ingestion using Sqoop
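A hedged sketch of what a first SBT-packaged Spark job might look like; the Spark and Scala versions, object name, and paths below are assumptions rather than the course's exact project:

// build.sbt (adjust versions to your cluster)
// name := "first-spark-job"
// scalaVersion := "2.12.18"
// libraryDependencies += "org.apache.spark" %% "spark-core" % "3.3.0" % "provided"

// src/main/scala/FirstSparkJob.scala
import org.apache.spark.{SparkConf, SparkContext}

object FirstSparkJob {
  def main(args: Array[String]): Unit = {
    val conf = new SparkConf().setAppName("FirstSparkJob")
    val sc   = new SparkContext(conf)
    val nums = sc.parallelize(1 to 100)
    println(s"Sum of 1..100 = ${nums.sum()}")      // prints 5050.0
    sc.stop()
  }
}

After sbt package, the job can be submitted with something like spark-submit --class FirstSparkJob target/scala-2.12/first-spark-job_2.12-0.1.jar and monitored in the Spark Web UI (port 4040 on the driver by default).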

[/vc_column_text][/vc_tta_section][vc_tta_section title=”Playing with Spark RDDs” tab_id=”1561539494944-84442b8f-acc3″][vc_column_text]

Learning Objectives:

     Get an insight into Spark RDDs and the RDD manipulations used to implement business logic (transformations, actions, and functions performed on RDDs); a minimal WordCount sketch follows the topic list.
Topics:
  • Challenges in Existing Computing Methods
  • Probable Solution & How RDD Solves the Problem
  • What is an RDD: Its Operations, Transformations & Actions
  • Data Loading and Saving Through RDDs
  • Key-Value Pair RDDs
  • Other Pair RDDs, Two Pair RDDs
  • RDD Lineage
  • RDD Persistence
  • WordCount Program Using RDD Concepts
  • RDD Partitioning & How It Helps Achieve Parallelization
  • Passing Functions to Spark
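A minimal WordCount sketch using the RDD concepts above, assuming a spark-shell session where sc already exists; the input path is a placeholder:

val lines  = sc.textFile("hdfs:///path/to/input.txt")       // load data into an RDD
val words  = lines.flatMap(line => line.split("\\s+"))      // transformation
val pairs  = words.map(word => (word, 1))                   // key-value pair RDD
val counts = pairs.reduceByKey(_ + _)                       // aggregate counts per word
counts.persist()                                            // RDD persistence
counts.take(10).foreach(println)                            // action: bring a sample to the driver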

[/vc_column_text][/vc_tta_section][vc_tta_section title=”DataFrames and Spark SQL” tab_id=”1561977097029-1c44c572-d5f1″][vc_column_text]

Learning Objectives:

     In this module, you will learn about Spark SQL, which is used to process structured data with SQL queries. You will work with DataFrames and Datasets in Spark SQL, along with the different kinds of SQL operations performed on DataFrames. You will also learn about Spark and Hive integration.
Topics:
  • Need for Spark SQL
  • What is Spark SQL?
  • Spark SQL Architecture
  • SQL Context in Spark SQL
  • User Defined Functions
  • Data Frames & Datasets
  • Interoperating with RDDs
  • JSON and Parquet File Formats
  • Loading Data through Different Sources
  • Spark – Hive Integration
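The following is a rough sketch of these ideas using the DataFrame API and Spark SQL; the file paths, column names, and UDF are assumptions made for the example:

import org.apache.spark.sql.SparkSession

val spark = SparkSession.builder()
  .appName("SparkSqlDemo")
  .enableHiveSupport()                     // only needed for Spark-Hive integration
  .getOrCreate()

// Load a DataFrame from a JSON source
val employees = spark.read.json("hdfs:///data/employees.json")

// Register a user-defined function and query with SQL
spark.udf.register("to_upper", (s: String) => s.toUpperCase)
employees.createOrReplaceTempView("employees")
spark.sql("SELECT to_upper(name) AS name, salary FROM employees WHERE salary > 50000").show()

// Write the data back out in Parquet format
employees.write.mode("overwrite").parquet("hdfs:///data/employees_parquet")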

[/vc_column_text][/vc_tta_section][vc_tta_section title=”Compare MapReduce with Spark.” tab_id=”1584462598123-37d0995d-63dd”][vc_column_text]

  • Processing speed: MapReduce – Good; Spark – Excellent (up to 100 times faster)
  • Data caching: MapReduce – Hard disk; Spark – In-memory
  • Performing iterative jobs: MapReduce – Average; Spark – Excellent
  • Dependency on Hadoop: MapReduce – Yes; Spark – No
  • Machine Learning applications: MapReduce – Average; Spark – Excellent

[/vc_column_text][/vc_tta_section][vc_tta_section title=”What is Apache Spark?” tab_id=”1584462599110-5be5f5ac-609a″][vc_column_text]Spark is a fast, easy-to-use, and flexible data processing framework. It has an advanced DAG execution engine that supports acyclic data flow and in-memory computing. Apache Spark can run standalone, on Hadoop, or in the cloud and is capable of accessing diverse data sources including HDFS, HBase, and Cassandra, among others.

[/vc_column_text][/vc_tta_section][vc_tta_section title=”Explain the key features of Spark.” tab_id=”1584462599679-7e047c81-4ee9″][vc_column_text]

  • Apache Spark allows integration with Hadoop.
  • It has an interactive language shell based on Scala (the language in which Spark is written).
  • Spark consists of RDDs (Resilient Distributed Datasets), which can be cached across the computing nodes in a cluster.
  • Apache Spark supports multiple analytic tools that are used for interactive query analysis, real-time analysis, and graph processing.

[/vc_column_text][/vc_tta_section][vc_tta_section title=”Define RDD.” tab_id=”1584462600533-68a91999-4402″][vc_column_text]RDD is the acronym for Resilient Distributed Datasets, a fault-tolerant collection of operational elements that run in parallel. The partitioned data in an RDD is immutable and distributed. There are primarily two types of RDDs:

  • Parallelized collections: created by distributing an existing collection from the driver program so that its elements run in parallel
  • Hadoop datasets: those that apply a function to each file record in HDFS or any other storage system
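A short sketch of both RDD types, assuming an existing SparkContext sc (for example, in spark-shell); the data and path are illustrative:

val parallelized = sc.parallelize(Seq(1, 2, 3, 4, 5))          // parallelized collection from the driver
val fromHdfs     = sc.textFile("hdfs:///path/to/records.txt")  // Hadoop dataset, one element per line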

[/vc_column_text][/vc_tta_section][vc_tta_section title=”What does a Spark Engine do?” tab_id=”1584462601090-a0effb76-8e7d”][vc_column_text]A Spark engine is responsible for scheduling, distributing, and monitoring the data application across the cluster.[/vc_column_text][/vc_tta_section][vc_tta_section title=”Define Partitions.” tab_id=”1584462601660-f923f756-8bc3″][vc_column_text]As the name suggests, a partition is a smaller and logical division of data similar to a ‘split’ in MapReduce. Partitioning is the process of deriving logical units of data to speed up data processing. Everything in Spark is a partitioned RDD.[/vc_column_text][/vc_tta_section][vc_tta_section title=”What operations does an RDD support?” tab_id=”1584462604440-90ad3185-ca9f”][vc_column_text]

  • Transformations
  • Actions

[/vc_column_text][/vc_tta_section][vc_tta_section title=”What do you understand by Transformations in Spark?” tab_id=”1584462605300-0ceafc61-1d77″][vc_column_text]Transformations are functions applied to RDDs that result in another RDD. They are lazy and do not execute until an action occurs. Functions such as map() and filter() are examples of transformations: map() applies the supplied function to every element of the RDD and produces a new RDD, while filter() creates a new RDD by selecting the elements of the current RDD that pass the function argument.[/vc_column_text][/vc_tta_section][vc_tta_section title=”Define Actions in Spark.” tab_id=”1584462605881-8e6c6432-43e9″][vc_column_text]In Spark, an action helps bring data back from an RDD to the local machine. Actions are RDD operations that return non-RDD values. The reduce() function is an action that is applied repeatedly until only one value is left. The take(n) action returns the first n elements of an RDD to the local node.
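A small sketch contrasting transformations and actions, assuming an existing SparkContext sc; the numbers are illustrative:

val numbers = sc.parallelize(1 to 10)
val squared = numbers.map(n => n * n)       // transformation: builds a new RDD lazily
val evens   = squared.filter(_ % 2 == 0)    // transformation
val total   = evens.reduce(_ + _)           // action: triggers execution, returns 220
val sample  = evens.take(3)                 // action: first 3 elements to the driver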

[/vc_column_text][/vc_tta_section][vc_tta_section title=”Define the functions of Spark Core.” tab_id=”1584462606619-6a44365b-7a69″][vc_column_text]Serving as the base engine, Spark Core performs various important functions like memory management, monitoring jobs, providing fault-tolerance, job scheduling, and interaction with storage systems.[/vc_column_text][/vc_tta_section][/vc_tta_accordion][/vc_column][vc_column width=”1/2″][vc_tta_accordion color=”peacoc” active_section=”1″][vc_tta_section title=”Machine Learning using Spark MLlib” tab_id=”1561382561432-7f73ef2a-cc67″][vc_column_text]

Learning Objectives:

     Learn why machine learning is needed, different Machine Learning techniques/algorithms, and Spark MLlib.
Topics:
  • Why Machine Learning?
  • What is Machine Learning?
  • Where Machine Learning is Used?
  • Face Detection: USE CASE
  • Different Types of Machine Learning Techniques
  • Introduction to MLlib
  • Features of MLlib and MLlib Tools
  • Various ML algorithms supported by MLlib

[/vc_column_text][/vc_tta_section][vc_tta_section title=”Deep Dive into Spark MLlib” tab_id=”1561382561455-654071d3-eb53″][vc_column_text]

Learning Objectives:

     Implement various algorithms supported by MLlib such as Linear Regression, Decision Tree, Random Forest and many more.
Topics:
  • Supervised Learning – Linear Regression, Logistic Regression, Decision Tree, Random Forest
  • Unsupervised Learning – K-Means Clustering & How It Works with MLlib
  • Analysis on US Election Data using MLlib (K-Means)
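A hedged K-Means sketch using the DataFrame-based MLlib API; the CSV path and column names are placeholders, an active SparkSession spark is assumed, and the in-class US election dataset is not reproduced here:

import org.apache.spark.ml.clustering.KMeans
import org.apache.spark.ml.feature.VectorAssembler

val raw = spark.read.option("header", "true").option("inferSchema", "true")
  .csv("hdfs:///data/election.csv")

// Assemble numeric columns into the feature vector MLlib expects
val assembler = new VectorAssembler()
  .setInputCols(Array("feature1", "feature2"))
  .setOutputCol("features")
val data = assembler.transform(raw)

val kmeans = new KMeans().setK(3).setSeed(42L)
val model  = kmeans.fit(data)
model.clusterCenters.foreach(println)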

[/vc_column_text][/vc_tta_section][vc_tta_section title=”Understanding Apache Kafka and Apache Flume” tab_id=”1561382611424-56181e07-6453″][vc_column_text]

Learning Objectives:

     Understand Kafka and its architecture. Also, learn about Kafka clusters and how to configure different types of Kafka clusters. Get introduced to Apache Flume, its architecture, and how it is integrated with Apache Kafka for event processing. At the end, learn how to ingest streaming data using Flume.
Topics:
  • Need for Kafka
  • What is Kafka?
  • Core Concepts of Kafka
  • Kafka Architecture
  • Where is Kafka Used?
  • Understanding the Components of Kafka Cluster
  • Configuring Kafka Cluster
  • Kafka Producer and Consumer Java API
  • Need of Apache Flume
  • What is Apache Flume?
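To make the producer API concrete, here is a minimal Kafka producer written in Scala against the Java client API; the broker address, topic name, and message are placeholders:

import java.util.Properties
import org.apache.kafka.clients.producer.{KafkaProducer, ProducerRecord}

val props = new Properties()
props.put("bootstrap.servers", "localhost:9092")
props.put("key.serializer", "org.apache.kafka.common.serialization.StringSerializer")
props.put("value.serializer", "org.apache.kafka.common.serialization.StringSerializer")

val producer = new KafkaProducer[String, String](props)
producer.send(new ProducerRecord[String, String]("demo-topic", "key-1", "hello kafka"))
producer.close()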

[/vc_column_text][/vc_tta_section][vc_tta_section title=”Apache Spark Streaming – Data Sources” tab_id=”1561382613753-7c9c9136-4ca1″][vc_column_text]

Learning Objectives:

     In this module, you will learn about the different streaming data sources such as Kafka and Flume. At the end of the module, you will be able to create a Spark Streaming application.
Topics:
  • Apache Spark Streaming: Data Sources
  • Streaming Data Source Overview
  • Apache Flume and Apache Kafka Data Sources
  • Example: Using a Kafka Direct Data Source
  • Perform Twitter Sentiment Analysis Using Spark Streaming
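A hedged sketch of a Kafka direct data source using the spark-streaming-kafka-0-10 integration; the broker, topic, and group id are assumptions made for the example:

import org.apache.kafka.common.serialization.StringDeserializer
import org.apache.spark.SparkConf
import org.apache.spark.streaming.{Seconds, StreamingContext}
import org.apache.spark.streaming.kafka010.KafkaUtils
import org.apache.spark.streaming.kafka010.LocationStrategies.PreferConsistent
import org.apache.spark.streaming.kafka010.ConsumerStrategies.Subscribe

val kafkaParams = Map[String, Object](
  "bootstrap.servers"  -> "localhost:9092",
  "key.deserializer"   -> classOf[StringDeserializer],
  "value.deserializer" -> classOf[StringDeserializer],
  "group.id"           -> "spark-streaming-demo",
  "auto.offset.reset"  -> "latest"
)

val ssc = new StreamingContext(new SparkConf().setAppName("KafkaDirect"), Seconds(5))
val stream = KafkaUtils.createDirectStream[String, String](
  ssc, PreferConsistent, Subscribe[String, String](Seq("demo-topic"), kafkaParams))

stream.map(record => record.value).count().print()   // records per micro-batch
ssc.start()
ssc.awaitTermination()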

[/vc_column_text][/vc_tta_section][vc_tta_section title=”In-class Project” tab_id=”1561382614729-6b63842b-62b1″][vc_column_text]

Learning Objectives:

     Work on an end-to-end Financial domain project covering all the major concepts of Spark taught during the course.

 

[/vc_column_text][/vc_tta_section][vc_tta_section title=”Spark GraphX (Self-Paced)” tab_id=”1561382651188-733a0ad7-4f7c”][vc_column_text]

Learning Objectives:

     In this module, you will be learning the key concepts of Spark GraphX programming and operations along with different GraphX algorithms and their implementations.

 

[/vc_column_text][/vc_tta_section][vc_tta_section title=”What is RDD Lineage?” tab_id=”1584462607991-2b439490-dd89″][vc_column_text]Spark does not support data replication in memory; thus, if any data is lost, it is rebuilt using RDD lineage. RDD lineage is the process of reconstructing lost data partitions. The best thing about this is that RDDs always remember how they were built from other datasets.[/vc_column_text][/vc_tta_section][vc_tta_section title=”What is Spark Driver?” tab_id=”1584462608688-d60aee41-492b″][vc_column_text]The Spark driver is the program that runs on the master node of a machine and declares transformations and actions on data RDDs. In simple terms, a driver in Spark creates the SparkContext, connected to a given Spark Master. It also delivers RDD graphs to the Master, where the standalone Cluster Manager runs.[/vc_column_text][/vc_tta_section][vc_tta_section title=”What is Hive on Spark?” tab_id=”1584462609355-a4c32e8e-477b″][vc_column_text]Hive contains significant support for Apache Spark, wherein Hive execution can be configured to run on Spark:

hive> set spark.home=/location/to/sparkHome;
hive> set hive.execution.engine=spark;

Hive supports Spark on YARN mode by default.[/vc_column_text][/vc_tta_section][vc_tta_section title=”Name the commonly used Spark Ecosystems.” tab_id=”1584462609920-a492216e-bb73″][vc_column_text]

  • Spark SQL (successor to Shark) for developers
  • Spark Streaming for processing live data streams
  • GraphX for generating and computing graphs
  • MLlib (Machine Learning Algorithms)
  • SparkR to promote R programming in the Spark engine

[/vc_column_text][/vc_tta_section][vc_tta_section title=”Define Spark Streaming.” tab_id=”1584462610399-2234ed11-1c2e″][vc_column_text]Spark supports stream processing through Spark Streaming, an extension of the Spark API that allows stream processing of live data streams. Data from different sources such as Kafka, Flume, and Kinesis is processed and then pushed to file systems, live dashboards, and databases. It is similar to batch processing in that the input data is divided into small batches (micro-batches), which are then processed much like batches in batch processing.[/vc_column_text][/vc_tta_section][vc_tta_section title=”What is GraphX?” tab_id=”1584462616248-6990a41f-f06c″][vc_column_text]Spark uses GraphX for graph processing, to build and transform interactive graphs. The GraphX component enables programmers to reason about structured data at scale.[/vc_column_text][/vc_tta_section][vc_tta_section title=”What does MLlib do?” tab_id=”1584462616937-989c699c-4ad9″][vc_column_text]MLlib is a scalable Machine Learning library provided by Spark. It aims at making Machine Learning easy and scalable with common learning algorithms and use cases such as clustering, regression, filtering, dimensionality reduction, and the like.[/vc_column_text][/vc_tta_section][vc_tta_section title=”What is Spark SQL?” tab_id=”1584462617777-51e71cd5-63d4″][vc_column_text]Spark SQL, the successor to Shark, is a module introduced in Spark to perform structured data processing. Through this module, Spark executes relational SQL queries on data. The core of this component originally supported a special kind of RDD called SchemaRDD (now the DataFrame), composed of row objects and schema objects defining the data type of each column in a row. It is similar to a table in a relational database.[/vc_column_text][/vc_tta_section][vc_tta_section title=”What is a Parquet file?” tab_id=”1584462618513-8784deaf-e287″][vc_column_text]Parquet is a columnar file format supported by many other data processing systems. Spark SQL performs both read and write operations with Parquet files and considers it to be one of the best Big Data analytics formats so far.[/vc_column_text][/vc_tta_section][vc_tta_section title=”What file systems does Apache Spark support?” tab_id=”1584462619353-2fb67a85-25ae″][vc_column_text]

  • Hadoop Distributed File System (HDFS)
  • Local file system
  • Amazon S3

[/vc_column_text][/vc_tta_section][/vc_tta_accordion][/vc_column][/vc_row]
