BigData Analytics

(Batches Start from 29th September 2022)

(2 Demo Sessions Available)

About The Program:

With the aim of building a healthy ecosystem that meets industry standards, REGex Software brings you a Training/Internship Program on “Big Data”. We organize this Training/Internship Program to improve the knowledge and skills of students and professionals, so that they can become experts in the field of Big Data and land their dream job in software development at big MNCs.

REGex Software Services’ Big Data program is a valuable resource for beginners and experts alike. It will introduce you to Hadoop, HDFS, Hive, Apache Spark, Amazon EMR, and more, from the basics to an advanced level. If you want to become a Big Data analyst, this program is for you.


09:00 PM – 11:00 PM (IST)
(Mon – Fri)


Google Meet


6 – 8 Weeks


50 per Batch

What People Say About Us

What you will Learn

  • Linux basics 
  • Big Data Analytics & Hadoop
  • HDFS [ Hadoop Distributed File System ]
  • Map-Reduce [ Data Processing ]
  • HIVE
  • Apache Spark on Azure DataBricks
  • Neo4j Graph Analytics & NoSQL DataBase
  • Amazon EMR
  • Learn how to use these tools in the field of Data Analytics

Study Material

  • E-Notes
  • Daily Assignments
  • Daily Poll Tests
  • 60+ Hours of On-Demand Live Video Lectures
  • Access to Lecture Videos & Notes
  • 24*7 Mentorship Support
  • Work on Live Projects


  • Helps you in the Data Analytics domain
  • Ability to think out of the box
  • Expertise in different Big Data tools like HDFS, Hive, Apache Spark, and Amazon EMR
  • Ability to solve interview questions from top MNCs
  • Ability to land a Data Analyst package of up to 30 LPA in big MNCs

Live Sessions

Live sessions by expert trainers, with access to recorded sessions also available

Live Projects

Get a chance to work on industry-oriented projects to apply your learning

24*7 Support

24*7 mentorship support is available to all students to clear your doubts


REGex provides Internship / Job opportunities to the best Students in different Companies.

Our Students Placed In


Course Content

  • Basics of Python 
  • OOPs Concepts
  • File & Exception Handling
  • Working with Pandas, NumPy & Matplotlib
    ■ Working with Missing Data
    ■ Data Grouping
    ■ Data Subsetting
    ■ Merging & Joining Data Frames
  • Importing Libraries & Datasets
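As a small taste of the Pandas topics listed above, the sketch below (using hypothetical sales data) fills missing values, groups rows, and merges two DataFrames; it assumes pandas is installed:

```python
import pandas as pd

# Hypothetical sales data with one missing value
sales = pd.DataFrame({
    "region": ["North", "South", "North", "South"],
    "amount": [100.0, None, 150.0, 200.0],
})

# Missing data: fill the NaN with the column mean (150.0 here)
sales["amount"] = sales["amount"].fillna(sales["amount"].mean())

# Grouping: total amount per region
totals = sales.groupby("region")["amount"].sum().reset_index()

# Merging: join the totals with a second (lookup-style) frame
regions = pd.DataFrame({"region": ["North", "South"],
                        "manager": ["Asha", "Ravi"]})
report = totals.merge(regions, on="region", how="left")
print(report)
```

The same `groupby`/`merge` pattern carries over directly to subsetting and joining larger datasets.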

● Introduction to LINUX Operating System and Basic LINUX commands
● Operating System
● Basic LINUX Commands

● LINUX File System
● File Types
● File Permissions
● File Related Commands
● Filters
o Simple Filters
o Advanced Filters

● Vi Editor
● Input Mode Commands
● Vi Editor – Save & Quit
● Cursor Movement Commands

● Shell Variables
● Environmental Variables
● Shell script Commands
● Arithmetic Operations
● Command Substitution
● Command Line Arguments

● Business Intelligence
● Need for Business Intelligence
● Terms used in BI
● Components of BI

● Data Warehouse
● History of Data Warehousing
● Need for Data Warehouse
● Data Warehouse Architecture
● How Data Mining Works with a DWH
● Features of Data warehouse
● Data Mart
● Application Areas

● Dimension modeling
● Fact and Dimension tables
● Database schema
● Schema Design for Modeling
● Star & Snowflake Schemas
● Fact Constellation schema
● Use of Data mining
● Data mining and Business Intelligence
● Types of data used in Data mining
● Data mining applications
● Data mining products
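As a minimal illustration of the fact/dimension split in a star schema, here is a pure-Python sketch with hypothetical sales data (in a real warehouse this lookup would be a SQL join against the dimension tables):

```python
# Hypothetical star schema: one fact table, two dimension tables
dim_product = {1: {"name": "Laptop", "category": "Electronics"},
               2: {"name": "Desk",   "category": "Furniture"}}
dim_date = {20220929: {"month": "Sep", "year": 2022}}

# Fact table rows hold foreign keys into the dimensions plus a measure
fact_sales = [
    {"product_id": 1, "date_id": 20220929, "amount": 55000},
    {"product_id": 2, "date_id": 20220929, "amount": 8000},
    {"product_id": 1, "date_id": 20220929, "amount": 60000},
]

# Resolve dimension keys to answer: total sales per product category
totals = {}
for row in fact_sales:
    category = dim_product[row["product_id"]]["category"]
    totals[category] = totals.get(category, 0) + row["amount"]
print(totals)  # {'Electronics': 115000, 'Furniture': 8000}
```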

● What’s Big Data?
● Big Data: 3V’s
● Explosion of Data
● What’s driving Big Data
● Applications for Big Data Analytics
● Big Data Use Cases
● Benefits of Big Data

● History of Hadoop
● Distributed File System
● What is Hadoop
● Characteristics of Hadoop
● RDBMS Vs Hadoop
● Hadoop Generations
● Components of Hadoop
● HDFS Blocks and Replication
● How Files Are Stored
● HDFS Commands
● Hadoop Daemons
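As a rough illustration of HDFS blocks and replication, the arithmetic can be sketched in Python (assuming the Hadoop 2.x defaults of a 128 MB block size and a replication factor of 3):

```python
import math

# HDFS defaults in Hadoop 2.x (both are configurable per cluster)
BLOCK_SIZE_MB = 128
REPLICATION = 3

def hdfs_footprint(file_size_mb):
    """Return (number of blocks, total raw storage in MB) for one file."""
    blocks = math.ceil(file_size_mb / BLOCK_SIZE_MB)  # last block may be partial
    raw_storage = file_size_mb * REPLICATION           # every byte stored 3 times
    return blocks, raw_storage

# A 500 MB file splits into 4 blocks (3 full + 1 partial)
# and consumes 1500 MB of raw cluster storage
print(hdfs_footprint(500))  # (4, 1500)
```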

● Difference between Hadoop 1.0 and 2.0
● New Components in Hadoop 2.x
● Configuration Files in Hadoop 2.x
● Major Hadoop Distributors/Vendors
● Cluster Management & Monitoring
● Hadoop Downloads

● What is distributed computing
● Introduction to Map Reduce
● Map Reduce components
● How MapReduce works
● Word Count execution
● Suitable & unsuitable use cases for MapReduce
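The word-count flow above can be sketched in plain Python as a single-process simulation of the map, shuffle, and reduce phases (real Hadoop distributes each phase across the cluster):

```python
from collections import defaultdict
from itertools import chain

# Map phase: each input line emits (word, 1) pairs
def mapper(line):
    return [(word.lower(), 1) for word in line.split()]

# Shuffle phase: group all values by key
def shuffle(pairs):
    grouped = defaultdict(list)
    for key, value in pairs:
        grouped[key].append(value)
    return grouped

# Reduce phase: sum the counts for each word
def reducer(key, values):
    return key, sum(values)

lines = ["big data is big", "hadoop processes big data"]
mapped = chain.from_iterable(mapper(line) for line in lines)
counts = dict(reducer(k, v) for k, v in shuffle(mapped).items())
print(counts)  # {'big': 3, 'data': 2, 'is': 1, 'hadoop': 1, 'processes': 1}
```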

● Architecture
● Basic Syntax
● Import data from a table in a relational database into HDFS
● Import the results of a query from a relational database into HDFS
● Import a table from a relational database into a new or existing Hive table
● Insert or update data from HDFS into a table in a relational database

● Define a Hive-managed table
● Define a Hive external table
● Define a partitioned Hive table
● Define a bucketed Hive table
● Define a Hive table from a select query
● Define a Hive table that uses the ORCFile format
● Create a new ORCFile table from the data in an existing non-ORCFile Hive table
● Specify the delimiter of a Hive table
● Load data into a Hive table from a local directory
● Load data into a Hive table from an HDFS directory
● Load data into a Hive table as the result of a query
● Load a compressed data file into a Hive table
● Update a row in a Hive table
● Delete a row from a Hive table
● Insert a new row into a Hive table
● Join two Hive tables
● Use a subquery within a Hive query
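HiveQL’s join and subquery syntax follows standard SQL. As an illustration only, the same query shapes can be tried with Python’s built-in sqlite3 (the tables and data here are hypothetical; on a real cluster these would be HiveQL statements run in the Hive shell):

```python
import sqlite3

# In-memory SQLite stand-in: HiveQL joins and subqueries share this SQL shape
conn = sqlite3.connect(":memory:")
cur = conn.cursor()
cur.execute("CREATE TABLE employees (id INTEGER, name TEXT, dept_id INTEGER)")
cur.execute("CREATE TABLE departments (id INTEGER, dept TEXT)")
cur.executemany("INSERT INTO employees VALUES (?, ?, ?)",
                [(1, "Asha", 10), (2, "Ravi", 20), (3, "Meena", 10)])
cur.executemany("INSERT INTO departments VALUES (?, ?)",
                [(10, "Analytics"), (20, "Engineering")])

# Join two tables (same shape as a Hive inner join)
cur.execute("""SELECT e.name, d.dept
               FROM employees e JOIN departments d ON e.dept_id = d.id
               ORDER BY e.name""")
joined = cur.fetchall()

# Use a subquery within a query (same shape as a Hive subquery)
cur.execute("""SELECT name FROM employees
               WHERE dept_id IN (SELECT id FROM departments
                                 WHERE dept = 'Analytics')
               ORDER BY name""")
analytics = [row[0] for row in cur.fetchall()]
print(joined, analytics)
```

Hive-specific features from the list above (ORCFile format, partitioning, bucketing, LOAD DATA) have no SQLite equivalent; this sketch covers only the query shapes.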

● An overview of functional programming
● Why Scala?
● Working with functions
● Objects and Inheritance
● Working with lists and collections
● Abstract classes

● What is Spark?
● History of Spark
● Spark Architecture
● Spark Shell

● RDD Basics
● Creating RDDs in Spark
● RDD Operations
● Passing Functions to Spark
● Transformations and Actions in Spark
● Spark RDD Persistence

● Pair RDDs
● Transformations on Pair RDDs
● Actions Available on Pair RDDs
● Data Partitioning (Advanced)
● Loading and Saving the Data
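To give a feel for pair-RDD transformations, here is a minimal single-machine simulation of Spark’s reduceByKey in plain Python (it mimics the semantics only; real Spark shuffles the pairs across partitions before combining):

```python
from collections import defaultdict
from functools import reduce

# Simulate reduceByKey on a pair RDD: combine all values sharing a key
def reduce_by_key(pairs, func):
    buckets = defaultdict(list)
    for key, value in pairs:
        buckets[key].append(value)
    return {key: reduce(func, values) for key, values in buckets.items()}

# A pair RDD of (word, count) tuples, as a map step might produce
pairs = [("spark", 1), ("hdfs", 1), ("spark", 1), ("hive", 1), ("spark", 1)]
print(reduce_by_key(pairs, lambda a, b: a + b))
# {'spark': 3, 'hdfs': 1, 'hive': 1}
```

The combine function is arbitrary, so the same helper also expresses per-key max, min, and similar aggregations.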

● Accumulators
● Broadcast Variables
● Piping to External Programs
● Numeric RDD Operations
● Spark Runtime Architecture
● Deploying Applications

● Spark SQL Overview
● Spark SQL Architecture

● What are DataFrames?
● Manipulating Dataframes
● Reading new data from different file format
● Group By & Aggregations functions

● What is Spark streaming?
● Spark Streaming example

● Introduction of HBase
● Comparison with traditional database
● HBase Data Model (Logical and Physical models)
● HBase Architecture
● Regions and Region Servers
● Partitions
● Compaction (Major and Minor)
● Shell Commands
● HBase using APIs

● Pre-requisites
● Introduction
● Architecture

● Installation and Configuration
● Repository
● Projects
● Metadata Connection
● Context Parameters
● Jobs / Joblets
● Components
● Important components
● Aggregation & working with Input & output data

● The Pseudo Live Project (PLP) program is primarily meant to handhold participants who are new to the technology. In PLP, more importance is given to “Process Adherence”.
● The following SDLC activities are carried out during PLP:
o Requirement Analysis
o Design (High-Level Design and Low-Level Design)
o Design of UTP (Unit Test Plan) with test cases
o Coding
o Code Review
o Testing
o Deployment
o Configuration Management
o Final Presentation

Extra Sessions

Additional sessions on Git, Linux, Docker, AWS basics, Jenkins, and many more for all students.

Fee Structure

Indian Fee

Price: ₹39,999/- (Flat 75% off) => ₹9999/- 

International Fee

Price: $1000 (Flat 75% off) => $250 

The fee can be paid in 2 installments of ₹6k + ₹5k.

Cashback Policy

  • You will get your Unique Referral Code after successful paid registration.
  • You will get ₹1000 cashback directly in your account for each paid registration made through your Unique Referral Code, credited on 15th October 2022 (after registrations for this program close).
  • For example: if we receive 10 paid registrations through your Unique Referral Code, you will receive ₹1000 × 10 = ₹10,000 on 15th October 2022.
For Frequent Course Updates and Information, Join our Telegram Group
For Webinar Videos and Demo Session, Join our Youtube Channel
Join Other Summer Internship/Training Program – September 2022

Enroll Now

(Batches Start from 29th September 2022)

*It will help us to reach more
*Seats may fill up or the price may increase at any time. No refund policy is available.*