Welcome to Pace2.O IT Training Institute

Course Overview

  • Helps students understand data, information, databases, and DBMS.
  • Builds an understanding of data modelling with respect to ER modelling and relational modelling.
  • Covers the life cycle of a relational database.
    • Hadoop Fundamentals
      • What is Bigdata?; Evolution of Bigdata; Types of data and their significance; Need for Bigdata analytics; Why Bigdata with Hadoop?; History of Hadoop; Why Hadoop is in such demand in the market today; Limitations of SQL-based tools
      • Hadoop nodes; Hadoop racks; Hadoop clusters; Architecture of Hadoop; Characteristics of the NameNode; Working with DataNodes; Significance of the JobTracker and TaskTrackers; HBase co-ordination with the JobTracker; Secondary NameNode usage and workaround
      • Hadoop releases and their significance; Introduction to Hadoop Release 1 and its daemons; Introduction to Hadoop Release 2 and its daemons; Hadoop cluster demo; Hadoop 2.x cluster architecture; A typical production Hadoop cluster; Hadoop cluster modes; Hadoop 2.x configuration files; Single-node and multi-node cluster setup; Hadoop installation
      • Introduction to the Hadoop FS and processing environment UIs; How to read and write files; Basic Unix commands for Hadoop; The Hadoop FS shell; Hadoop releases practical; Hadoop daemons practical; Common Hadoop shell commands; An overview of Hadoop administration
      • How Hadoop projects fall into two categories; New projects on Hadoop; Hadoop storage: HDFS (Hadoop Distributed File System); Hadoop processing frameworks (MapReduce / YARN); Alternatives to MapReduce; Why NoSQL is in much more demand than SQL; A distributed warehouse for HDFS; YARN architecture; Significance of scalability of operation; Use cases where not to use Hadoop; Use cases where Hadoop is used: Facebook, Twitter, Snapdeal, Flipkart
    • MapReduce with the Hadoop Java API
      • Hadoop Java API; Hadoop classes; What is MapReduceBase?; The Mapper class and its methods; What is a Partitioner and what types exist; MapReduce use cases; The traditional way vs the MapReduce way; Significance of MapReduce
      • Hadoop 2.x MapReduce architecture; Writing a Hadoop 2.x MapReduce program; Understanding input splits; Relationship between input splits and HDFS blocks; MapReduce: Combiner and Partitioner; Hadoop-specific data types; Working on unstructured data analytics
      • What is an Iterator and its usage techniques; Types of mappers and reducers; What is the OutputCollector and its significance; Working with joins of datasets; Complications with MapReduce; MapReduce anatomy
      • Anagram, Teragen, Terasort, and WordCount examples; Working with multiple mappers; Working with weather data on multiple DataNodes in a fully distributed architecture; Use cases where the MapReduce anatomy fails
    • Advanced MapReduce
      • Counters; Distributed cache; MRUnit; Joins in MapReduce: reduce-side join, replicated join, composite join, Cartesian product; Custom input formats; Sequence input format; XML file parsing using MapReduce; Interview questions based on Java MapReduce
    • Working with Pig Latin – I (Fundamentals)
      • Introduction to Pig Latin; History and evolution of Pig Latin; Why Pig is used only with Bigdata; MapReduce vs Pig; Pig architecture and an overview of the compiler and execution engine; Programming structure in Pig; Pig running modes; Pig components; Pig execution; Pig releases and the significance of bug fixes
      • Pig-specific data types; Complex data types: bags, tuples, and fields; Pig-specific methods; Comparison between Yahoo Pig and Facebook Hive
      • Shell and utility commands; Working with the Grunt shell; Grunt commands (17 in number); Pig Latin: relational operators, file loaders, the GROUP operator, the COGROUP operator, joins and COGROUP, UNION; Understanding diagnostic operators; Specialized joins in Pig
      • Built-in functions: eval, load and store, math, string, and date functions; Pig UDFs (a minimal Java example follows this module list); Piggybank; Parameter substitution; Pig streaming; Pig use cases: aviation and healthcare
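
    To make the Pig UDF topic above concrete, here is a minimal sketch of an eval UDF written in Java against Pig's EvalFunc API. The class name ToUpper and its behaviour are purely illustrative and not part of the course material.

      import java.io.IOException;

      import org.apache.pig.EvalFunc;
      import org.apache.pig.data.Tuple;

      // A minimal Pig eval UDF: upper-cases the first field of each input tuple.
      // Any EvalFunc<T> follows the same shape: implement exec(Tuple) and return T.
      public class ToUpper extends EvalFunc<String> {

          @Override
          public String exec(Tuple input) throws IOException {
              // Defensive handling of empty or null tuples, which Pig may pass in.
              if (input == null || input.size() == 0 || input.get(0) == null) {
                  return null;
              }
              return input.get(0).toString().toUpperCase();
          }
      }

    Packaged into a jar, a UDF like this is typically made visible to a script with REGISTER and then invoked inside a FOREACH ... GENERATE expression, which is what the "Calling a UDF from a Pig script" topic below refers to.
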
    • Pig Data Input Techniques
      • Data input techniques for flat files; Flat files: comma-separated, tab-delimited, and fixed-width; Working with a schema-less approach; How to attach a schema to a file or table in Pig; Schema referencing for similar tables and files; Working with delimiters
    • Working with Pig Latin – II (Advanced)
      • Working with binary storage and the text loader; Bigdata operations and the read/write analogy
      • Filtering datasets: filtering rows on a specific condition, on multiple conditions, and on string-based conditions
      • Sorting datasets: sorting rows by a specific column or columns; multi-level sorts; anatomy of a sort operation
      • Grouping and co-grouping datasets; Joining datasets; Types of joins supported by Pig Latin; Aggregate operations such as average, sum, min, max, and count; The FLATTEN operator
      • Creating a UDF (user defined function) using Java; Calling a UDF from a Pig script; Data validation scripts
    • Working with Hive
      • Overview of Hive; Background of Hive; Hive vs Pig; Installation and configuration; Interacting with HDFS using Hive; MapReduce programs through Hive; Hive architecture and components
      • Hive commands: loading, filtering, and grouping; What the metastore and meta storage are; The Derby database; HQL; DDL, DML, and the other sub-languages of Hive; Data types in Hive; Partitions and buckets; Hive tables: managed and external
      • Importing data; Querying data; Managing outputs; Hive scripts; Hive UDFs; Hive operators; Hive joins, unions, and groups; Sample programs in Hive; ALTER and DELETE in Hive; Partitions in Hive; Indexing
      • Industry-specific configuration of Hive parameters; Authentication and authorization; Statistics with Hive; Archiving in Hive; Hands-on exercise
    • Advanced Hive
      • Understanding Hive releases; Hive and OLTP; OLAP in Hive; HiveQL: joining tables; Dynamic partitioning and bucketing; Serialization and deserialization; Custom map/reduce scripts
      • Hive indexes and views; Hive query optimizers; Hive architecture; Understanding the Thrift server; User defined functions; The Hue interface for Hive; Analyzing data with Hive scripts; The difference between Hive and Impala; UDFs in Hive; Complex use cases in Hive
    • Hadoop on Amazon Cloud
      • Introduction to cloud infrastructure; Amazon SaaS, PaaS, and IaaS; Creating an EC2 instance for processing; Creating S3 buckets; Deploying data onto the cloud; Choosing the size of our instance; Configuring an EMR instance; Creating a virtual cluster on Amazon; Deploying a project and getting stats
    • HBase & Zookeeper
      • Introduction to HBase; HBase vs RDBMS; HBase components; HBase architecture; The HBase shell; The HBase client API (a hedged Java client sketch follows the Sqoop module below); Data loading techniques; Run modes and configuration; HBase cluster deployment
      • RegionServers and their implementation; Client APIs and their features; How the messaging system works; Columns and column families; Configuring hbase-site.xml; Available clients; Loading HBase with semi-structured data; Internal data storage in HBase; Timestamps; Creating a table with column families; MapReduce integration
      • HBase advanced usage and schema design; Loading data from Pig into HBase
      • The Zookeeper data model; The Zookeeper service; Challenges faced in distributed applications; Coordination; Znodes; Client API functions; Bulk loading; Receiving and inserting data; Filters in HBase; Sqoop architecture; Data import and export in Sqoop; Deploying a quorum and its configuration throughout the cluster
    • Sqoop
      • An overview of Sqoop; Sqoop real-life connect; Sqoop and its uses; Advantages of Sqoop; Sqoop processing; The Sqoop execution process; Importing data using Sqoop; The Sqoop import process; How to import data into Hive and HBase; How to export data from Hadoop using Sqoop; Sqoop alternatives; Sqoop connectors
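
    As a companion to the HBase client API topic above, below is a minimal sketch of a Java client that writes and reads a single cell using the standard Connection/Table API. The table name customers, the column family info, and the sample values are illustrative assumptions; the table is assumed to exist already (created beforehand, for example from the HBase shell), and hbase-site.xml is assumed to be on the classpath.

      import java.io.IOException;

      import org.apache.hadoop.conf.Configuration;
      import org.apache.hadoop.hbase.HBaseConfiguration;
      import org.apache.hadoop.hbase.TableName;
      import org.apache.hadoop.hbase.client.Connection;
      import org.apache.hadoop.hbase.client.ConnectionFactory;
      import org.apache.hadoop.hbase.client.Get;
      import org.apache.hadoop.hbase.client.Put;
      import org.apache.hadoop.hbase.client.Result;
      import org.apache.hadoop.hbase.client.Table;
      import org.apache.hadoop.hbase.util.Bytes;

      public class HBaseClientDemo {
          public static void main(String[] args) throws IOException {
              // Picks up hbase-site.xml (ZooKeeper quorum, etc.) from the classpath.
              Configuration conf = HBaseConfiguration.create();

              // "customers" and its "info" column family are assumed to exist already.
              try (Connection connection = ConnectionFactory.createConnection(conf);
                   Table table = connection.getTable(TableName.valueOf("customers"))) {

                  // Write one cell: row key "row1", column family "info", qualifier "name".
                  Put put = new Put(Bytes.toBytes("row1"));
                  put.addColumn(Bytes.toBytes("info"), Bytes.toBytes("name"), Bytes.toBytes("Asha"));
                  table.put(put);

                  // Read the same cell back by row key.
                  Result result = table.get(new Get(Bytes.toBytes("row1")));
                  byte[] value = result.getValue(Bytes.toBytes("info"), Bytes.toBytes("name"));
                  System.out.println("name = " + Bytes.toString(value));
              }
          }
      }

    The same Connection and Table abstractions are the basis for most of the data loading techniques and the MapReduce integration covered in the HBase module above.
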
    • Flume
      • Introduction to Apache Flume; The Flume model; Flume goals; Scalability in Flume; Flume data integration; Flume installation on single-node and multi-node clusters; Flume architecture and its various components; Data sources: types and variants; Data targets: types and variants; Deploying an agent onto a single-node cluster; Problems associated with Flume; Interview questions based on Flume
    • Oozie and Hue
      • Introduction to Apache Oozie; Oozie components; Oozie workflows; Scheduling with Oozie; Hands-on training on an Oozie workflow; The Oozie coordinator; Oozie commands; The Oozie web console; Oozie for MapReduce; Hive in Oozie
      • An overview of Hue; Hue in real-time scenarios; Use cases in Hue
    • YARN Architecture
      • Introduction to YARN and the MR2 daemons; Active and standby NameNodes; The ResourceManager and ApplicationMaster; The NodeManager; Container objects and containers; NameNode federation; Cloudera Manager and Impala; Load balancing in the cluster with NameNode federation; Architectural differences between Hadoop 1.0 and 2.0
    • Basics of Java for Hadoop
      • The Java Virtual Machine; Variables; Data types; Constructs: conditional and looping; Types: wrapper classes
      • Object-oriented Java: fields and methods, constructors, overloading methods, garbage collection, nested classes, overriding methods, polymorphism, making methods and classes final, abstract classes and methods, interfaces
      • Threads; Classes; The I/O package; Java security
    • Spark Shell and PySpark
      • What is the Spark shell?; How to create a SparkContext; How to load a file in the shell; Operations on files in the Spark shell; Understanding SBT; How to build and run a Spark project with SBT; What is local mode?; Spark modes; Distributed persistence; Built-in libraries for Spark
      • The PySpark shell; The PySpark shell (advanced); Spark tools; PySpark integration with Jupyter Notebook; Case study: analyzing airlines data with PySpark
    • Working with Key/Value Pairs
      • Creating pair RDDs; Transformations on pair RDDs; Aggregations, grouping data, joins, and sorting data; Data partitioning; Determining an RDD's partitioner; Operations that benefit from partitioning; Operations that affect partitioning (a hedged Java pair-RDD sketch follows this section)
    • Loading and Saving Data
      • File formats: JSON, comma-separated and tab-separated values, sequence files, object files; Hadoop input and output formats; Filesystems: local/regular FS, Amazon S3, and HDFS; Databases: Java Database Connectivity (JDBC), Cassandra, HBase, Elasticsearch
    • RDD and Spark
      • Defining RDDs; Transformations on RDDs; Various actions on RDDs; Lazy evaluation; Passing functions to Spark in Python, Scala, and Java; How to load data into an RDD; How to save data through an RDD
      • Scala RDD extensions; What are Double RDD methods?; RDD methods; Java pair RDD methods; General RDD methods; Java RDD methods; Common Java RDD methods; Spark Java function classes; Understanding key/value pair RDDs in Scala and Java
      • MapReduce and RDDs; Spark and Hadoop integration; HDFS and YARN
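
    Tying the RDD and pair-RDD topics together, here is a minimal word-count sketch against the Spark Java API (Spark 2.x or later is assumed; the application name and HDFS paths are illustrative only).

      import java.util.Arrays;

      import org.apache.spark.SparkConf;
      import org.apache.spark.api.java.JavaPairRDD;
      import org.apache.spark.api.java.JavaRDD;
      import org.apache.spark.api.java.JavaSparkContext;

      import scala.Tuple2;

      public class SparkWordCount {
          public static void main(String[] args) {
              // Local mode for a quick demo; on a cluster the master would typically be "yarn".
              SparkConf conf = new SparkConf().setAppName("word-count").setMaster("local[*]");
              JavaSparkContext sc = new JavaSparkContext(conf);

              // Load a text file (path is illustrative) into an RDD of lines; nothing executes yet
              // because transformations are evaluated lazily.
              JavaRDD<String> lines = sc.textFile("hdfs:///user/demo/input.txt");

              // Classic pair-RDD pipeline: split into words, map to (word, 1), sum by key.
              JavaPairRDD<String, Integer> counts = lines
                      .flatMap(line -> Arrays.asList(line.split("\\s+")).iterator())
                      .mapToPair(word -> new Tuple2<>(word, 1))
                      .reduceByKey((a, b) -> a + b);

              // saveAsTextFile is an action, so it triggers the actual computation.
              counts.saveAsTextFile("hdfs:///user/demo/wordcount-output");
              sc.stop();
          }
      }

    The same pipeline can be expressed in the PySpark shell covered earlier; only the syntax of the RDD calls changes, not their meaning.
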
