
NoSQL, Big Data, and Spark Foundations


Price: 5.00 USD | Size: 799 MB |   Duration : 9.04 Hours  | 93 Video Lessons

BRAND: Expert TRAINING | ENGLISH | INSTANT DOWNLOAD |

⭐️⭐️⭐️⭐️⭐️4.9


Description


IBM

NoSQL, Big Data, and Spark Foundations

Springboard your Big Data career. Master fundamentals of NoSQL, Big Data, and Apache Spark with hands-on job-ready skills in machine learning and data engineering.

What you’ll learn

  • Work with NoSQL databases to insert, update, delete, query, index, aggregate, and shard/partition data.

  • Develop hands-on NoSQL experience working with MongoDB, Apache Cassandra, and IBM Cloudant.

  • Develop foundational knowledge of Big Data and gain hands-on lab experience using Apache Hadoop, MapReduce, Apache Spark, Spark SQL, and Kubernetes.

  • Perform Extract, Transform and Load (ETL) processing and Machine Learning model training and deployment with Apache Spark.
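Sharding (partitioning), mentioned in the outcomes above, typically assigns each record to a shard by hashing its key. A minimal sketch in plain Python — the shard count and key format are illustrative, not taken from any particular database:

```python
import hashlib

def shard_for(key: str, num_shards: int = 4) -> int:
    """Map a record key to a shard by hashing (stable for a fixed shard count)."""
    digest = hashlib.md5(key.encode("utf-8")).hexdigest()
    return int(digest, 16) % num_shards

# Distribute some example document IDs across 4 shards.
docs = ["user:1", "user:2", "user:3", "user:4"]
placement = {doc: shard_for(doc) for doc in docs}
print(placement)
```

Because the placement is a pure function of the key, any node can locate a record without a central lookup table — the trade-off real systems manage is re-balancing when the shard count changes.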

Skills you’ll gain

  • Cloud Database
  • MongoDB
  • Cassandra
  • NoSQL
  • Cloudant
  • Machine Learning
  • Machine Learning Pipelines
  • Data Engineer
  • SparkML
  • Apache Spark
  • Big Data
  • SparkSQL
  • Apache Hadoop

Big Data Engineers and professionals with NoSQL skills are highly sought after in the data management industry. This Specialization is designed for those seeking to develop fundamental skills for working with Big Data, Apache Spark, and NoSQL databases. Three information-packed courses cover popular NoSQL databases like MongoDB and Apache Cassandra, the widely used Apache Hadoop ecosystem of Big Data tools, and the Apache Spark analytics engine for large-scale data processing.

You start with an overview of the various categories of NoSQL (Not only SQL) data repositories, then work hands-on with several of them, including IBM Cloudant, MongoDB, and Cassandra. You’ll perform various data management tasks, such as creating and replicating databases and inserting, updating, deleting, querying, indexing, aggregating, and sharding data. Next, you’ll gain fundamental knowledge of Big Data technologies such as Hadoop, MapReduce, HDFS, Hive, and HBase, followed by a more in-depth working knowledge of Apache Spark, Spark DataFrames, Spark SQL, PySpark, the Spark Application UI, and scaling Spark with Kubernetes. In the final course, you will learn to work with Spark Structured Streaming and Spark ML for performing Extract, Transform, and Load (ETL) processing and machine learning tasks.
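The MapReduce model covered in the middle course splits a job into a map phase (emit key–value pairs), a shuffle (group pairs by key), and a reduce phase (aggregate each group). A single-machine sketch of the classic word count in plain Python — Hadoop distributes these same phases across a cluster:

```python
from collections import defaultdict

def map_phase(lines):
    # Map: emit a (word, 1) pair for every word in every line.
    for line in lines:
        for word in line.lower().split():
            yield word, 1

def shuffle(pairs):
    # Shuffle: group all emitted values by key.
    grouped = defaultdict(list)
    for key, value in pairs:
        grouped[key].append(value)
    return grouped

def reduce_phase(grouped):
    # Reduce: sum the counts for each word.
    return {word: sum(counts) for word, counts in grouped.items()}

lines = ["big data big spark", "spark sql"]
counts = reduce_phase(shuffle(map_phase(lines)))
print(counts)  # {'big': 2, 'data': 1, 'spark': 2, 'sql': 1}
```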

This specialization is suitable for beginners in the fields of NoSQL and Big Data – whether you are, or are preparing to be, a Data Engineer, Software Developer, IT Architect, Data Scientist, or IT Manager.

Applied Learning Project

The emphasis in this specialization is on learning by doing. As such, each course includes hands-on labs to practice & apply the NoSQL and Big Data skills you learn during lectures.

In the first course, you will work hands-on with several NoSQL databases- MongoDB, Apache Cassandra, and IBM Cloudant to perform a variety of tasks: creating the database, adding documents, querying data, utilizing the HTTP API, performing Create, Read, Update & Delete (CRUD) operations, limiting & sorting records, indexing, aggregation, replication, using CQL shell, keyspace operations, & other table operations.
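The CRUD operations listed above follow the same pattern in every document store. As a neutral illustration (an in-memory stand-in, not MongoDB’s, Cassandra’s, or Cloudant’s actual API), the four operations can be sketched in plain Python:

```python
class DocumentStore:
    """Tiny in-memory stand-in for a document database (illustrative only)."""

    def __init__(self):
        self._docs = {}
        self._next_id = 1

    def create(self, doc):
        # Create: store a copy of the document under a fresh ID.
        doc_id = self._next_id
        self._next_id += 1
        self._docs[doc_id] = dict(doc)
        return doc_id

    def read(self, doc_id):
        # Read: return the document, or None if it does not exist.
        return self._docs.get(doc_id)

    def update(self, doc_id, changes):
        # Update: merge new fields into an existing document.
        if doc_id not in self._docs:
            return False
        self._docs[doc_id].update(changes)
        return True

    def delete(self, doc_id):
        # Delete: remove the document; report whether it existed.
        return self._docs.pop(doc_id, None) is not None

store = DocumentStore()
doc_id = store.create({"name": "spark course", "price": 5.00})
store.update(doc_id, {"price": 4.50})
print(store.read(doc_id))
store.delete(doc_id)
```

Real document databases add querying, indexing, and replication on top of this core, which is what the first course’s labs practice.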

In the next course, you’ll launch a Hadoop cluster using Docker and run MapReduce jobs. You’ll explore working with Spark using Jupyter notebooks on a Python kernel, build your Spark skills with DataFrames and Spark SQL, and scale your jobs using Kubernetes.

In the final course, you will use Spark for ETL processing and for Machine Learning model training and deployment using IBM Watson.
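The ETL pattern referred to above can be sketched without Spark: extract rows from a source, transform them, and load the result into a target. Plain Python with the standard library — the column names and the in-memory “target” are made up for illustration:

```python
import csv
import io

raw = "name,price\nspark course,5.00\nnosql course,5.00\n"

# Extract: parse rows out of the CSV source.
rows = list(csv.DictReader(io.StringIO(raw)))

# Transform: cast prices to float and uppercase the names.
transformed = [
    {"name": r["name"].upper(), "price": float(r["price"])}
    for r in rows
]

# Load: here, simply append into an in-memory "target" table.
target = []
target.extend(transformed)
print(target)
```

Spark applies the same extract/transform/load shape, but distributes each stage across a cluster and reads from sources like HDFS or object storage instead of an in-memory string.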

 

Introduction to NoSQL Databases

What you’ll learn

  • Differentiate among the four main categories of NoSQL repositories.

  • Describe the characteristics, features, benefits, limitations, and applications of the more popular Big Data processing tools.

  • Perform common MongoDB tasks, including create, read, update, and delete (CRUD) operations.

  • Execute keyspace, table, and CRUD operations in Cassandra.

Skills you’ll gain

  • Cloud Database
  • MongoDB
  • Cassandra
  • NoSQL
  • Cloudant

Introduction to Big Data with Spark and Hadoop

What you’ll learn

  • Explain the impact of big data, including use cases, tools, and processing methods.

  • Describe Apache Hadoop architecture, ecosystem, practices, and user-related applications, including Hive, HDFS, HBase, Spark, and MapReduce.

  • Apply Spark programming basics, including parallel programming basics for DataFrames, data sets, and Spark SQL.

  • Use Spark’s RDDs and data sets, optimize Spark SQL using Catalyst and Tungsten, and use Spark’s development and runtime environment options.
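RDD and Dataset transformations such as map and filter are lazy: nothing executes until an action like collect or reduce forces the pipeline. A plain-Python sketch of that idea using generators — not Spark’s actual API:

```python
from functools import reduce

# Lazy "transformations": generators defer all work until consumed.
numbers = range(1, 6)                       # source "RDD": 1..5
squared = (n * n for n in numbers)          # map
evens = (n for n in squared if n % 2 == 0)  # filter

# "Action": forcing the pipeline is what actually computes the values.
total = reduce(lambda a, b: a + b, evens)
print(total)  # 4 + 16 = 20
```

Spark exploits this laziness to plan and optimize the whole chain of transformations (via Catalyst, for DataFrames) before running anything on the cluster.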

Skills you’ll gain

  • Big Data
  • SparkSQL
  • SparkML
  • Apache Hadoop
  • Apache Spark

Machine Learning with Apache Spark

What you’ll learn

  • Describe ML, explain its role in data engineering, summarize generative AI, discuss Spark’s uses, and analyze ML pipelines and model persistence.

  • Evaluate ML models, distinguish between regression, classification, and clustering models, and compare data engineering pipelines with ML pipelines.

  • Construct the data analysis processes using Spark SQL, and perform regression, classification, and clustering using SparkML.

  • Demonstrate connecting to Spark clusters, build ML pipelines, perform feature extraction and transformation, and model persistence.
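An ML pipeline of the kind described above chains a feature transformation and a model into one fit/predict unit. A minimal plain-Python analogue — min-max scaling feeding a nearest-centroid classifier; the structure mirrors SparkML’s stages but is not its API:

```python
class MinMaxScaler:
    """Scale 1-D features into [0, 1] (illustrative stand-in for a pipeline stage)."""

    def fit(self, xs):
        self.lo, self.hi = min(xs), max(xs)
        return self

    def transform(self, xs):
        span = (self.hi - self.lo) or 1.0
        return [(x - self.lo) / span for x in xs]

class NearestCentroid:
    """Classify a value by the nearest class mean."""

    def fit(self, xs, ys):
        # One centroid (mean) per class label.
        self.centroids = {}
        for label in set(ys):
            vals = [x for x, y in zip(xs, ys) if y == label]
            self.centroids[label] = sum(vals) / len(vals)
        return self

    def predict(self, xs):
        return [min(self.centroids, key=lambda c: abs(x - self.centroids[c]))
                for x in xs]

# "Pipeline": scale features, then fit and predict with the model.
xs, ys = [1.0, 2.0, 9.0, 10.0], ["low", "low", "high", "high"]
scaler = MinMaxScaler().fit(xs)
model = NearestCentroid().fit(scaler.transform(xs), ys)
print(model.predict(scaler.transform([1.5, 9.5])))  # ['low', 'high']
```

The point of bundling the stages is model persistence: saving the fitted scaler together with the fitted model guarantees new data is transformed exactly as the training data was.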

Skills you’ll gain

  • Machine Learning Pipelines
  • Data Engineer
  • SparkML
  • Apache Spark
