Data Streaming and Real Time Data Processing Training Course
Course Overview
This course offers a practical and structured introduction to building real-time data streaming systems. It covers core concepts, architecture patterns, and industry-standard tools used to process continuous data at scale. Participants will learn how to design, implement, and optimize streaming pipelines using modern frameworks. The curriculum progresses from foundational ideas to hands-on applications, enabling learners to confidently build production-ready real-time solutions.
Training Format
• Instructor-led sessions with guided explanations
• Concept walkthroughs with real-world examples
• Hands-on demonstrations and coding exercises
• Progressive labs aligned with daily topics
• Interactive discussions and Q&A
Course Objectives
• Understand real-time data streaming concepts and system architecture
• Differentiate between batch and streaming data processing models
• Design scalable and fault-tolerant streaming pipelines
• Work with distributed streaming tools and frameworks
• Apply event time processing, windowing, and stateful operations
• Build and optimize real-time data solutions for business use cases
This course is available as onsite live training in France or online live training.
Course Outline
Day 1
• Introduction to data streaming concepts
• Batch vs. real-time processing fundamentals
• Event-driven architecture basics
• Common use cases in industry
• Overview of the streaming ecosystem
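The batch vs. real-time distinction covered on Day 1 can be illustrated with a minimal sketch (plain Python, no framework assumed): a batch job computes over a complete bounded dataset, while a streaming job maintains incremental state as unbounded events arrive.

```python
# Batch: compute over a complete, bounded dataset all at once.
def batch_average(values):
    return sum(values) / len(values)

# Streaming: maintain incremental state, updating the result per event.
class RunningAverage:
    def __init__(self):
        self.total = 0.0
        self.count = 0

    def update(self, value):
        self.count += 1
        self.total += value
        return self.total / self.count

stream = RunningAverage()
for v in (10, 20, 30):
    latest = stream.update(v)

assert latest == batch_average([10, 20, 30])  # both converge to 20.0
```

The streaming version produces an up-to-date answer after every event, which is the core trade-off explored throughout the course.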
Day 2
• Streaming architecture design patterns
• Fundamentals of distributed messaging systems
• Producers and consumers
• Topics, partitions, and data flow
• Data ingestion strategies
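The Day 2 concepts of topics, partitions, and keyed routing can be sketched in plain Python. This is an illustrative in-memory model, not a real broker API: records with the same key are hashed to the same partition, which is how Kafka-style systems preserve per-key ordering.

```python
import hashlib

class Topic:
    """A toy topic: one ordered log per partition, with key-based routing."""
    def __init__(self, name, num_partitions=3):
        self.name = name
        self.partitions = [[] for _ in range(num_partitions)]

    def produce(self, key, value):
        # Hash the key to pick a partition deterministically, so all
        # records for one key stay in order within one partition.
        idx = int(hashlib.md5(key.encode()).hexdigest(), 16) % len(self.partitions)
        self.partitions[idx].append((key, value))
        return idx

orders = Topic("orders", num_partitions=3)
p1 = orders.produce("customer-42", {"amount": 10})
p2 = orders.produce("customer-42", {"amount": 25})
assert p1 == p2  # same key always lands in the same partition
```

Consumers would then read each partition independently, which is what makes horizontal scaling of ingestion possible.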
Day 3
• Stream processing concepts and frameworks
• Event time vs. processing time
• Windowing techniques and use cases
• Stateful stream processing
• Fault tolerance and checkpointing basics
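Event time vs. processing time and windowing, covered on Day 3, can be sketched as follows. This toy function (not tied to any framework) assigns each event to a fixed, non-overlapping tumbling window based on the timestamp embedded in the event, so out-of-order arrival does not change the result.

```python
from collections import defaultdict

def tumbling_window_counts(events, window_size_s=60):
    """Count events per fixed window, keyed by *event time*
    (the timestamp carried in the event), not arrival time."""
    counts = defaultdict(int)
    for event_ts, _payload in events:
        window_start = (event_ts // window_size_s) * window_size_s
        counts[window_start] += 1
    return dict(counts)

# Events may arrive out of order; event-time windowing still buckets them correctly.
events = [(5, "a"), (30, "b"), (65, "c"), (61, "d")]
print(tumbling_window_counts(events))  # {0: 2, 60: 2}
```

Production frameworks add watermarks and state checkpointing on top of this idea to bound how long they wait for late events.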
Day 4
• Data transformation in streaming pipelines
• ETL and ELT in real-time systems
• Schema management and evolution
• Stream joins and enrichment
• Introduction to cloud-based streaming services
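Stream joins and enrichment, covered on Day 4, often take the form of a stream-table join: each event is enriched with reference data looked up by key. A minimal sketch (the field names here are illustrative):

```python
def enrich(stream, reference):
    """Stream-table join: merge reference data (e.g. a customer profile)
    into each event as it flows through the pipeline."""
    for event in stream:
        profile = reference.get(event["customer_id"], {})
        yield {**event, **profile}

customers = {"c1": {"tier": "gold"}, "c2": {"tier": "basic"}}
clicks = [
    {"customer_id": "c1", "page": "/home"},
    {"customer_id": "c2", "page": "/buy"},
]
out = list(enrich(clicks, customers))
# each click now carries the customer's tier alongside the original fields
```

In real systems the reference table is itself often a changelog stream, which is where schema management and evolution become important.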
Day 5
• Monitoring and observability in streaming systems
• Security and access control basics
• Performance tuning and optimization
• End-to-end pipeline design review
• Real-world use cases such as fraud detection and IoT processing
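The fraud-detection use case mentioned above typically reduces to a sliding-window velocity check: flag a key when too many events occur within a short window. A simplified sketch (thresholds and field names are illustrative):

```python
from collections import deque

class VelocityCheck:
    """Flag a card if more than `limit` transactions occur within `window_s` seconds."""
    def __init__(self, window_s=60, limit=3):
        self.window_s, self.limit = window_s, limit
        self.recent = {}  # card_id -> deque of recent timestamps

    def observe(self, card_id, ts):
        q = self.recent.setdefault(card_id, deque())
        q.append(ts)
        # Evict timestamps that have slid out of the window.
        while q and q[0] <= ts - self.window_s:
            q.popleft()
        return len(q) > self.limit  # True = suspicious

check = VelocityCheck(window_s=60, limit=3)
flags = [check.observe("card-1", t) for t in (0, 10, 20, 30)]
# the fourth transaction within the 60-second window exceeds the limit
```

This same per-key windowed-state pattern underlies IoT anomaly detection and rate limiting as well.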
Open Training Courses require 5+ participants.
NobleProg offers professional training programs designed specifically for companies and organizations. These trainings are not intended for individuals.
Testimonials (1)
Hands on exercises. Class should have been 5 days, but the 3 days helped to clear up a lot of questions that I had from working with NiFi already
James - BHG Financial
Course - Apache NiFi for Administrators
Related Courses
Administrator Training for Apache Hadoop
35 Hours
Target Audience:
This course is designed for IT professionals seeking robust solutions for storing and processing large-scale datasets within a distributed system environment.
Objective:
To provide in-depth knowledge and expertise in Hadoop cluster administration.
Big Data Analytics with Google Colab and Apache Spark
14 Hours
This instructor-led, live training in France (online or onsite) is aimed at intermediate-level data scientists and engineers who wish to use Google Colab and Apache Spark for big data processing and analytics.
By the end of this training, participants will be able to:
- Set up a big data environment using Google Colab and Spark.
- Process and analyze large datasets efficiently with Apache Spark.
- Visualize big data in a collaborative environment.
- Integrate Apache Spark with cloud-based tools.
Big Data Analytics in Health
21 Hours
Big data analytics refers to the systematic examination of vast and diverse datasets to uncover correlations, hidden patterns, and other valuable insights.
The healthcare sector generates enormous volumes of complex, heterogeneous medical and clinical data. Leveraging big data analytics on this information holds significant potential for deriving insights that enhance healthcare delivery. However, the sheer scale of these datasets presents substantial challenges for analysis and practical implementation within clinical environments.
In this instructor-led, live remote training, participants will learn how to conduct big data analytics in the health sector by working through a series of hands-on, live laboratory exercises.
By the end of this training, participants will be able to:
- Install and configure big data analytics tools such as Hadoop MapReduce and Spark
- Understand the characteristics of medical data
- Apply big data techniques to manage medical data
- Study big data systems and algorithms in the context of health applications
Audience
- Developers
- Data Scientists
Format of the Course
- A mix of lectures, discussions, exercises, and intensive hands-on practice.
Note
- To request customized training for this course, please contact us to arrange it.
Hadoop For Administrators
21 Hours
Apache Hadoop stands as the leading framework for processing Big Data across server clusters. In this comprehensive three-day (or optional four-day) course, participants will explore the business advantages and practical applications of Hadoop and its broader ecosystem. The curriculum covers cluster deployment planning, scalability strategies, as well as installation, maintenance, monitoring, troubleshooting, and optimization techniques. Attendees will engage in hands-on practice with bulk data loading, become acquainted with various Hadoop distributions, and learn to install and manage key Hadoop ecosystem tools. The course concludes with an in-depth discussion on securing clusters using Kerberos.
“…The materials were exceptionally well-prepared and thoroughly covered. The labs were very helpful and well-organized”
— Andrew Nguyen, Principal Integration DW Engineer, Microsoft Online Advertising
Audience
Professionals serving as Hadoop administrators.
Format
The course combines lectures with hands-on labs, maintaining an approximate balance of 60% lectures and 40% lab exercises.
Hadoop for Developers (4 days)
28 Hours
Apache Hadoop is the leading framework for processing Big Data across server clusters. This course introduces developers to the key components of the Hadoop ecosystem, including HDFS, MapReduce, Pig, Hive, and HBase.
Advanced Hadoop for Developers
21 Hours
Apache Hadoop stands as one of the leading frameworks for processing Big Data across server clusters. This course explores data management within HDFS, alongside advanced usage of Pig, Hive, and HBase. These sophisticated programming techniques are designed to benefit experienced Hadoop developers.
Audience: Developers
Duration: Three days
Format: Lectures (50%) and hands-on labs (50%).
Hadoop Administration on MapR
28 Hours
Target Audience:
This course aims to demystify big data and Hadoop technology, demonstrating that it is accessible and straightforward to understand.
Hadoop and Spark for Administrators
35 Hours
This instructor-led, live training in France (online or onsite) is designed for system administrators who wish to learn how to set up, deploy, and manage Hadoop clusters within their organization.
Upon completing this training, participants will be able to:
- Install and configure Apache Hadoop.
- Understand the four key components of the Hadoop ecosystem: HDFS, MapReduce, YARN, and Hadoop Common.
- Leverage the Hadoop Distributed File System (HDFS) to scale a cluster across hundreds or thousands of nodes.
- Configure HDFS to serve as the storage engine for on-premise Spark deployments.
- Configure Spark to access alternative storage solutions, including Amazon S3 and NoSQL databases such as Redis, Elasticsearch, Couchbase, Aerospike, and others.
- Perform essential administrative tasks, including provisioning, management, monitoring, and securing an Apache Hadoop cluster.
HBase for Developers
21 Hours
This course provides an introduction to HBase, a NoSQL database built on top of Hadoop. It is designed for developers who plan to build applications using HBase, as well as administrators responsible for managing HBase clusters.
We will guide developers through HBase’s architecture, data modeling techniques, and application development processes. The course also covers integrating MapReduce with HBase and addresses key administration topics related to performance optimization. With numerous hands-on lab exercises, this training offers a practical learning experience.
Duration: 3 days
Audience: Developers & Administrators
Apache NiFi for Administrators
21 Hours
Apache NiFi is an open-source platform designed for flow-based data integration and event processing. It facilitates automated, real-time data routing, transformation, and system mediation between disparate systems, featuring a web-based user interface and fine-grained control capabilities.
This instructor-led live training, available either onsite or remotely, is designed for intermediate-level administrators and engineers who aim to deploy, manage, secure, and optimize NiFi dataflows within production environments.
Upon completing this training, participants will be equipped to:
- Install, configure, and maintain Apache NiFi clusters.
- Design and manage dataflows originating from diverse sources and targets.
- Implement automation, routing, and transformation logic for data flows.
- Optimize performance, monitor operational health, and resolve issues.
Format of the Course
- Interactive lectures combined with discussions on real-world architectures.
- Hands-on labs focused on building, deploying, and managing data flows.
- Scenario-based exercises conducted in a live laboratory environment.
Course Customization Options
- To request a customized training session for this course, please contact us to arrange it.
Apache NiFi for Developers
7 Hours
In this instructor-led, live training in France, participants will learn the fundamentals of flow-based programming as they develop a number of demo extensions, components, and processors using Apache NiFi.
By the end of this training, participants will be able to:
- Understand NiFi's architecture and dataflow concepts.
- Develop extensions using NiFi and third-party APIs.
- Develop their own custom Apache NiFi processors.
- Ingest and process real-time data from disparate and uncommon file formats and data sources.
PySpark and Machine Learning
21 Hours
This course offers a hands-on introduction to creating scalable data processing and Machine Learning workflows with PySpark. Attendees will discover how Apache Spark functions within contemporary Big Data ecosystems and how to effectively manage large datasets by applying distributed computing principles.
Python and Spark for Big Data (PySpark)
21 Hours
In this instructor-led, live training in France, participants will discover how to leverage Python and Spark in unison to analyze big data through hands-on exercises.
Upon completion of this training, participants will be able to:
- Understand how to use Spark with Python to analyze Big Data.
- Complete exercises that simulate real-world scenarios.
- Utilize various tools and techniques for big data analysis using PySpark.
Python, Spark, and Hadoop for Big Data
21 Hours
This instructor-led, live training in France (online or onsite) is aimed at developers who wish to use and integrate Spark, Hadoop, and Python to process, analyze, and transform large and complex data sets.
By the end of this training, participants will be able to:
- Set up the necessary environment to start processing big data with Spark, Hadoop, and Python.
- Understand the features, core components, and architecture of Spark and Hadoop.
- Learn how to integrate Spark, Hadoop, and Python for big data processing.
- Explore the tools in the Spark ecosystem (Spark MLlib, Spark Streaming, Kafka, Sqoop, and Flume).
- Build collaborative filtering recommendation systems similar to Netflix, YouTube, Amazon, Spotify, and Google.
- Use Apache Mahout to scale machine learning algorithms.
Stratio: Rocket and Intelligence Modules with PySpark
14 Hours
Stratio is a data-centric platform that seamlessly integrates big data, AI, and governance into a single, unified solution. Its Rocket and Intelligence modules empower organizations with rapid data exploration, transformation, and advanced analytics capabilities tailored for enterprise environments.
This instructor-led live training, available both online and onsite, is designed for intermediate-level data professionals looking to master the Rocket and Intelligence modules within Stratio using PySpark. The curriculum focuses on leveraging looping structures, user-defined functions, and complex data logic to enhance workflow efficiency.
Upon completion of this training, participants will be equipped to:
- Navigate and effectively utilize the Stratio platform through its Rocket and Intelligence modules.
- Apply PySpark techniques for data ingestion, transformation, and analysis within the Stratio ecosystem.
- Implement loops and conditional logic to manage data workflows and streamline feature engineering tasks.
- Develop and manage user-defined functions (UDFs) to create reusable data operations in PySpark.
Course Format
- Engaging interactive lectures and discussions.
- Extensive exercises and practical practice sessions.
- Hands-on implementation in a live-lab environment.
Course Customization Options
- To request a customized training for this course, please contact us to arrange.