Online or onsite, instructor-led live GPU (Graphics Processing Unit) training courses demonstrate through interactive discussion and hands-on practice the fundamentals of GPUs and how to program them.
GPU training is available as "online live training" or "onsite live training". Online live training (aka "remote live training") is carried out by way of an interactive, remote desktop. Onsite live GPU trainings in Lyon can be carried out locally on customer premises or in NobleProg corporate training centers.
NobleProg -- Your Local Training Provider
Lyon, Swisslife Tower
NobleProg Lyon, 10 Place Charles Béraudier, Lyon, France, 69000
Located 200 meters from the Part-Dieu TGV train station, the Swisslife Tower is today one of the most distinctive buildings in this district of Lyon. The business center offers a perfect location for your training.
TGV Train Station
100 meters from Gare TGV Part-Dieu (Porte du Rhône exit)
Airport
30 minutes from Lyon Saint-Exupéry Airport (formerly Satolas)
Rhône Express from Saint-Exupéry Airport (terminus: Gare Part-Dieu)
Huawei Ascend is a family of AI processors designed for high-performance inference and training.
This instructor-led, live training (online or onsite) is aimed at intermediate-level AI engineers and data scientists who wish to develop and optimize neural network models using Huawei’s Ascend platform and the CANN toolkit.
By the end of this training, participants will be able to:
Set up and configure the CANN development environment.
Develop AI applications using MindSpore and CloudMatrix workflows.
Optimize performance on Ascend NPUs using custom operators and tiling.
Deploy models to edge or cloud environments.
Format of the Course
Interactive lecture and discussion.
Hands-on use of Huawei Ascend and CANN toolkit in sample applications.
Guided exercises focused on model building, training, and deployment.
Course Customization Options
To request a customized training for this course based on your infrastructure or datasets, please contact us to arrange.
Huawei’s AI stack — from the low-level CANN SDK to the high-level MindSpore framework — offers a tightly integrated AI development and deployment environment optimized for Ascend hardware.
This instructor-led, live training (online or onsite) is aimed at beginner-level to intermediate-level technical professionals who wish to understand how the CANN and MindSpore components work together to support AI lifecycle management and infrastructure decisions.
By the end of this training, participants will be able to:
Understand the layered architecture of Huawei’s AI compute stack.
Identify how CANN supports model optimization and hardware-level deployment.
Evaluate the MindSpore framework and toolchain in relation to industry alternatives.
Position Huawei's AI stack within enterprise or cloud/on-prem environments.
Format of the Course
Interactive lecture and discussion.
Live system demos and case-based walkthroughs.
Optional guided labs on model flow from MindSpore to CANN.
Course Customization Options
To request a customized training for this course, please contact us to arrange.
This instructor-led, live training in Lyon (online or onsite) is aimed at beginner-level to intermediate-level developers who wish to use OpenACC to program heterogeneous devices and exploit their parallelism.
By the end of this training, participants will be able to:
Set up an OpenACC development environment.
Write and run a basic OpenACC program.
Annotate code with OpenACC directives and clauses.
The CANN SDK (Compute Architecture for Neural Networks) provides powerful deployment and optimization tools for real-time AI applications in computer vision and NLP, especially on Huawei Ascend hardware.
This instructor-led, live training (online or onsite) is aimed at intermediate-level AI practitioners who wish to build, deploy, and optimize vision and language models using the CANN SDK for production use cases.
By the end of this training, participants will be able to:
Deploy and optimize CV and NLP models using CANN and AscendCL.
Use CANN tools to convert models and integrate them into live pipelines.
Optimize inference performance for tasks like detection, classification, and sentiment analysis.
Build real-time CV/NLP pipelines for edge or cloud-based deployment scenarios.
Format of the Course
Interactive lecture and demonstration.
Hands-on lab with model deployment and performance profiling.
Live pipeline design using real CV and NLP use cases.
Course Customization Options
To request a customized training for this course, please contact us to arrange.
This instructor-led, live training in Lyon (online or onsite) is aimed at beginner-level to intermediate-level developers who wish to learn the basics of GPU programming and the main frameworks and tools for developing GPU applications.
By the end of this training, participants will be able to:
Understand the difference between CPU and GPU computing, and the benefits and challenges of GPU programming.
Choose the right framework and tool for their GPU application.
Create a basic GPU program that performs vector addition using one or more of the frameworks and tools.
Use the respective APIs, languages, and libraries to query device information, allocate and deallocate device memory, copy data between host and device, launch kernels, and synchronize threads.
Use the respective memory spaces, such as global, local, constant, and private, to optimize data transfers and memory accesses.
Use the respective execution models, such as work-items, work-groups, threads, blocks, and grids, to control the parallelism.
Debug and test GPU programs using tools such as CodeXL, CUDA-GDB, CUDA-MEMCHECK, and NVIDIA Nsight.
Optimize GPU programs using techniques such as coalescing, caching, prefetching, and profiling.
CANN TIK (Tensor Instruction Kernel) and Apache TVM enable advanced optimization and customization of AI model operators for Huawei Ascend hardware.
This instructor-led, live training (online or onsite) is aimed at advanced-level system developers who wish to build, deploy, and tune custom operators for AI models using CANN’s TIK programming model and TVM compiler integration.
By the end of this training, participants will be able to:
Write and test custom AI operators using the TIK DSL for Ascend processors.
Integrate custom ops into the CANN runtime and execution graph.
Use TVM for operator scheduling, auto-tuning, and benchmarking.
Debug and optimize instruction-level performance for custom computation patterns.
Format of the Course
Interactive lecture and demonstration.
Hands-on coding of operators using TIK and TVM pipelines.
Testing and tuning on Ascend hardware or simulators.
Course Customization Options
To request a customized training for this course, please contact us to arrange.
This instructor-led, live training in Lyon (online or onsite) is aimed at beginner-level to intermediate-level developers who wish to use different frameworks for GPU programming and compare their features, performance, and compatibility.
By the end of this training, participants will be able to:
Set up a development environment that includes OpenCL SDK, CUDA Toolkit, ROCm Platform, a device that supports OpenCL, CUDA, or ROCm, and Visual Studio Code.
Create a basic GPU program that performs vector addition using OpenCL, CUDA, and ROCm, and compare the syntax, structure, and execution of each framework.
Use the respective APIs to query device information, allocate and deallocate device memory, copy data between host and device, launch kernels, and synchronize threads.
Use the respective languages to write kernels that execute on the device and manipulate data.
Use the respective built-in functions, variables, and libraries to perform common tasks and operations.
Use the respective memory spaces, such as global, local, constant, and private, to optimize data transfers and memory accesses.
Use the respective execution models to control the threads, blocks, and grids that define the parallelism.
Debug and test GPU programs using tools such as CodeXL, CUDA-GDB, CUDA-MEMCHECK, and NVIDIA Nsight.
Optimize GPU programs using techniques such as coalescing, caching, prefetching, and profiling.
CloudMatrix is Huawei’s unified AI development and deployment platform designed to support scalable, production-grade inference pipelines.
This instructor-led, live training (online or onsite) is aimed at beginner-level to intermediate-level AI professionals who wish to deploy and monitor AI models using the CloudMatrix platform with CANN and MindSpore integration.
By the end of this training, participants will be able to:
Use CloudMatrix for model packaging, deployment, and serving.
Convert and optimize models for Ascend chipsets.
Set up pipelines for real-time and batch inference tasks.
Monitor deployments and tune performance in production settings.
Format of the Course
Interactive lecture and discussion.
Hands-on use of CloudMatrix with real deployment scenarios.
Guided exercises focused on conversion, optimization, and scaling.
Course Customization Options
To request a customized training for this course based on your AI infrastructure or cloud environment, please contact us to arrange.
Huawei's Ascend CANN toolkit enables powerful AI inference on edge devices such as the Ascend 310. CANN provides essential tools for compiling, optimizing, and deploying models where compute and memory are constrained.
This instructor-led, live training (online or onsite) is aimed at intermediate-level AI developers and integrators who wish to deploy and optimize models on Ascend edge devices using the CANN toolchain.
By the end of this training, participants will be able to:
Prepare and convert AI models for Ascend 310 using CANN tools.
Build lightweight inference pipelines using MindSpore Lite and AscendCL.
Optimize model performance for limited compute and memory environments.
Deploy and monitor AI applications in real-world edge use cases.
Format of the Course
Interactive lecture and demonstration.
Hands-on lab work with edge-specific models and scenarios.
Live deployment examples on virtual or physical edge hardware.
Course Customization Options
To request a customized training for this course, please contact us to arrange.
This instructor-led, live training in Lyon (online or onsite) is aimed at beginner-level to intermediate-level developers who wish to install and use ROCm on Windows to program AMD GPUs and exploit their parallelism.
By the end of this training, participants will be able to:
Set up a development environment that includes the ROCm Platform, an AMD GPU, and Visual Studio Code on Windows.
Create a basic ROCm program that performs vector addition on the GPU and retrieves the results from the GPU memory.
Use ROCm API to query device information, allocate and deallocate device memory, copy data between host and device, launch kernels, and synchronize threads.
Use HIP language to write kernels that execute on the GPU and manipulate data.
Use HIP built-in functions, variables, and libraries to perform common tasks and operations.
Use ROCm and HIP memory spaces, such as global, shared, constant, and local, to optimize data transfers and memory accesses.
Use ROCm and HIP execution models to control the threads, blocks, and grids that define the parallelism.
Debug and test ROCm and HIP programs using tools such as ROCm Debugger and ROCm Profiler.
Optimize ROCm and HIP programs using techniques such as coalescing, caching, prefetching, and profiling.
This instructor-led, live training in Lyon (online or onsite) is aimed at beginner-level to intermediate-level developers who wish to use ROCm and HIP to program AMD GPUs and exploit their parallelism.
By the end of this training, participants will be able to:
Set up a development environment that includes the ROCm Platform, an AMD GPU, and Visual Studio Code.
Create a basic ROCm program that performs vector addition on the GPU and retrieves the results from the GPU memory.
Use ROCm API to query device information, allocate and deallocate device memory, copy data between host and device, launch kernels, and synchronize threads.
Use HIP language to write kernels that execute on the GPU and manipulate data.
Use HIP built-in functions, variables, and libraries to perform common tasks and operations.
Use ROCm and HIP memory spaces, such as global, shared, constant, and local, to optimize data transfers and memory accesses.
Use ROCm and HIP execution models to control the threads, blocks, and grids that define the parallelism.
Debug and test ROCm and HIP programs using tools such as ROCm Debugger and ROCm Profiler.
Optimize ROCm and HIP programs using techniques such as coalescing, caching, prefetching, and profiling.
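The objectives above can be illustrated with the standard vector-addition exercise in HIP. This is a hedged sketch, not a reference implementation: it assumes a working ROCm installation and compilation with `hipcc`, and the array size and launch geometry are illustrative.

```cpp
#include <cstdio>
#include <hip/hip_runtime.h>

// Kernel: each thread computes one element of the sum.
__global__ void vecAdd(const float *a, const float *b, float *c, int n) {
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n) c[i] = a[i] + b[i];
}

int main() {
    const int n = 1024;
    size_t bytes = n * sizeof(float);
    float h_a[n], h_b[n], h_c[n];
    for (int i = 0; i < n; ++i) { h_a[i] = (float)i; h_b[i] = (float)i; }

    // Allocate device memory and copy inputs host-to-device.
    float *d_a, *d_b, *d_c;
    hipMalloc(&d_a, bytes); hipMalloc(&d_b, bytes); hipMalloc(&d_c, bytes);
    hipMemcpy(d_a, h_a, bytes, hipMemcpyHostToDevice);
    hipMemcpy(d_b, h_b, bytes, hipMemcpyHostToDevice);

    // Launch: enough 256-thread blocks to cover n elements.
    hipLaunchKernelGGL(vecAdd, dim3((n + 255) / 256), dim3(256), 0, 0,
                       d_a, d_b, d_c, n);
    hipDeviceSynchronize();

    // Copy the result back and inspect one element.
    hipMemcpy(h_c, d_c, bytes, hipMemcpyDeviceToHost);
    printf("h_c[10] = %.1f\n", h_c[10]);  // 10 + 10 = 20.0

    hipFree(d_a); hipFree(d_b); hipFree(d_c);
    return 0;
}
```

Note how closely the HIP API mirrors CUDA (`hipMalloc`/`cudaMalloc`, `hipMemcpy`/`cudaMemcpy`), which is what makes porting between the two stacks largely mechanical.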
CANN (Compute Architecture for Neural Networks) is Huawei’s AI computing toolkit used to compile, optimize, and deploy AI models on Ascend AI processors.
This instructor-led, live training (online or onsite) is aimed at beginner-level AI developers who wish to understand how CANN fits into the model lifecycle from training to deployment, and how it works with frameworks like MindSpore, TensorFlow, and PyTorch.
By the end of this training, participants will be able to:
Understand the purpose and architecture of the CANN toolkit.
Set up a development environment with CANN and MindSpore.
Convert and deploy a simple AI model to Ascend hardware.
Gain foundational knowledge for future CANN optimization or integration projects.
Format of the Course
Interactive lecture and discussion.
Hands-on labs with simple model deployment.
Step-by-step walkthrough of the CANN toolchain and integration points.
Course Customization Options
To request a customized training for this course, please contact us to arrange.
Ascend, Biren, and Cambricon are leading AI hardware platforms in China, each offering unique acceleration and profiling tools for production-scale AI workloads.
This instructor-led, live training (online or onsite) is aimed at advanced-level AI infrastructure and performance engineers who wish to optimize model inference and training workflows across multiple Chinese AI chip platforms.
By the end of this training, participants will be able to:
Benchmark models on Ascend, Biren, and Cambricon platforms.
Identify system bottlenecks and memory/compute inefficiencies.
Apply graph-level, kernel-level, and operator-level optimizations.
Tune deployment pipelines to improve throughput and latency.
Format of the Course
Interactive lecture and discussion.
Hands-on use of profiling and optimization tools on each platform.
Guided exercises focused on practical tuning scenarios.
Course Customization Options
To request a customized training for this course based on your performance environment or model type, please contact us to arrange.
CANN SDK (Compute Architecture for Neural Networks) is Huawei’s AI compute foundation that allows developers to fine-tune and optimize the performance of deployed neural networks on Ascend AI processors.
This instructor-led, live training (online or onsite) is aimed at advanced-level AI developers and system engineers who wish to optimize inference performance using CANN’s advanced toolset, including the Graph Engine, TIK, and custom operator development.
By the end of this training, participants will be able to:
Understand CANN's runtime architecture and performance lifecycle.
Use profiling tools and Graph Engine for performance analysis and optimization.
Create and optimize custom operators using TIK and TVM.
Resolve memory bottlenecks and improve model throughput.
Format of the Course
Interactive lecture and discussion.
Hands-on labs with real-time profiling and operator tuning.
Optimization exercises using edge-case deployment examples.
Course Customization Options
To request a customized training for this course, please contact us to arrange.
Chinese GPU architectures such as Huawei Ascend, Biren, and Cambricon MLUs offer CUDA alternatives tailored for local AI and HPC markets.
This instructor-led, live training (online or onsite) is aimed at advanced-level GPU programmers and infrastructure specialists who wish to migrate and optimize existing CUDA applications for deployment on Chinese hardware platforms.
By the end of this training, participants will be able to:
Evaluate compatibility of existing CUDA workloads with Chinese chip alternatives.
Port CUDA codebases to Huawei CANN, Biren SDK, and Cambricon BANGPy environments.
Compare performance and identify optimization points across platforms.
Address practical challenges in cross-architecture support and deployment.
Format of the Course
Interactive lecture and discussion.
Hands-on code translation and performance comparison labs.
Guided exercises focused on multi-GPU adaptation strategies.
Course Customization Options
To request a customized training for this course based on your platform or CUDA project, please contact us to arrange.
This instructor-led, live training in Lyon (online or onsite) is aimed at beginner-level to intermediate-level developers who wish to use CUDA to program NVIDIA GPUs and exploit their parallelism.
By the end of this training, participants will be able to:
Set up a development environment that includes the CUDA Toolkit, an NVIDIA GPU, and Visual Studio Code.
Create a basic CUDA program that performs vector addition on the GPU and retrieves the results from the GPU memory.
Use CUDA API to query device information, allocate and deallocate device memory, copy data between host and device, launch kernels, and synchronize threads.
Use CUDA C/C++ language to write kernels that execute on the GPU and manipulate data.
Use CUDA built-in functions, variables, and libraries to perform common tasks and operations.
Use CUDA memory spaces, such as global, shared, constant, and local, to optimize data transfers and memory accesses.
Use CUDA execution model to control the threads, blocks, and grids that define the parallelism.
Debug and test CUDA programs using tools such as CUDA-GDB, CUDA-MEMCHECK, and NVIDIA Nsight.
Optimize CUDA programs using techniques such as coalescing, caching, prefetching, and profiling.
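The objectives above trace the standard CUDA vector-addition walkthrough: allocate, copy, launch, synchronize, copy back. A minimal sketch, assuming the CUDA Toolkit and an NVIDIA GPU (compile with `nvcc`); the problem size and launch configuration are illustrative.

```cpp
#include <cstdio>
#include <cuda_runtime.h>

// Kernel: one thread per element, with a bounds check for the final block.
__global__ void vecAdd(const float *a, const float *b, float *c, int n) {
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n) c[i] = a[i] + b[i];
}

int main() {
    const int n = 1 << 20;
    size_t bytes = n * sizeof(float);
    float *h_a = new float[n], *h_b = new float[n], *h_c = new float[n];
    for (int i = 0; i < n; ++i) { h_a[i] = 1.0f; h_b[i] = 2.0f; }

    // Allocate device (global) memory and copy inputs host-to-device.
    float *d_a, *d_b, *d_c;
    cudaMalloc(&d_a, bytes); cudaMalloc(&d_b, bytes); cudaMalloc(&d_c, bytes);
    cudaMemcpy(d_a, h_a, bytes, cudaMemcpyHostToDevice);
    cudaMemcpy(d_b, h_b, bytes, cudaMemcpyHostToDevice);

    // Grid of enough 256-thread blocks to cover all n elements.
    int threads = 256, blocks = (n + threads - 1) / threads;
    vecAdd<<<blocks, threads>>>(d_a, d_b, d_c, n);
    cudaDeviceSynchronize();  // wait for the kernel to finish

    // Retrieve the result from device memory.
    cudaMemcpy(h_c, d_c, bytes, cudaMemcpyDeviceToHost);
    printf("h_c[0] = %.1f\n", h_c[0]);  // 1.0 + 2.0 = 3.0

    cudaFree(d_a); cudaFree(d_b); cudaFree(d_c);
    delete[] h_a; delete[] h_b; delete[] h_c;
    return 0;
}
```

The `<<<blocks, threads>>>` launch configuration defines the grid of thread blocks; the `i < n` check keeps the last, partially filled block from writing out of bounds.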
CANN (Compute Architecture for Neural Networks) is Huawei’s AI compute stack for deploying and optimizing AI models on Ascend AI processors.
This instructor-led, live training (online or onsite) is aimed at intermediate-level AI developers and engineers who wish to deploy trained AI models efficiently to Huawei Ascend hardware using the CANN toolkit and tools such as MindSpore, TensorFlow, or PyTorch.
By the end of this training, participants will be able to:
Understand the CANN architecture and its role in the AI deployment pipeline.
Convert and adapt models from popular frameworks to Ascend-compatible formats.
Use tools like ATC, OM model conversion, and MindSpore for edge and cloud inference.
Diagnose deployment issues and optimize performance on Ascend hardware.
Format of the Course
Interactive lecture and demonstration.
Hands-on lab work using CANN tools and Ascend simulators or devices.
Practical deployment scenarios based on real-world AI models.
Course Customization Options
To request a customized training for this course, please contact us to arrange.
Biren AI Accelerators are high-performance GPUs designed for AI and HPC workloads with support for large-scale training and inference.
This instructor-led, live training (online or onsite) is aimed at intermediate-level to advanced-level developers who wish to program and optimize applications using Biren’s proprietary GPU stack, with practical comparisons to CUDA-based environments.
By the end of this training, participants will be able to:
Understand Biren GPU architecture and memory hierarchy.
Set up the development environment and use Biren’s programming model.
Translate and optimize CUDA-style code for Biren platforms.
Apply performance tuning and debugging techniques.
Format of the Course
Interactive lecture and discussion.
Hands-on use of Biren SDK in sample GPU workloads.
Guided exercises focused on porting and performance tuning.
Course Customization Options
To request a customized training for this course based on your application stack or integration needs, please contact us to arrange.
Cambricon MLUs (Machine Learning Units) are specialized AI chips optimized for inference and training in edge and datacenter scenarios.
This instructor-led, live training (online or onsite) is aimed at intermediate-level developers who wish to build and deploy AI models using the BANGPy framework and Neuware SDK on Cambricon MLU hardware.
By the end of this training, participants will be able to:
Set up and configure the BANGPy and Neuware development environments.
Develop and optimize Python- and C++-based models for Cambricon MLUs.
Deploy models to edge and data center devices running Neuware runtime.
Integrate ML workflows with MLU-specific acceleration features.
Format of the Course
Interactive lecture and discussion.
Hands-on use of BANGPy and Neuware for development and deployment.
Guided exercises focused on optimization, integration, and testing.
Course Customization Options
To request a customized training for this course based on your Cambricon device model or use case, please contact us to arrange.
This instructor-led, live training in Lyon (online or onsite) is aimed at beginner-level system administrators and IT professionals who wish to install, configure, manage, and troubleshoot CUDA environments.
By the end of this training, participants will be able to:
Understand the architecture, components, and capabilities of CUDA.
This instructor-led, live training in Lyon (online or onsite) is aimed at beginner-level to intermediate-level developers who wish to use OpenCL to program heterogeneous devices and exploit their parallelism.
By the end of this training, participants will be able to:
Set up a development environment that includes OpenCL SDK, a device that supports OpenCL, and Visual Studio Code.
Create a basic OpenCL program that performs vector addition on the device and retrieves the results from the device memory.
Use OpenCL API to query device information, create contexts, command queues, buffers, kernels, and events.
Use OpenCL C language to write kernels that execute on the device and manipulate data.
Use OpenCL built-in functions, extensions, and libraries to perform common tasks and operations.
Use OpenCL host and device memory models to optimize data transfers and memory accesses.
Use OpenCL execution model to control the work-items, work-groups, and ND-ranges.
Debug and test OpenCL programs using tools such as CodeXL, Intel VTune, and NVIDIA Nsight.
Optimize OpenCL programs using techniques such as vectorization, loop unrolling, local memory, and profiling.
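The steps above (context, command queue, buffers, kernel, ND-range) can be sketched in one host program. This assumes an OpenCL SDK and at least one available device, and omits error checking for brevity; it is an illustrative sketch, not production code.

```cpp
#include <cstdio>
#include <CL/cl.h>

// OpenCL C kernel source, compiled at runtime for the chosen device.
static const char *src =
    "__kernel void vec_add(__global const float *a, __global const float *b,\n"
    "                      __global float *c) {\n"
    "    int i = get_global_id(0);\n"
    "    c[i] = a[i] + b[i];\n"
    "}\n";

int main() {
    const int n = 1024;
    float a[n], b[n], c[n];
    for (int i = 0; i < n; ++i) { a[i] = (float)i; b[i] = 2.0f * i; }

    // Discover a platform and device, then create a context and queue.
    cl_platform_id plat; cl_device_id dev;
    clGetPlatformIDs(1, &plat, nullptr);
    clGetDeviceIDs(plat, CL_DEVICE_TYPE_DEFAULT, 1, &dev, nullptr);
    cl_context ctx = clCreateContext(nullptr, 1, &dev, nullptr, nullptr, nullptr);
    cl_command_queue q = clCreateCommandQueueWithProperties(ctx, dev, nullptr, nullptr);

    // Build the kernel and create device buffers (inputs copied from host).
    cl_program prog = clCreateProgramWithSource(ctx, 1, &src, nullptr, nullptr);
    clBuildProgram(prog, 1, &dev, nullptr, nullptr, nullptr);
    cl_kernel k = clCreateKernel(prog, "vec_add", nullptr);
    cl_mem da = clCreateBuffer(ctx, CL_MEM_READ_ONLY | CL_MEM_COPY_HOST_PTR, sizeof a, a, nullptr);
    cl_mem db = clCreateBuffer(ctx, CL_MEM_READ_ONLY | CL_MEM_COPY_HOST_PTR, sizeof b, b, nullptr);
    cl_mem dc = clCreateBuffer(ctx, CL_MEM_WRITE_ONLY, sizeof c, nullptr, nullptr);

    // Set arguments, enqueue over an ND-range of n work-items, read back.
    clSetKernelArg(k, 0, sizeof da, &da);
    clSetKernelArg(k, 1, sizeof db, &db);
    clSetKernelArg(k, 2, sizeof dc, &dc);
    size_t global = n;
    clEnqueueNDRangeKernel(q, k, 1, nullptr, &global, nullptr, 0, nullptr, nullptr);
    clEnqueueReadBuffer(q, dc, CL_TRUE, 0, sizeof c, c, 0, nullptr, nullptr);

    printf("c[100] = %.1f\n", c[100]);  // 100 + 200 = 300.0
    return 0;
}
```

Unlike CUDA, the kernel is compiled at runtime for whichever device is found, which is what gives OpenCL its portability across GPU vendors and even CPUs.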
This instructor-led, live training in Lyon (online or onsite) is aimed at intermediate-level developers who wish to use CUDA to build Python applications that run in parallel on NVIDIA GPUs.
By the end of this training, participants will be able to:
Use the Numba compiler to accelerate Python applications running on NVIDIA GPUs.
Create, compile and launch custom CUDA kernels.
Manage GPU memory.
Convert a CPU based application into a GPU-accelerated application.
This instructor-led, live training course in Lyon covers how to program GPUs for parallel computing, how to use various platforms, how to work with the CUDA platform and its features, and how to perform various optimization techniques using CUDA. Some of the applications include deep learning, analytics, image processing and engineering applications.
Testimonials (2)
Very interactive with various examples, with a good progression in complexity between the start and the end of the training.
Jenny - Andheo
Course - GPU Programming with CUDA and Python
Trainer's energy and humor.
Tadeusz Kaluba - Nokia Solutions and Networks Sp. z o.o.