Open Source Generative AI

Vendor: AI

Upcoming classes: See Class Calendar

Class Overview

Learn how to write practical AI applications through hands-on labs. You will design and develop Transformer models while keeping your data secure.

This 4-day course covers Transformer architectures, Python programming, hardware requirements, training techniques, and AI tasks such as classification and regression. It includes hands-on exercises with open-source LLM frameworks, along with advanced topics such as fine-tuning and quantization.

Ideal for Python Developers, DevSecOps Engineers, Project Managers, Data Acquisition Specialists, Architects, and Directors.

This course requires basic Python skills and provides access to a GPU-accelerated server for practical experience.

Class Details

Objectives


After taking this training, the student should be able to:
- Train and optimize Transformer models with PyTorch.
- Explain advanced prompt engineering.
- Identify AI architectures, especially the Transformer.
- Write a real-world AI web application.
- Describe tokenization and word embeddings.
- Install and use open-source models and frameworks such as Llama 2 and llama.cpp.
- Apply strategies to maximize model performance.
- Describe model quantization and fine-tuning.
- Compare CPU vs. GPU hardware acceleration.
- Identify chat vs. instruct interaction modes.

Prerequisite Knowledge Advisory

Before taking this class, you should have:
- Working knowledge of Python (e.g., PCEP certification or equivalent experience)
- Familiarity with Linux

Required Exam for Gen AI Certification


Exam: (contact us)

Open Source Generative AI Training Class Outline


Module 1: Deep Learning Introduction
- Lecture: What is Intelligence?
- Lecture: Generative AI Unveiled
- Lecture: The Transformer Model
- Lecture: Feed Forward Neural Networks
- Lecture + Lab: Tokenization
- Lecture + Lab: Word Embeddings
- Lecture + Lab: Positional Encoding
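
The tokenization, word-embedding, and positional-encoding labs above fit together in a few lines of PyTorch. The following is a toy, character-level sketch of the ideas, not the course lab code:

    import math
    import torch
    import torch.nn as nn

    text = "open source generative ai"
    vocab = sorted(set(text))                         # character-level vocabulary
    stoi = {ch: i for i, ch in enumerate(vocab)}      # map characters to token ids
    tokens = torch.tensor([stoi[ch] for ch in text])  # tokenization

    d_model = 16
    embed = nn.Embedding(len(vocab), d_model)         # token ("word") embeddings
    x = embed(tokens)                                 # shape: (seq_len, d_model)

    # Sinusoidal positional encoding, added to the embeddings
    pos = torch.arange(len(tokens)).unsqueeze(1)
    div = torch.exp(torch.arange(0, d_model, 2) * (-math.log(10000.0) / d_model))
    pe = torch.zeros(len(tokens), d_model)
    pe[:, 0::2] = torch.sin(pos * div)
    pe[:, 1::2] = torch.cos(pos * div)
    x = x + pe                                        # embeddings + positions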

Module 2: Building a Transformer Model from Scratch
- Lecture: PyTorch
- Lecture + Lab: Construct a Tensor from a Dataset
- Lecture + Lab: Orchestrate Tensors in Blocks and Batches
- Lecture + Lab: Initialize PyTorch Generator Function
- Lecture + Lab: Train the Transformer Model
- Lecture + Lab: Apply Positional Encoding and Self-Attention
- Lecture + Lab: Attach the Feed Forward Neural Network
- Lecture + Lab: Build the Decoder Block
- Lecture + Lab: Transformer Model as Code
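
A condensed sketch of what this module assembles: a decoder block that applies masked (causal) self-attention followed by a feed-forward network, with residual connections and layer normalization. The dimensions and single attention head here are illustrative simplifications, not the exact lab model:

    import torch
    import torch.nn as nn
    import torch.nn.functional as F

    class DecoderBlock(nn.Module):
        def __init__(self, d_model=64, d_ff=256):
            super().__init__()
            self.key = nn.Linear(d_model, d_model, bias=False)
            self.query = nn.Linear(d_model, d_model, bias=False)
            self.value = nn.Linear(d_model, d_model, bias=False)
            self.ffwd = nn.Sequential(
                nn.Linear(d_model, d_ff), nn.ReLU(), nn.Linear(d_ff, d_model))
            self.ln1 = nn.LayerNorm(d_model)
            self.ln2 = nn.LayerNorm(d_model)

        def forward(self, x):                            # x: (batch, time, d_model)
            B, T, C = x.shape
            k, q, v = self.key(x), self.query(x), self.value(x)
            wei = q @ k.transpose(-2, -1) * C**-0.5      # scaled dot-product scores
            mask = torch.tril(torch.ones(T, T, device=x.device)).bool()
            wei = wei.masked_fill(~mask, float("-inf"))  # causal (decoder) mask
            wei = F.softmax(wei, dim=-1)
            x = self.ln1(x + wei @ v)                    # residual + attention
            return self.ln2(x + self.ffwd(x))            # residual + feed-forward

    out = DecoderBlock()(torch.randn(4, 8, 64))          # batch of 4, block size 8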

Module 3: Prompt Engineering
- Lecture: Introduction to Prompt Engineering
- Lecture + Lab: Getting Started with Gemini
- Lecture + Lab: Developing Basic Prompts
- Lecture + Lab: Intermediate Prompts: Define Task/Inputs/Outputs/Constraints/Style
- Lecture + Lab: Advanced Prompts: Chaining, Set Role, Feedback, Examples
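
For illustration, an intermediate-level prompt in the Task/Inputs/Outputs/Constraints/Style form practiced in this module might look like this (the scenario is invented for the example):

    # One prompt, spelled out as Task / Inputs / Outputs / Constraints / Style.
    prompt = """Task: Summarize the incident report below for an executive audience.
    Inputs: The raw incident report between the <report> tags.
    Outputs: At most five bullet points.
    Constraints: Do not include hostnames or IP addresses.
    Style: Neutral, non-technical language.

    <report>
    {report_text}
    </report>"""

    print(prompt.format(report_text="...paste report text here..."))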

Module 4: Hardware Requirements
- Lecture: The GPU's Role in AI Performance (CPU vs. GPU)
- Lecture: Current GPUs and Cost vs Value
- Lecture: Tensor Core vs Older GPU Architectures
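
A quick way to feel the CPU-vs-GPU difference covered in this module is to time the same matrix multiplication on both devices. A rough sketch (absolute numbers depend entirely on your hardware):

    import time
    import torch

    a = torch.randn(4096, 4096)
    b = torch.randn(4096, 4096)

    start = time.time()
    a @ b                                      # matrix multiply on the CPU
    print(f"CPU matmul: {time.time() - start:.3f}s")

    if torch.cuda.is_available():              # GPU path only if CUDA is present
        a, b = a.cuda(), b.cuda()
        torch.cuda.synchronize()
        start = time.time()
        a @ b                                  # same multiply on the GPU
        torch.cuda.synchronize()               # wait for the GPU to finish
        print(f"GPU matmul: {time.time() - start:.3f}s")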

Module 5: Pre-trained Large Language Model (LLM)
- Lecture: A History of Neural Network Architectures
- Lecture: Introduction to the llama.cpp Interface
- Lecture: Preparing an NVIDIA A100 GPU for Server Operations
- Lecture + Lab: Operate LLaMA2 Models with llama.cpp
- Lecture + Lab: Selecting a Quantization Level to Meet Performance and Perplexity Requirements
- Lecture: Running the llama.cpp Package
- Lecture + Lab: LLaMA Interactive Mode
- Lecture + Lab: Persistent Context with LLaMA
- Lecture + Lab: Constraining Output with Grammars
- Lecture + Lab: Deploy LLaMA API Server
- Lecture + Lab: Develop LLaMA Client Application
- Lecture + Lab: Write a Real-World AI Application Using the LLaMA API
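
A minimal client for the deployed LLaMA API server might look like the sketch below, assuming a llama.cpp server is already running on its default port 8080; the endpoint and field names follow the llama.cpp HTTP server and should be checked against the version used in class:

    import requests

    # Ask the local llama.cpp server for a short completion.
    resp = requests.post(
        "http://localhost:8080/completion",
        json={"prompt": "Explain quantization in one sentence.", "n_predict": 64},
        timeout=120,
    )
    print(resp.json()["content"])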

Module 6: Fine-Tuning
- Lecture + Lab: Using PyTorch to Fine-Tune Models
- Lecture + Lab: Advanced Prompt Engineering Techniques
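
The fine-tuning lab follows the standard PyTorch pattern of freezing pre-trained weights and training only a new task head. A toy-scale sketch of that pattern (the backbone and data are stand-ins, not the course's LLMs):

    import torch
    import torch.nn as nn

    # Stand-in "pre-trained" backbone and a fresh task-specific head.
    backbone = nn.Sequential(nn.Embedding(100, 32), nn.Flatten(1), nn.Linear(32 * 8, 64))
    head = nn.Linear(64, 2)
    model = nn.Sequential(backbone, head)

    for p in backbone.parameters():            # freeze the pre-trained weights
        p.requires_grad = False

    opt = torch.optim.AdamW(head.parameters(), lr=1e-3)
    loss_fn = nn.CrossEntropyLoss()

    x = torch.randint(0, 100, (16, 8))         # toy batch of token ids
    y = torch.randint(0, 2, (16,))             # toy labels
    for step in range(100):                    # short fine-tuning loop
        opt.zero_grad()
        loss = loss_fn(model(x), y)
        loss.backward()
        opt.step()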

Module 7: Testing and Pushing Limits
- Lecture + Lab: Maximizing Model Performance
