Mydra logo
Artificial Intelligence
DeepLearning.AI logo

DeepLearning.AI

Quality and Safety for LLM Applications

  • up to 1 hour
  • Beginner

This course addresses the critical safety and quality concerns in LLM applications. Learn to monitor and enhance security measures, detect and prevent threats, and explore real-world scenarios to safeguard your LLM applications.

  • Security measures
  • Hallucination detection
  • Jailbreak detection
  • Data leakage identification
  • Monitoring systems

Overview

In this course, you will explore new metrics and best practices to monitor your LLM systems and ensure safety and quality. You will learn to identify hallucinations, detect jailbreaks, and identify data leakage. Additionally, you will build your own monitoring system to evaluate app safety and security over time. By the end of the course, you will be able to identify common security concerns in LLM-based applications and customize your safety and security evaluation tools.

  • Web Streamline Icon: https://streamlinehq.com
    Online
    course location
  • Layers 1 Streamline Icon: https://streamlinehq.com
    English
    course language
  • Self-paced
    course format
  • Live classes
    delivered online

Who is this course for?

Python Developers

Anyone with basic Python knowledge interested in mitigating issues like hallucinations, prompt injections, and toxic outputs.

AI Enthusiasts

Individuals interested in learning about the safety and quality concerns in LLM applications.

Data Scientists

Professionals looking to enhance their skills in monitoring and securing LLM applications.

This course will help you monitor and enhance security measures for your LLM applications. Learn to detect and prevent critical security threats and explore real-world scenarios to prepare for potential risks. Ideal for Python developers, AI enthusiasts, and data scientists.

Pre-Requisites

1 / 1

  • Basic Python knowledge

What will you learn?

Introduction to LLM Safety and Quality
Overview of the importance of safety and quality in LLM applications.
Identifying Hallucinations
Methods like SelfCheckGPT to identify hallucinations in LLM responses.
Detecting Jailbreaks
Using sentiment analysis and implicit toxicity detection models to detect jailbreaks.
Identifying Data Leakage
Techniques like entity recognition and vector similarity analysis to identify data leakage.
Building a Monitoring System
How to build your own monitoring system to evaluate app safety and security over time.

Meet your instructor

  • Bernease Herman

    Data Scientist, WhyLabs

    Bernease Herman is a data scientist at WhyLabs. He is interested in the discovery and application of machine intelligence techniques in new areas, as well as the commercialization of advanced research.

Upcoming cohorts

  • Dates

    start now

Free