
Maven

Building LLM Applications

  • up to 7 weeks
  • Advanced
  • Cohort-based

Gain a thorough understanding of Large Language Models and learn how to build your own applications with them. This course will help you construct and deploy robust, effective LLM-powered applications in real-world settings.

  • NLP
  • Transformers
  • Machine Learning System Design
  • Semantic Search
  • Serverless Inference

Overview

This course is designed to introduce you to Large Language Models in depth, covering the Transformer architecture and how to use encoder and decoder models. You will learn to build retrieval systems and use open-source LLMs to create RAG-based architectures. By the end of the course, you will have a comprehensive understanding of the end-to-end machine learning pipeline, enabling you to tackle practical machine learning problems and deliver results in production.

  • Online (course location)
  • English (course language)
  • Professional Certification upon course completion
  • Full-time course format
  • Live classes delivered online

Who is this course for?

AI Enthusiasts

You are intrigued by LLMs and would like to build applications powered by them.

AI Developers

You are ready to deploy your own state-of-the-art (SOTA) AI models and would like to see how they work.

Data Scientists

You want to go beyond Jupyter notebooks and develop batch or real-time prediction services.

Why should you take this course?


Hamza Farooq

AI Specialist & Adjunct Professor

This course offers a comprehensive understanding of Large Language Models, enabling you to build and deploy robust models in real-world settings. Ideal for AI enthusiasts, developers, and data scientists, it covers essential topics like NLP, Transformers, and Semantic Search, helping you advance your career in AI.

Pre-Requisites


  • Knowledge of Python

  • Basic machine learning background

  • Familiarity with tools like VS Code, UNIX terminal, Jupyter Notebooks, and Conda package management

What will you learn?

Module 1: Introduction to NLP
Introduction to NLP: covers what NLP is, its history, applications, and challenges. NLP Techniques: covers common techniques such as tokenization, part-of-speech tagging, named entity recognition, and sentiment analysis, with examples of their use. NLP Tools: introduces popular NLP tools such as NLTK and spaCy, with examples of how to use them for basic NLP tasks.
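To give a flavor of the techniques this module introduces, here is a minimal sketch of tokenization and word-frequency counting in plain Python. It stands in for what libraries like NLTK or spaCy do with far more sophistication; the regex and example sentence are illustrative only.

```python
import re
from collections import Counter

def tokenize(text: str) -> list[str]:
    # Lowercase and extract word-like spans -- a crude stand-in for the
    # tokenizers provided by NLTK or spaCy.
    return re.findall(r"[a-z']+", text.lower())

doc = "NLP covers tokenization, tagging, and sentiment analysis. NLP is everywhere."
tokens = tokenize(doc)
freq = Counter(tokens)
print(freq.most_common(2))  # 'nlp' is the most frequent token
```

Real tokenizers additionally handle punctuation, contractions, and subword units, which is why the course leans on established libraries rather than regexes.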
Module 2: Foundational Knowledge of Transformers & LLM System Design
Transformers Foundational Knowledge: covers fundamental concepts of transformers, including self-attention, multi-head attention, and positional encoding. Introduction to Fundamental Concepts of ML System Design: covers the basics of designing machine learning systems, including data collection and preprocessing, model selection and training, and performance evaluation.
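The self-attention operation at the heart of this module can be sketched in NumPy. This is a simplified single-head version with a random token matrix; a real transformer uses learned Q/K/V projections and multiple heads.

```python
import numpy as np

def scaled_dot_product_attention(Q, K, V):
    """Single-head self-attention: softmax(Q K^T / sqrt(d_k)) V."""
    d_k = Q.shape[-1]
    scores = Q @ K.T / np.sqrt(d_k)  # pairwise token similarities
    # Numerically stable row-wise softmax over the scores.
    weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
    weights /= weights.sum(axis=-1, keepdims=True)
    return weights @ V, weights

rng = np.random.default_rng(0)
X = rng.normal(size=(4, 8))  # 4 tokens, embedding dimension 8
# In a real transformer, Q, K, V come from learned linear projections
# of X; here we feed X directly for brevity.
out, attn = scaled_dot_product_attention(X, X, X)
```

Each row of `attn` is a probability distribution over the input tokens, which is the sense in which every token "attends" to every other token.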
Module 3: Semantic Search
In this module, we will learn about retrieval systems and their significance in information retrieval. Discover popular methods such as sparse vs. dense vectors, Euclidean distance, cosine similarity, and approximate nearest neighbors (ANN), with practical coding using FAISS to achieve fast and precise search results.
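The exact-search baseline that ANN libraries improve upon can be written in a few lines of NumPy. The vectors below are toy data; FAISS replaces this brute-force loop with optimized (and optionally approximate) index structures.

```python
import numpy as np

def cosine_similarity(a, b):
    # Cosine similarity compares direction, not magnitude.
    return a @ b / (np.linalg.norm(a) * np.linalg.norm(b))

def nearest(query, corpus):
    # Exact (brute-force) nearest neighbor by cosine similarity.
    # ANN indexes (e.g. in FAISS) trade a little accuracy for speed.
    sims = [cosine_similarity(query, v) for v in corpus]
    return int(np.argmax(sims))

corpus = np.array([[1.0, 0.0], [0.0, 1.0], [0.7, 0.7]])
query = np.array([0.9, 0.1])
print(nearest(query, corpus))  # -> 0 (closest in direction to [1, 0])
```

Euclidean distance would work here too; for normalized vectors the two rankings coincide, which is why dense retrieval systems often normalize embeddings first.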
Module 4: Creating a search engine from scratch
Building a Semantic Search Model: covers the basics of semantic search models, their architecture, and how they work. The session will focus on building a semantic search model for hotels using various natural language processing techniques. Deployment on Serverless Inference: discusses the benefits of serverless computing and how to deploy the semantic search model on a serverless platform like Hugging Face. Preprocessing of Hotel Data: covers preprocessing the hotel data using techniques like text cleaning, tokenization, stemming, and lemmatization, and converting the hotel data into embeddings the semantic search model can use. Evaluation of the Model: discusses the evaluation metrics used for semantic search models and how to measure model performance with them; this session also covers techniques for improving the model's performance and optimizing search speed, and closes with a discussion of query intent models.
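The clean → embed → rank pipeline this module describes can be sketched with a toy bag-of-words embedding. The hotel descriptions and helper names are illustrative; a production system would use learned sentence embeddings from a transformer encoder instead of word counts.

```python
import re
import numpy as np

def clean(text: str) -> list[str]:
    # Minimal text cleaning: lowercase, strip punctuation, tokenize.
    return re.findall(r"[a-z]+", text.lower())

def embed(tokens, vocab):
    # Bag-of-words count vector -- a stand-in for learned embeddings.
    vec = np.zeros(len(vocab))
    for t in tokens:
        if t in vocab:
            vec[vocab[t]] += 1
    return vec

hotels = [
    "Beachfront resort with pool and spa",
    "Budget hotel near the airport",
    "Mountain lodge with hiking trails",
]
tokenized = [clean(h) for h in hotels]
vocab = {w: i for i, w in enumerate(sorted({w for t in tokenized for w in t}))}
matrix = np.array([embed(t, vocab) for t in tokenized])

query = embed(clean("resort with a pool"), vocab)
scores = matrix @ query  # dot-product ranking of hotels against the query
print(hotels[int(np.argmax(scores))])  # the beachfront resort ranks first
```

Swapping the `embed` function for a dense encoder is what turns this keyword matcher into a semantic search model: "lodging with a swimming pool" would then also match, despite sharing no words with the listing.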
Module 5: The Generation Part of LLMs
In this module, we'll explore the fundamentals of RAG (Retrieval-Augmented Generation) and its real-world applications, and dive into LangChain's concepts of chunking and agents, seamlessly connecting retrieval to generative AI.
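Chunking, the first step in most RAG pipelines, can be sketched as a character-window splitter with overlap. The `chunk_text` helper is hypothetical, a simplified analogue of the text splitters LangChain provides.

```python
def chunk_text(text: str, chunk_size: int = 200, overlap: int = 50) -> list[str]:
    """Split text into overlapping character windows before indexing,
    so that context straddling a chunk boundary is not lost."""
    if overlap >= chunk_size:
        raise ValueError("overlap must be smaller than chunk_size")
    chunks = []
    start = 0
    while start < len(text):
        chunks.append(text[start:start + chunk_size])
        start += chunk_size - overlap
    return chunks

doc = "word " * 200  # stand-in for a long document
pieces = chunk_text(doc, chunk_size=100, overlap=20)
```

Each chunk is then embedded and indexed; at query time the top-scoring chunks are retrieved and handed to the generator as context. Production splitters usually respect sentence or token boundaries rather than raw character offsets.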
Module 6: Prompt-tuning, fine-tuning, and local LLMs
In this module, we will learn how to effectively engineer prompts, fine-tune language models, leverage the PEFT (parameter-efficient fine-tuning) approach, and measure the success of your efforts using appropriate validation metrics.
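Prompt engineering usually starts from a template. The `build_prompt` helper below is hypothetical, showing one common way retrieved context and few-shot examples might be assembled into a grounded prompt; the wording is illustrative, not prescriptive.

```python
def build_prompt(question: str, context_chunks: list[str], examples=None) -> str:
    """Assemble a grounded prompt: retrieved context, optional few-shot
    Q/A examples, then the user question."""
    parts = ["Answer using only the context below.", ""]
    for i, chunk in enumerate(context_chunks, 1):
        parts.append(f"[Context {i}] {chunk}")
    if examples:
        parts.append("")
        for q, a in examples:
            parts.append(f"Q: {q}\nA: {a}")
    parts.append(f"\nQ: {question}\nA:")
    return "\n".join(parts)

prompt = build_prompt(
    "What amenities does the resort offer?",
    ["The beachfront resort has a pool and a spa."],
    examples=[("Is breakfast included?", "Yes, for all rooms.")],
)
```

Validation then means checking the model's completions against held-out reference answers, using metrics appropriate to the task, rather than eyeballing single outputs.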

What learners say about this course

  • Highly recommend this course if you want to get started with LLMs without wandering much around. This course helps to discern what can be done with LLMs and what not and more importantly how it can be done.

    Rushit

    Analytics Lead, Solutions Architect

  • The course provides a comprehensive introduction to the fundamentals of working with large language models (LLMs). It covers essential topics, including text embeddings, various similarity metrics, traditional techniques, and an understanding of transformer architecture. The course also guides you through using pretrained models, creating Retrieval-Augmented Generation (RAG) systems, and fine-tuning these models. The instructor is approachable and supportive, and the course includes significant team-based activities, allowing you to collaborate with students from diverse backgrounds. This makes the learning experience both practical and engaging. Overall, it is one of the top courses available for anyone looking to enter the field of LLMs. Highly recommended for those interested in gaining a solid foundation in this area.

    Hamza

    Data Scientist

  • I thoroughly enjoyed this course! Hamza's teaching in the foundation course really helped clear up any confusion, providing us with the understanding we need to confidently tackle various tasks and apply different technologies. It was great to learn about deployment and how we can take our apps into production. Overall, it was a fantastic experience!

    Sahar

    Data Scientist NLP/ML/AI

Meet your instructor

  • Hamza Farooq

    Founder, traversaal.ai

    Hamza Farooq is a founder by day and a professor by night. His work focuses on NLP and multi-modal systems. He created traversaal.ai to provide scalable LLM products for enterprises that can seamlessly integrate with existing ecosystems while being customizable and cost-efficient.

Upcoming cohorts

  • Nov 3 — Dec 9, 2025
  • Nov 17 — Dec 23, 2025

$1,900