LLMs and Prompt Engineering

What You'll Learn

The course covers Large Language Models (LLMs) and prompt engineering. It starts with a technical introduction to LLMs, highlighting their goal of building general-purpose NLP models and grounding their key strengths and weaknesses in core training techniques (their data, their Transformer-based language-modeling architecture, and their methods for tuning models to follow human instructions). It also compares existing deployed LLMs, e.g., GPT-3.5, GPT-4, and Bard. The course then dives into prompt engineering: methods for crafting input instructions that either enhance LLM performance on desired tasks or enable more intuitive human interaction with LLMs.
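To make the idea of prompt engineering concrete, here is a minimal sketch contrasting a zero-shot prompt with a few-shot prompt for a sentiment-classification task. The task, helper names (`zero_shot_prompt`, `few_shot_prompt`), and example reviews are all hypothetical illustrations, not course materials; the point is only that a few-shot prompt prepends in-context examples to the same instruction, which often improves LLM performance.

```python
# Hypothetical illustration: building zero-shot vs. few-shot prompts as strings.
# No model call is made here; a real application would send the prompt to an LLM.

def zero_shot_prompt(review: str) -> str:
    """Bare instruction with no examples (zero-shot)."""
    return (
        "Classify the sentiment of this review as Positive or Negative.\n"
        f"Review: {review}\nSentiment:"
    )

def few_shot_prompt(review: str) -> str:
    """Same instruction, preceded by in-context examples (few-shot)."""
    examples = [  # made-up demonstrations for illustration
        ("The plot was gripping from start to finish.", "Positive"),
        ("I walked out halfway through.", "Negative"),
    ]
    demos = "\n".join(f"Review: {r}\nSentiment: {s}" for r, s in examples)
    return (
        "Classify the sentiment of each review as Positive or Negative.\n"
        f"{demos}\nReview: {review}\nSentiment:"
    )

print(few_shot_prompt("An instant classic."))
```

Both prompts end with an open-ended "Sentiment:" cue, so the model's continuation serves as the classification; the few-shot version simply shows the model the desired input-output format first.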


Course Topics

  • Overview of NLP, Deep Learning, and the intuition behind LLMs: language modeling, (contextualized) embeddings
  • Technical backbones of LLMs: Transformer architecture, training objectives, pre-training, instruction tuning, data curation
  • Compare and contrast existing off-the-shelf LLMs: the best uses of BERT, T5, and GPT
  • Case studies on real-world applications: GitHub Copilot, ChatGPT, and the intelligent tutor built by Khan Academy
  • Adapting LLMs to specific tasks: prompt engineering, prompt tuning, (parameter-efficient) fine-tuning
  • Ethics, policy, and impacts: copyright, and using LLMs to simulate and augment human labor