The course will cover Large Language Models (LLMs) and prompt engineering. It will start with a technical introduction to LLMs, highlighting their goal of building general-purpose NLP models and grounding their key strengths and weaknesses in how they are trained: their data, their transformer-based language modeling architecture, and their methods for tuning models to follow human instructions. It will also compare existing deployed LLMs such as GPT-3.5, GPT-4, and Bard. The course will then dive into prompt engineering: methods for crafting input instructions that either improve LLM performance on a desired task or enable more intuitive human interaction with LLMs.
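
For illustration only, here is a minimal sketch of what prompt engineering can look like in practice: two common ways of phrasing the same sentiment-classification task, zero-shot and few-shot. The task wording, example reviews, and function names are invented for this sketch and are not drawn from the course materials.

```python
# Minimal sketch of two common prompting styles for the same task.
# All wording and examples here are illustrative placeholders.

def zero_shot_prompt(review: str) -> str:
    # Directly instruct the model, with no worked examples.
    return (
        "Classify the sentiment of this review as positive or negative.\n"
        f"Review: {review}\n"
        "Sentiment:"
    )

def few_shot_prompt(review: str) -> str:
    # Prepend a couple of worked examples so the model can infer the format.
    examples = (
        "Review: The plot was dull and predictable.\nSentiment: negative\n\n"
        "Review: A delightful surprise from start to finish.\nSentiment: positive\n\n"
    )
    return examples + f"Review: {review}\nSentiment:"

if __name__ == "__main__":
    review = "The acting was superb, but the pacing dragged."
    print(zero_shot_prompt(review))
    print()
    print(few_shot_prompt(review))
```

Either string would then be sent to an LLM as its input; the course examines how such phrasing choices affect the model's output.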