
Module 1: Foundations of Prompt Engineering

Module Overview

Welcome to Module 1 of the Prompt Engineering curriculum! This module establishes the groundwork for our comprehensive journey. You’ll grasp not only what prompt engineering entails but also its significance—how the phrasing, structure, and precision of prompts influence the quality and applicability of AI outputs in fields like research, business, and creativity. We’ll also lay the foundation for understanding how model training (explored in Module 2) shapes effective prompting.

Our objectives are twofold: first, to define prompt engineering and trace its evolution as a critical discipline in AI; second, to build, analyze, and refine prompts by examining their structure, essential frameworks, and the mechanics of large language models (LLMs)[2]. This module is designed for self-paced learning, allowing you to progress at your own speed.

Learning Objectives

By the end of this module, you should be able to:

- Define prompt engineering and explain why it matters for the quality of AI outputs
- Identify the key components that make up a well-structured prompt
- Apply the RACE framework to construct and refine prompts
- Describe, at a high level, how LLMs process prompts, including tokenization, temperature, and top-p
- Recognize common prompt patterns and choose an appropriate one for a given task

What is Prompt Engineering?

Prompt engineering involves designing and optimizing input queries or instructions to elicit the most accurate, relevant, and valuable responses from AI models like LLMs[2]. Its importance has grown with LLMs’ integration into diverse applications, from customer service to scientific research and code generation. By carefully selecting words, providing context, and setting guidelines, users can minimize ambiguity, boost reliability, and foster better human-AI synergy, leveraging insights from model training (see Module 2)[2].


Anatomy of a Prompt

Prompts can be dissected into key components[1]:

- Instruction: the task the model is asked to perform
- Context: background information that frames the task
- Input data: the specific text or question the model should act on
- Output indicator: the desired format, length, or style of the response
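
To make the anatomy concrete, here is a minimal sketch in Python that assembles these components into a single prompt string. The helper name, component labels, and example text are illustrative assumptions, not part of the module or any particular API.

```python
# Minimal sketch: assemble a prompt from labeled components.
# Component names and example text are illustrative, not prescriptive.

def build_prompt(instruction, context="", input_data="", output_format=""):
    """Join non-empty prompt components into one string."""
    parts = [
        ("Instruction", instruction),
        ("Context", context),
        ("Input", input_data),
        ("Output format", output_format),
    ]
    return "\n\n".join(f"{label}: {text}" for label, text in parts if text)

prompt = build_prompt(
    instruction="Summarize the passage in two sentences.",
    context="The summary is for a non-technical newsletter audience.",
    input_data="Large language models predict text one token at a time...",
    output_format="Plain prose, no bullet points.",
)
print(prompt)
```

Keeping components explicit like this makes it easy to spot which part of a prompt is missing when a model's response goes off target.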


The RACE Framework

The RACE framework has become widely adopted in recent prompt engineering practices[2]:

- Role: the persona or expertise the model should adopt
- Action: the specific task the model is asked to carry out
- Context: the background details the model needs to perform the task well
- Expectation: the desired form and qualities of the output

This framework builds on earlier ones like CLEAR by emphasizing role-playing for enhanced specificity, helping users fill prompt gaps systematically and improve LLM performance across tasks, especially when aligned with instruction tuning (Module 2)[2].
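
Below is a minimal sketch of a RACE-style prompt builder, assuming a simple dataclass representation; the class name, field layout, and example values are hypothetical, shown only to illustrate how the four slots combine.

```python
# Hypothetical helper that fills the four RACE slots into one prompt string.
from dataclasses import dataclass

@dataclass
class RacePrompt:
    role: str         # who the model should act as
    action: str       # the task to perform
    context: str      # background the model needs
    expectation: str  # what a good answer looks like

    def render(self) -> str:
        return (
            f"You are {self.role}.\n"
            f"Task: {self.action}\n"
            f"Context: {self.context}\n"
            f"Expected output: {self.expectation}"
        )

prompt = RacePrompt(
    role="a senior data analyst",
    action="explain this quarter's sales anomaly to executives",
    context="sales dipped 12% in March while web traffic held steady",
    expectation="three plain-language hypotheses, each with a way to test it",
)
print(prompt.render())
```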


How Large Language Models Process Prompts

LLMs tokenize inputs, breaking text into subword units, then use deep neural architectures (typically transformer-based) to predict the next token repeatedly until a response is complete, a process shaped by their training (see Module 2)[5]. Sampling parameters influence variability: temperature controls randomness (low values favor predictable outputs, high values favor creative ones), while top-p (nucleus sampling) restricts generation to the smallest set of most-probable tokens. Understanding these parameters lets you match a model's behavior to the task at hand, balancing determinism and creativity[5].
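
The following sketch shows how temperature and top-p reshape a next-token distribution during sampling. The toy vocabulary and logits are invented for illustration; real models work the same way but over vocabularies of tens of thousands of tokens.

```python
# Sketch of how temperature and top-p reshape a next-token distribution.
# Toy logits and vocabulary are invented for illustration.
import numpy as np

def sample_next_token(logits, temperature=1.0, top_p=1.0, rng=None):
    if rng is None:
        rng = np.random.default_rng()
    # Temperature scales logits before softmax: <1 sharpens, >1 flattens.
    probs = np.exp(logits / temperature)
    probs /= probs.sum()
    # Nucleus (top-p) sampling: keep the smallest set of tokens whose
    # cumulative probability reaches top_p, then renormalize.
    order = np.argsort(probs)[::-1]
    cumulative = np.cumsum(probs[order])
    keep = order[: np.searchsorted(cumulative, top_p) + 1]
    kept_probs = probs[keep] / probs[keep].sum()
    return rng.choice(keep, p=kept_probs)

vocab = ["the", "a", "cat", "dog", "quantum"]
logits = np.array([2.0, 1.5, 1.0, 0.5, -1.0])
# Low temperature + small top-p -> predictable; high values -> varied.
token_id = sample_next_token(logits, temperature=0.7, top_p=0.9)
print(vocab[token_id])
```

Try running this with temperature=0.2 versus temperature=1.5 to see the shift from near-deterministic to highly varied output.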


Common Prompt Patterns

Established patterns in prompt engineering include[1]:

- Zero-shot prompting: stating the task directly, with no examples
- Few-shot prompting: including a handful of worked examples in the prompt
- Chain-of-thought prompting: asking the model to reason step by step before answering
- Role prompting: assigning the model a persona suited to the task

Selecting appropriate patterns enhances accuracy, relevance, or creativity, forming a core skill in modern prompt engineering, especially when leveraging tuned models (Module 2)[1].
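
As a concrete illustration, here is a minimal sketch combining two of these patterns: few-shot examples plus a chain-of-thought cue. The task, example reviews, and labels are invented for demonstration.

```python
# Illustrative few-shot prompt with an optional chain-of-thought cue;
# the examples and task text are invented for demonstration.

examples = [
    ("I loved the film, but the ending dragged.", "mixed"),
    ("Absolutely worth every minute.", "positive"),
    ("A complete waste of an evening.", "negative"),
]

def few_shot_prompt(task, new_input, examples, think_step_by_step=False):
    lines = [task, ""]
    for text, label in examples:          # in-context demonstrations
        lines += [f"Review: {text}", f"Sentiment: {label}", ""]
    lines.append(f"Review: {new_input}")
    if think_step_by_step:                # chain-of-thought pattern cue
        lines.append("Think step by step, then give the sentiment.")
    lines.append("Sentiment:")
    return "\n".join(lines)

print(few_shot_prompt(
    "Classify the sentiment of each review as positive, negative, or mixed.",
    "Great cast, shame about the script.",
    examples,
    think_step_by_step=True,
))
```

Notice that the few-shot examples also fix the output vocabulary (positive, negative, mixed), which constrains the model more reliably than instructions alone.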


Suggested Reading & Viewing

  1. Vatsal, S., & Dubey, H. (2024). A survey of prompt engineering methods in large language models for different NLP tasks. arXiv preprint.
  2. Chen, B., Zhang, Z., Langrené, N., & Zhu, S. (2025). Unleashing the potential of prompt engineering for large language models. Patterns, 6(6), 101260.
  3. Google’s 9 Hour AI Prompt Engineering Course In 20 Minutes (YouTube, Tina Huang).
  4. The ULTIMATE 2025 Guide to Prompt Engineering (YouTube, Dave Ying Tutorials).
  5. Microsoft Learn: Understanding Prompt Engineering Fundamentals.

Suggested Activities (Optional)