Prompt Engineering FREE Curated Resources
BASIC VIDEO TUTORIALS
- Mastering Prompts: The Key to Getting What You Need from ChatGPT. A short introductory video on prompt engineering with ChatGPT.
- ChatGPT for Everyone from learnprompting.org. This one-hour, self-paced course provides an introduction to ChatGPT, a chatbot app built by OpenAI and powered by their models that can process text, image, and audio inputs. You’ll learn how to set up an account, write your first prompt, and explore practical applications. Designed for beginners, this course requires no prior experience with AI. https://learnprompting.org/courses/chatgpt-for-everyone
MASTERING PROMPTS
- Google Cloud—Prompt Engineering: Overview and Guide. This Google Cloud guide introduces prompt engineering as the practice of crafting precise, context-rich prompts to guide AI models—especially large language models—toward accurate, relevant, and safe outputs. It explains how well-structured prompts improve model performance, reduce bias, and enhance control and user experience. By combining clear instructions, examples, and contextual framing, prompt engineering becomes essential for getting reliable results from generative AI. The article also points to tools and resources, including best practices and a free Vertex AI trial, encouraging hands-on experimentation to refine prompt design and better align AI behavior with user intent. https://cloud.google.com/discover/what-is-prompt-engineering?hl=en
- Gemini for Google Workspace—Prompting Guide 101 This “Gemini for Google Workspace Prompting Guide 101” is a quick-start handbook offering practical strategies for crafting effective prompts when using Gemini across Workspace apps like Gmail, Docs, Sheets, Slides, and Meet. It introduces a four-part prompt framework—Persona, Task, Context, Format—and provides six core tips: use natural language; be specific and iterative; stay concise; treat prompting as a dialogue; leverage your own documents; and let Gemini improve your prompts. Packed with role-based scenarios and examples, the guide teaches users how to boost productivity and creativity while maintaining privacy and accuracy. https://services.google.com/fh/files/misc/gemini-for-google-workspace-prompting-guide-101.pdf
- Prompt Engineering by Lee Boonstra, Google. This Prompt Engineering guide dives deep into advanced techniques for guiding large AI models via API prompts. It covers zero‑, one‑, and few‑shot prompting; sampling controls like temperature, top‑K, and top‑P; structured prompting using system/context/role templates; and advanced reasoning patterns such as chain‑of‑thought, tree‑of‑thought, step‑back, self‑consistency, and ReAct. The guide also explores automating prompt refinement and code-oriented interactions (e.g., debugging, translation), plus practical best practices like clarity, specificity, dynamic variables, and iteration. Packed with real‑world examples and tool configurations, it’s a comprehensive reference for engineering reliable, high‑quality LLM outputs. https://drive.google.com/file/d/1AbaBYbEa_EbPelsT40-vj64L-2IwUJHy/view
- GPT-4.1 Prompting Guide, OpenAI This article highlights effective prompting strategies for GPT-4.1, which improves on GPT-4o in coding, instruction following, and long-context understanding. Drawing from internal testing, it shares best practices like writing clear, specific prompts and including contextual examples. GPT-4.1 is more literal and instruction-sensitive than earlier models, so prompts may need adjustments to align behavior. Its high steerability means a single, direct sentence can often redirect output. The guide includes practical examples and stresses that prompt engineering is an empirical, iterative process. Success depends on experimentation, evaluation, and refinement to ensure outputs meet expectations in real-world applications. https://cookbook.openai.com/examples/gpt4-1_prompting_guide
- Anthropic Developer Platform—Prompt Engineering: This Anthropic guide offers a structured overview of prompt engineering for building with Claude. It begins by emphasizing the importance of defining success criteria and establishing empirical tests before refining prompts. The guide then outlines a progression of techniques—from using the prompt generator and templates to being clear, providing examples, enabling chain-of-thought reasoning, applying XML tags, assigning roles, pre‑filling responses, chaining complex prompts, and managing long contexts. It also highlights broader strategies, such as evaluating prompts and improving guardrails to reduce hallucinations, maintain consistency, and prevent prompt leaks. Overall, it frames prompt engineering as a powerful, efficient, and flexible alternative to fine-tuning. https://docs.anthropic.com/en/docs/build-with-claude/prompt-engineering/overview
OPTIMIZING PROMPTS
- Anthropic prompt generator: This Anthropic guide details the Prompt Generator tool for Claude, designed to tackle the “blank page” problem by automatically creating high-quality prompt templates tailored to users’ tasks. Compatible across Claude models—including those with extended reasoning—it uses best practices like role-setting, chain-of-thought reasoning, XML-delimited variables, and prefilled examples to build precise templates. Users can access it via the Anthropic Console or a Colab notebook (API key needed), generating editable templates as a launchpad for developing and iterating prompts. It’s particularly effective for rapid prototyping, guiding prompt engineers—from newcomers to experts—toward best-in-class, production-ready prompt structures. https://docs.anthropic.com/en/docs/build-with-claude/prompt-engineering/prompt-generator
PROMPT LIBRARY
- Anthropic prompt library: This Anthropic “Prompt Library” offers a curated collection of expertly crafted prompts for both personal and business use cases. Each entry includes a ready-to-use prompt template—such as “Corporate Clairvoyant” (summarize long reports into memos), “Motivational Muse” (generate personalized affirmations), or “Lesson Planner” (create structured lesson plans)—and a sample input and output to guide users. Designed for easy import into the Anthropic Console, these templates cover tasks like data analysis, creative writing, code help, mindfulness guidance, and more. Ideal for reducing experimentation time, the library serves as a springboard for prompt engineering—supporting rapid prototyping and consistent, high-quality interactions with Claude. https://docs.anthropic.com/en/resources/prompt-library/library
- ShumerPrompt Library: ShumerPrompt is a community-driven AI prompt library and generator that helps users discover, share, and improve prompts for models like ChatGPT, GPT‑4, and o3 Pro. With a searchable catalog of categorized, high-rated prompts—ranging from content creation to research and strategy—it enables rapid access to proven templates. Users can also generate new prompts automatically and refine them using the platform’s built‑in prompt improver. Featuring a clean interface, collaboration tools, a leaderboard, and a playground for testing prompts in real time, ShumerPrompt is ideal for developers, creators, marketers, and anyone aiming to streamline AI workflows. https://shumerprompt.com/
Prompt Tips
Prompt Framework (or Structure)
Tip 1 – Framework—R-T-F
R–T–F (Role – Task – Format)
Prompting Structure:
- Act as a (ROLE)
- Create a (TASK)
- Show as a (FORMAT)
✅ QA Applications:
1. ROLE (Act as a QA/testing professional)
Choose a role grounded in the testing lifecycle.
Examples:
- Test Automation Engineer
- QA Lead
- Test Data Specialist
- Performance Tester
- AI QA Staff
- Bug Triage Facilitator
2. TASK (Create a testing artifact)
Focus on what you want to generate or analyze.
Examples:
- Requirement refinement
- Test Cases
- Test Plan Summary
- Bug Report
- Test Data Matrix
- Regression Test Checklist
- Risk-Based Prioritization List
3. FORMAT (Show as)
Define how results should be presented.
Examples:
- Table (e.g., test cases)
- Gherkin Syntax
- JSON Schema
- Step-by-Step Procedure
- Bullet List (e.g., acceptance criteria)
- Timeline (e.g., sprint testing phases)
Add Context (Input Details)
Feed in real artifacts:
- User stories or requirements
- Screenshots or UI descriptions
- API specs or endpoints
- Past defect trends or test coverage gaps
Set Guidelines (Constraints & Style)
Clarify what’s in/out of scope:
- Exclude exploratory tests
- Only include high-severity test cases
- Output in 10 lines or fewer
- Use concise titles only
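When R-T-F prompts are built inside test tooling rather than typed by hand, the structure above can be assembled programmatically. A minimal sketch in Python; the function and its fields are illustrative, not part of any specific library:

```python
# Minimal sketch: assemble an R-T-F (Role, Task, Format) prompt,
# plus optional context and guidelines. All names are illustrative.

def build_rtf_prompt(role, task, fmt, context="", guidelines=()):
    """Return a single prompt string following the R-T-F structure."""
    lines = [
        f"Act as a {role}.",
        f"Create a {task}.",
        f"Show the result as {fmt}.",
    ]
    if context:
        lines.append(f"Context: {context}")
    for rule in guidelines:
        lines.append(f"Guideline: {rule}")
    return "\n".join(lines)

prompt = build_rtf_prompt(
    role="Test Automation Engineer",
    task="set of Selenium-compatible test cases for the login form",
    fmt="a table with columns Test Case ID, Description, Steps, Expected Result",
    context="The form locks after 5 failed attempts.",
    guidelines=["Skip performance and security tests.", "Use concise phrasing."],
)
print(prompt)
```

Keeping the role, task, and format as separate parameters makes it easy to iterate on one part of the prompt while holding the rest fixed.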
🧪 Examples
🔹 Example 1: UI Functional Testing
Act as a Test Automation Engineer and create a set of Selenium-compatible test cases based on [this login form’s] behavior.
Show the results as a table with columns: Test Case ID, Description, Steps, Expected Result.
The login form accepts email/password, shows errors for invalid input, and locks after 5 failed attempts.
Use concise phrasing. Skip any performance or security tests.
🔹 Example 2: AI-Generated Test Data
Act as a Test Data Specialist and generate a boundary value test data set for a date-of-birth input field (valid range: Jan 1, 1900 to Dec 31, 2020).
Present as a JSON array of inputs, including valid/invalid edge cases.
Include context labels ("valid", "too early", "too late") for each item. Keep the set under 10 items.
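A prompt like the one above should yield a small labeled JSON array. For comparison, the same boundary-value set can be generated deterministically in Python; the dates follow the example’s stated range, and the item shape is one plausible structure, not a required schema:

```python
import json
from datetime import date, timedelta

# Boundary-value test data for a date-of-birth field with a valid
# range of 1900-01-01 through 2020-12-31, labeled as in the prompt.
LOW, HIGH = date(1900, 1, 1), date(2020, 12, 31)

cases = [
    {"dob": (LOW - timedelta(days=1)).isoformat(), "label": "too early"},
    {"dob": LOW.isoformat(), "label": "valid"},            # lower boundary
    {"dob": (LOW + timedelta(days=1)).isoformat(), "label": "valid"},
    {"dob": date(1985, 6, 15).isoformat(), "label": "valid"},  # nominal value
    {"dob": (HIGH - timedelta(days=1)).isoformat(), "label": "valid"},
    {"dob": HIGH.isoformat(), "label": "valid"},           # upper boundary
    {"dob": (HIGH + timedelta(days=1)).isoformat(), "label": "too late"},
]

print(json.dumps(cases, indent=2))
```

Comparing an LLM’s output against a deterministic set like this is a quick way to spot boundary cases the model missed.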
🔹 Example 3: Agile Test Planning
Act as a QA Lead and create a test coverage summary for this user story:
“As a user, I want to reset my password using email verification so I can regain access if I forget it.”
Skip non-functional and localization aspects. Use plain language.
Tip 2 – Framework—T-A-G
T–A–G (Task – Action – Goal)
Prompting Structure:
- Define the (TASK)
- State the (ACTION)
- Clarify the (GOAL)
✅ QA Example:
- TASK: Evaluate smoke test coverage for a new build
- ACTION: Act as a QA Lead and analyze the smoke suite for gaps
- GOAL: Ensure core functionality is verified before any full regression run
📥 Prompt:
The task is to evaluate smoke test coverage for the new build.
Act as a QA Lead and identify any missing core test scenarios.
Goal is to ensure critical functionality is always verified before running full regression.
Tip 3 – Framework—B-A-B
B–A–B (Before – After – Bridge)
Prompting Structure:
- Explain problem (BEFORE)
- State outcome (AFTER)
- Ask to (BRIDGE)
✅ QA Example:
- BEFORE: We’ve been releasing builds with untested edge cases
- AFTER: We want confidence that all key boundary and edge conditions are covered before release
- BRIDGE: Generate a checklist of edge case test scenarios based on current feature specs
📥 Prompt:
We’ve been missing edge case bugs in production.
We want to ensure all boundary and edge conditions are tested in QA before release.
Generate a checklist of edge case test scenarios based on this feature spec.
Tip 4 – Framework—C-A-R-E
C–A–R–E (Context – Action – Result – Example)
Prompting Structure:
- Give the (CONTEXT)
- Describe the (ACTION)
- Clarify the (RESULT)
- Give an (EXAMPLE)
✅ QA Example:
- CONTEXT: We’re onboarding a new test team to support our legacy insurance web app
- ACTION: Create a test strategy outline that balances exploratory testing and automation
- RESULT: Reduce onboarding time and boost defect detection rate in the first sprint
- EXAMPLE: A good example is Atlassian’s hybrid QA onboarding playbook
📥 Prompt:
We’re onboarding a new QA team for a legacy insurance app.
Create a test strategy outline that mixes exploratory testing and targeted automation.
The result should be faster onboarding and improved defect detection in Sprint 1.
Use Atlassian’s QA onboarding playbook as a reference.
Tip 5 – Framework—R-I-S-E
R–I–S–E (Role – Input – Steps – Expectation)
Prompting Structure:
- Specify the (ROLE)
- Describe the (INPUT)
- Ask for the (STEPS)
- Describe the (EXPECTATION)
✅ QA Example:
- ROLE: You are an AI QA Analyst
- INPUT: You’re given a fine-tuned LLM used to generate UI test cases
- STEPS: Outline the validation process to ensure test case quality and grounding
- EXPECTATION: Ensure at least 95% of generated tests meet acceptance criteria before integration
📥 Prompt:
You are an AI QA Analyst.
You’re validating a fine-tuned LLM that generates UI test cases.
Provide a step-by-step process to evaluate prompt effectiveness and test case quality.
Expectation is that 95%+ of outputs meet defined test acceptance criteria before deployment.
Tip 6 – Framework—E-R-A
E.R.A
- E – Expectation: Describe the desired result
- R – Role: Specify System’s role
- A – Action: Specify needed actions
🧪 QA Example:
- Expectation: A prioritized list of tests for a checkout flow
- Role: You are a QA Strategist
- Action: Analyze user stories and identify the top test cases by business risk and usage frequency
📥 Prompt:
I need a priority-based test list for our checkout flow.
You are a QA Strategist.
Review the feature description and suggest the top test cases based on usage and risk.
Tip 7 – Framework—R-A-C-E
R.A.C.E
- R – Role: Specify System’s role
- A – Action: Detail the necessary action
- C – Context: Provide situational details
- E – Expectation: Describe the expected outcome
🧪 QA Example:
- Role: Test Data Generator
- Action: Generate boundary value inputs for a credit card expiry date field
- Context: Acceptable date range is current month through next 5 years
- Expectation: 6–8 labeled test inputs (valid, invalid, expired)
📥 Prompt:
You are a Test Data Generator.
Create boundary value test inputs for a credit card expiration field.
Date range is from current month to 5 years in the future.
Return test inputs labeled as valid/invalid/expired.
Tip 8 – Framework—C-O-A-S-T
C.O.A.S.T
- C – Context: Set the stage
- O – Objective: Describe the goal
- A – Actions: Explain needed steps
- S – Scenario: Describe the situation
- T – Task: Outline the task
🧪 QA Example:
- Context: A new payment gateway is being integrated
- Objective: Ensure no critical bugs in the payment flow
- Actions: Write and execute API and UI test cases
- Scenario: Cover success/failure, edge cases (timeout, currency mismatch)
- Task: Design a comprehensive test suite
📥 Prompt:
We’re integrating a new payment gateway.
Goal is to ensure payment flow works with no critical bugs.
Write test cases to cover API and UI behavior including edge conditions.
Task: Design the test suite needed for go-live confidence.
Tip 9 – Framework—T-R-A-C-E
T.R.A.C.E
- T – Task: Define the task
- R – Role: Describe the System’s role
- A – Action: State the required action
- C – Context: Provide the situation
- E – Expectation: Illustrate the expected outcome with an example
🧪 QA Example:
- Task: Generate performance test scenarios
- Role: You’re a Performance Test Engineer
- Action: Create load and stress test plans for a job board site
- Context: Peak usage is 50k users/hour during major hiring events
- Expectation: Example: spike test during 5-minute window of 10x traffic
📥 Prompt:
Task: Generate load/stress test cases for a job board platform.
You’re a Performance Test Engineer.
Simulate peak loads around hiring spikes (50k users/hour).
Show an example spike test where traffic increases 10x over 5 mins.
Tip 10 – Framework—R-O-S-E-S
R.O.S.E.S
- R – Role: Specify System’s role
- O – Objective: State the goal or aim
- S – Scenario: Describe the situation
- E – Expected Solution: Define the outcome
- S – Steps: Ask for the actions needed to reach the solution
🧪 QA Example:
- Role: You are an AI QA Coach
- Objective: Help testers improve test case quality
- Scenario: Many test cases today are vague or miss key validation steps
- Expected Solution: A practical guide or checklist to coach them
- Steps: Provide actions testers can take to write clearer, more effective test cases with AI support
📥 Prompt:
You are an AI QA Coach.
Goal is to improve how testers write test cases using the LLM.
Current issue: test cases lack clarity and miss validations.
Expected solution: A checklist or guide to improve them.
Steps: What specific actions should testers take to use the LLM more effectively in writing strong test cases?
Prompt Pattern
Tip 1 – Pattern—Flipped Interaction
To use this pattern, your prompt should make the following fundamental contextual statements:
- I would like you to ask me questions to achieve X
- You should ask questions until condition Y is met or you can effectively achieve this goal
- (Optional) Ask me the questions one at a time, two at a time, ask me the first question, etc.
You will need to replace “X” with an appropriate goal, such as “creating a meal plan” or “creating variations of my marketing materials.” You should specify when to stop asking questions with Y. Examples are “until you have sufficient information about my audience and goals” or “until you know what I like to eat and my caloric targets.”
Examples:
- I would like you to ask me questions to help me create variations of my test planning materials. You should ask questions until you have sufficient information about my current draft requirements, user stories and goals. Ask me the first question.
- I would like you to ask me questions to help me diagnose a problem with my Internet. Ask me questions until you have enough information to identify the two most likely causes. Ask me one question at a time. Ask me the first question.
✅ 1. QA Test Case Variant Generation (Based on First Prompt)
Original:
I would like you to ask me questions to help me create…
QA Adaptation or Variation:
I would like you to ask me questions to help me generate variations of test cases based on a user story or feature description.
Your goal is to collect enough information about the feature behavior, input constraints, acceptance criteria, and edge cases so you can produce well-grounded test case variations.
Ask one question at a time until you have enough detail to proceed.
Ask me the first question.
🧪 Use Case: Great for refining test inputs before test case generation, especially when using LLM-powered AI assistants/agents to work from vague requirements or exploratory inputs.
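Driven from code rather than a chat window, the flipped interaction becomes a simple loop: send the instruction, collect the user’s answer, and repeat until the model signals it has enough. A sketch of that control flow, with `ask_model` as a deterministic stub standing in for a real LLM call (the stub and its DONE marker are assumptions for illustration):

```python
# Sketch of the Flipped Interaction pattern as a loop.
# `ask_model` is a stub for a real LLM client; a real prompt would
# instruct the model to reply "DONE" once it has enough detail.

QUESTIONS = [
    "What feature or user story are the test cases for?",
    "What are the input constraints and acceptance criteria?",
]

def ask_model(history):
    """Stub: return the next clarifying question, or DONE."""
    asked = sum(1 for turn in history if turn["role"] == "model")
    return QUESTIONS[asked] if asked < len(QUESTIONS) else "DONE"

def flipped_interaction(answers):
    """Run the question/answer loop and return the transcript."""
    history = []
    for answer in answers:
        question = ask_model(history)
        if question == "DONE":
            break
        history.append({"role": "model", "content": question})
        history.append({"role": "user", "content": answer})
    return history

transcript = flipped_interaction(
    ["Login form", "Email + password, locks after 5 tries", "spare answer"]
)
print(len(transcript))  # two question/answer pairs before DONE
```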
✅ 2. QA Root Cause Exploration (Based on Second Prompt)
Original:
I would like you to ask me questions to help me diagnose a problem with my Internet…
QA Adaptation or Variation:
I would like you to ask me questions to help me diagnose a defect or test failure in our software.
Ask questions to narrow down the root cause, based on symptoms like error messages, environment behavior, and test results.
Your goal is to identify the two most likely causes of the issue.
Ask one question at a time to gather just enough context.
Ask me the first question.
🧪 Use Case: Ideal for debugging failed test runs, flaky tests, or failed CI builds — especially when combined with a log parser or trace summarizer assistant.
Tip 2 – Pattern—Refinement
- From now on, whenever I ask a question, suggest a better version of the question to use instead
- (Optional) Prompt me if I would like to use the better version instead
Examples:
- From now on, whenever I ask a question, suggest a better version of the question to use instead
- From now on, whenever I ask a question, suggest a better version of the question and ask me if I would like to use it instead
Tailored Examples:
- Whenever I ask a question about dieting, suggest a better version of the question that emphasizes healthy eating habits and sound nutrition. Ask me for the first question to refine.
- Whenever I ask a question about who is the greatest of all time (GOAT), suggest a better version of the question that puts multiple players’ unique accomplishments into perspective. Ask me for the first question to refine.
🛠️ General Instruction (QA-Specific Version)
From now on, whenever I ask a question related to software testing or QA, suggest a refined version of the question that improves clarity, adds relevant context (e.g., system type, test level, constraints), or improves the quality of test-related outputs.
Optionally, ask me if I’d like to use the improved version instead.
💡 Tailored QA Examples
🧪 Example 1: General QA Inquiry
Whenever I ask a question about testing, suggest a better version of the question that:
- Specifies the test type (e.g., functional, performance, security),
- Identifies the test artifact (e.g., UI, API, user story), and
- Helps elicit high-quality, actionable answers from LLM-powered AI assistants or agents. Ask me for the first testing question to refine.
🧪 Example 2: Test Case Design
Whenever I ask for help writing test cases, suggest a better version that clarifies:
- The input artifact (e.g., requirement, feature spec),
- The expected format (e.g., table, Gherkin),
- And the desired coverage (e.g., positive, negative, edge cases). Prompt me to rephrase it before generating test cases.
🧪 Example 3: Bug Reporting
Whenever I ask how to describe a bug, suggest a better version that includes:
- The environment and steps to reproduce,
- What was expected vs what occurred,
- And the severity or impact on users. Ask me if I want to use the improved version to create the report.
🧪 Example 4: Root Cause Exploration
Whenever I ask what caused a test failure or flaky behavior, suggest a better version that includes:
- The type of test (manual/automated),
- The symptoms or logs observed,
- And whether it was environment-, data-, or logic-related. Ask me if I want to proceed with the refined diagnostic prompt.
🧪 Example 5: LLM Prompt Tuning (for QA Use)
Whenever I ask an LLM-powered AI assistant to generate test cases, ask if I want to refine the prompt by specifying:
- The persona (e.g., “Act as a QA engineer”),
- Output format (e.g., markdown table),
- And input source (e.g., user story, UI spec, API response). Then, ask me if I want to continue with the improved version.
Tip 3 – Pattern—Cognitive Verifier
To use the Cognitive Verifier Pattern, your prompt should make the following fundamental contextual statements:
- When you are asked a question, follow these rules
- Generate a number of additional questions that would help more accurately answer the question
- Combine the answers to the individual questions to produce the final answer to the overall question
Examples:
- When you are asked a question, follow these rules. Generate a number of additional questions that would help you more accurately answer the question. Combine the answers to the individual questions to produce the final answer to the overall question.
Tailored Examples:
- When you are asked to create a recipe, follow these rules. Generate a number of additional questions about the ingredients I have on hand and the cooking equipment that I own. Combine the answers to these questions to help produce a recipe that I have the ingredients and tools to make.
- When you are asked to plan a trip, follow these rules. Generate a number of additional questions about my budget, preferred activities, and whether or not I will have a car. Combine the answers to these questions to better plan my itinerary.
🧠 Cognitive Verifier Pattern – QA-Specific Core Prompt
When you are asked a testing or QA-related question, follow these rules:
- Generate a number of clarifying sub-questions to gather additional context — such as system type, test level, constraints, environments, or formats.
- Use the answers to those sub-questions to refine your understanding of the problem.
- Combine the insights to produce a more accurate and targeted final response to the original question.
🧪 Tailored QA Examples
✅ Test Case Generation (UI/API Testing)
When you are asked to generate test cases, follow these rules:
- Generate clarifying questions about the feature behavior, input constraints, UI or API element types, and any expected output format (e.g., table, Gherkin).
- Use the answers to those questions to create more accurate and comprehensive test cases.
- Combine all insights into a structured test suite that aligns with best practices in QA.
✅ Defect Triage / Root Cause Diagnosis
When asked to help investigate a failing test case or defect, follow these rules:
- Ask diagnostic questions about the test environment, recent code changes, logs or stack traces, and whether it is reproducible or flaky.
- Use those answers to narrow down the most likely root causes.
- Combine findings into a summary of the top 1–2 causes and recommended next steps for investigation.
✅ Regression Risk Assessment
When asked to assess regression risk for a change, follow these rules:
- Generate questions about the affected features, test coverage, dependencies, and historical failure patterns.
- Use the responses to assess which areas are at risk.
- Combine this into a list of priority regression test areas for QA to focus on.
✅ QA Strategy Planning
When asked to design a QA strategy for a new product or feature, follow these rules:
- Ask questions about the project timeline, release frequency, test levels needed (unit, integration, UI), and team skill levels/tools available.
- Combine those answers into a strategy that includes test planning, automation, metrics, and risk areas.
✅ Test Data Design
When asked to generate test data, follow these rules:
- Ask about field constraints, data types, valid/invalid ranges, formatting rules, and edge case expectations.
- Use the answers to ensure test data is accurate, diverse, and realistic for the given test scenario.
🧰 Optional Follow-Up Prompt to Use During QA Practice
Would you like me to apply the Cognitive Verifier Pattern now?
If yes, please ask your testing question, and I’ll break it down with sub-questions before giving you a final, more accurate answer.
Tip 4 – Pattern—Audience Persona
To use this pattern, your prompt should make the following fundamental contextual statements:
- Explain X to me.
- Assume that I am Persona Y.
You will need to replace “Y” with an appropriate persona, such as “have limited background in computer science” or “a healthcare expert”. You will then need to specify the topic X that should be explained.
Examples:
- Explain large language models to me. Assume that I am a bird.
- Explain how the supply chains for US grocery stores work to me. Assume that I am Genghis Khan.
👥 Audience Persona Pattern – QA-Specific Core Prompt
When creating a response, tailor it to the intended audience.
Follow these rules:
- Identify the audience persona (e.g., tester, developer, test lead, product manager).
- Adapt your language, level of detail, and format to suit that persona’s goals, knowledge level, and priorities.
- Present the final response in a way that is actionable and relevant for that specific audience.
🧪 Tailored QA Examples
✅ 1. For an Entry-level Tester
When explaining how to design boundary value test cases, tailor the response for an entry-level tester.
Use simple language, provide concrete examples, and avoid heavy jargon.
Focus on helping them learn and apply the technique correctly with a real-world UI field like a date or price.
✅ 2. For a Developer
When describing a failed test case, tailor the response for a developer who needs to debug the issue.
Emphasize:
- Reproducibility (steps, environment, data)
- Logs or error messages
- Suspected root causes
Avoid QA jargon and stick to concise, actionable info.
✅ 3. For a Product Manager
When summarizing test results, tailor the response for a product manager.
Highlight:
- High-level risk areas
- Blocker issues
- Feature readiness
Avoid technical details or test design language unless necessary. Present status in terms of business impact and release risk.
✅ 4. For an AI Test Assistant
When writing a prompt to an AI test case generator, tailor your instruction for an LLM test assistant that needs clear context and strict output formats.
Include:
- The test level (UI, API)
- Desired output format (table, Gherkin, JSON)
- Any constraints (e.g., positive cases only)
Use precise, structured language with no ambiguity.
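Strict output formats pay off because the assistant’s reply can then be checked mechanically before it enters a test suite. A small sketch that validates a JSON reply against an expected shape; the required keys here are an assumption modeled on the table columns used earlier in this guide:

```python
import json

# Validate that an AI test assistant's reply is a JSON array of
# test cases carrying the keys we asked for. Keys are illustrative.
REQUIRED_KEYS = {"id", "description", "steps", "expected_result"}

def validate_test_cases(raw_reply):
    """Return the parsed test cases, or raise ValueError with details."""
    cases = json.loads(raw_reply)  # raises ValueError on malformed JSON
    if not isinstance(cases, list):
        raise ValueError("expected a JSON array of test cases")
    for i, case in enumerate(cases):
        missing = REQUIRED_KEYS - case.keys()
        if missing:
            raise ValueError(f"case {i} is missing keys: {sorted(missing)}")
    return cases

reply = (
    '[{"id": "TC-1", "description": "Valid login", '
    '"steps": "Enter valid email/password", '
    '"expected_result": "Dashboard shown"}]'
)
print(len(validate_test_cases(reply)))  # 1
```

Rejecting malformed replies early keeps formatting drift in the model’s output from silently corrupting downstream test artifacts.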
✅ 5. For a QA Lead or Test Manager
When creating a test plan or coverage summary, tailor it for a QA lead.
Emphasize:
- Coverage gaps
- Risk-based prioritization
- Alignment with sprint goals
Use organized tables, tags, and traceability to requirements.
🔁 Practice Prompt for Testers
Whenever I ask a testing-related question, prompt me to identify the intended audience for the response (e.g., developer, product owner, junior tester, AI agent).
Then, suggest a version of the response tailored to that audience.
Tip 5 – Pattern—Game Play (For Learning, Retention and Practicing)
Game Play Pattern
- To use this pattern, include two key things in your prompt:
- Say: “Let’s play a game about X” or “Create a game for me about X.”
- Give at least one rule or how the game should work.
Replace X with what the game is about—like cooking, time travel, or solving mysteries. Then explain the rules, such as “Give me choices each turn” or “Give me a score based on how well I do.”
Examples:
- Make a cooking challenge game for one player. Each round, give me a random set of ingredients. I have to come up with a fun recipe idea. You rate my recipe from 1 to 10 based on creativity, flavor, and how well it might work. After a few rounds, tell me my final score and give me a cooking title based on how I did.
- Create a silly emoji guessing game. Each round, show me a set of 3–5 emojis that hint at a word, movie, or phrase. I have to guess what it means. If I get it right, give me a point. If I’m wrong, give me a funny hint. Let’s play 5 rounds and then give me my final score.
🎮 Game Play Pattern (QA-Specific Version)
Structure of the Prompt Pattern:
- Create a game around [Testing Topic] OR We are going to play a [QA skill] game
- Define the rules of the game (what I can do, how to progress, how scoring or feedback works)
- Start the interaction (e.g., first question, scenario, or challenge)
🧪 QA-Specific Game Examples
1. Test Case Coverage Explorer Game
Prompt:
Create a game for me called Test Coverage Explorer.
In this game, I will explore different features of an application, and for each one, you will challenge me to identify:
- One positive test case
- One negative test case
- One edge case
Give me points based on how complete or creative my test cases are.
Start with the first feature: “User Login”.
2. Bug Hunt Adventure
Prompt:
We’re going to play a Bug Hunt Adventure game.
You’ll show me screens, behaviors, or log snippets from a fictional app. My job is to:
- Spot functional bugs
- Classify the severity
- Suggest reproduction steps
Award points based on accuracy and completeness.
Show me the first buggy screen or behavior.
3. AI Test Prompt Debugger Game
Prompt:
Let’s play a game called Prompt Debugger Challenge.
You’ll show me a flawed test generation prompt. I have to:
- Spot what’s wrong with it (e.g., missing format, no persona)
- Rewrite it to improve it
For every good revision, give me feedback and a score.
Show me the first flawed prompt.
4. Requirement-to-Test Mapping Game
Prompt:
Create a game for me called Spec Decoder.
Each round, give me one line of a user story or requirement.
I have to:
- Write at least one matching test case
- Justify why it covers the requirement
If I miss edge cases or misinterpret the spec, deduct points.
Let’s start with the first story: “As a user, I want to upload a file with drag-and-drop support.”
5. QA Strategy Builder Game
Prompt:
We are going to play the QA Strategy Builder game.
Each round, you give me a mini project scenario (e.g., “two-week sprint with mobile-first UI and no automation in place”).
I have to:
- Choose the most effective test strategy
- Justify why it fits
Score my responses and help me refine my choices.
Start with the first scenario.
6. Traceability Tracker Game
Prompt:
Let’s play Traceability Tracker.
You’ll give me:
- A feature description
- A few test cases
I need to:
- Map each test to the requirement
- Identify missing test coverage
Reward bonus points for finding coverage gaps.
Show me the first set.
7. Defect Detective Game
Prompt:
Start a game called Defect Detective.
In each round, show me:
- A vague or incomplete bug description
I have to:
- Reconstruct the missing information (environment, repro steps, severity, expected vs actual)
Rate how complete and accurate my reconstructed bug report is.
Give me the first defective bug entry.
🧰 Bonus Prompt to Activate the Pattern in an LLM-powered AI assistant
From now on, when I say “Let’s play a QA game,”
- Ask me which skill area I want to focus on (test case writing, bug analysis, prompt design, etc.)
- Choose or generate a game format based on that
- Explain the rules
- Start the first round
Tip 6 – Pattern—Template
To use this pattern, your prompt should make the following fundamental contextual statements:
- I am going to provide a template for your output
- X is my placeholder for content
- Try to fit the output into one or more of the placeholders that I list
- Please preserve the formatting and overall template that I provide
- This is the template: PATTERN with PLACEHOLDERS
You will need to replace “X” with an appropriate placeholder, such as “CAPITALIZED WORDS” or “<PLACEHOLDER>”. You will then need to specify a pattern to fill in, such as “Dear <FULL NAME>” or “NAME, TITLE, COMPANY”.
Examples:
- Create a random strength workout for me today with complementary exercises. I am going to provide a template for your output. CAPITALIZED WORDS are my placeholders for content. Try to fit the output into one or more of the placeholders that I list. Please preserve the formatting and overall template that I provide. This is the template: NAME, REPS @ SETS, MUSCLE GROUPS WORKED, DIFFICULTY SCALE 1–5, FORM NOTES
- Please create a grocery list for me to cook macaroni and cheese, garlic bread, and marinara sauce from scratch. I am going to provide a template for your output. <PLACEHOLDER> is my placeholder for content. Try to fit the output into one or more of the placeholders that I list. Please preserve the formatting and overall template that I provide. This is the template: Aisle <name of aisle>: <item needed from aisle>, <qty> (<dish(es) used in>)
To use this pattern, tell the AI three things in your prompt:
- “I’m giving you a template to follow.”
- “Here’s how to recognize where your answers should go—look for placeholders like X or <PLACEHOLDER>.”
- “Please fill in those spots and keep the same structure and formatting.”
Templates help you control how the AI formats its answers—great for making lists, letters, reports, or checklists.
Examples:
- Make a daily schedule for me. I’m giving you a template to follow. Use <TIME>, <ACTIVITY>, and <NOTES> as placeholders. Try to fit the output into this format and keep it neat. Template: <TIME> — <ACTIVITY> — <NOTES>
- Plan a movie night with friends. I’ll give you a format to use. Fill in the ALL CAPS placeholders. Template: MOVIE TITLE: GENRE; SNACK: TYPE — PREP REQUIRED; ACTIVITY BEFORE/AFTER: WHAT WE DO
- Write a packing list for a 3-day hiking trip. Use this structure and fill in the items in the right spots. Template: <DAY>: — CLOTHES: <LIST> — GEAR: <LIST> — FOOD: <LIST>
- Generate a weekly budget summary. Use this template with the placeholders I provide. Template: CATEGORY: <TYPE OF EXPENSE>; SPENT: $<AMOUNT>; NEEDS REVIEW: <YES/NO>; NOTES: <EXTRA INFO>
🧪 Template Pattern – QA-Specific Adaptation
🔧 How to Use (QA Edition)
I am going to provide a template for your output.
X is my placeholder for content. Try to fit your response into the placeholders I list.
Please preserve the formatting and structure of the template I provide.
This is the template:
[YOUR QA TEMPLATE WITH PLACEHOLDERS]
🧪 Examples of QA Prompts Using the Template Pattern
✅ 1. Test Case Table Template
Prompt:
I’m going to provide a template for your output. X is my placeholder for content.
Try to fit the test cases into the placeholders I list.
Preserve the formatting of this table.
This is the template:
| TEST ID | SCENARIO | STEPS | EXPECTED RESULT |
|---------|----------|-------|-----------------|
| X       | X        | X     | X               |
✅ 2. Gherkin Scenario Template
Prompt:
I will provide a template for your output. <PLACEHOLDER> is where you’ll insert content.
Keep the Gherkin formatting exactly as written.
This is the template:
Feature: <FEATURE NAME>
Scenario: <SCENARIO TITLE>
Given <INITIAL CONTEXT>
When <ACTION TAKEN>
Then <EXPECTED OUTCOME>
✅ 3. Structured Bug Report Template
Prompt:
Use this structured template for output. My placeholders are all-caps.
Fit the bug report into the correct fields and preserve the layout.
This is the template:
BUG TITLE: TITLE TEXT
ENVIRONMENT: DEVICE/OS/APP VERSION
STEPS TO REPRODUCE:
1. STEP ONE
2. STEP TWO
EXPECTED RESULT: EXPECTATION
ACTUAL RESULT: ACTUAL BEHAVIOR
SEVERITY: LEVEL
✅ 4. Test Data Schema Template
Prompt:
Use this test data template. <FIELD> is the placeholder to replace.
Maintain the structure for easy loading.
This is the template:
{
  "test_name": "<TEST NAME>",
  "input": "<TEST INPUT>",
  "expected_output": "<EXPECTED OUTPUT>",
  "validation_type": "<ASSERTION TYPE>"
}
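When the AI follows this template faithfully, its output can be consumed programmatically. A minimal sketch of filling and serializing the template for a test harness; the concrete field values and helper name are illustrative, not from any real system:

```python
import json

# Field names come from the template above; the values below are illustrative.
TEMPLATE_FIELDS = ["test_name", "input", "expected_output", "validation_type"]

def fill_template(values):
    """Build one test-data record, refusing missing or unexpected fields."""
    if set(values) != set(TEMPLATE_FIELDS):
        raise ValueError(f"expected exactly the fields {TEMPLATE_FIELDS}")
    # Preserve the template's field order in the output record.
    return {field: values[field] for field in TEMPLATE_FIELDS}

record = fill_template({
    "test_name": "login_valid_credentials",
    "input": "registered email + correct password",
    "expected_output": "redirect to dashboard",
    "validation_type": "equality",
})
payload = json.dumps(record)  # ready for loading into a data-driven test
```

Validating the structure like this catches the most common failure mode of template prompts: the model renaming or dropping a field.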
✅ 5. Traceability Matrix Template
Prompt:
I am going to provide a template. X indicates placeholders for you to fill.
Stick to this table structure for traceability.
This is the template:
| REQUIREMENT ID | USER STORY | TEST CASE ID | COVERAGE STATUS |
|----------------|------------|--------------|-----------------|
| X              | X          | X            | X               |
🔁 Optional Follow-Up Instruction:
Whenever I provide a template, automatically apply the Template Pattern — preserve the structure, replace only placeholders, and never add extra formatting or chatty text.
Tip 7 – Pattern—Meta Language Creation
This pattern lets you define your own shorthand or symbolic language inside a prompt.
The core structure is:
When I say X, I mean Y (or I want you to do Y).
Replace X with your chosen word, symbol, or phrase, and map it to the intended meaning Y.
Examples
- Custom keyword for variations
- Rule: When I say variations(<something>), I mean: Give me ten different variations of <something>.
- Usage:
variations(company names for a company that sells software services for prompt engineering)
variations(a marketing slogan for pickles)
- Custom notation for task dependencies
- Rule: When I say Task X [Task Y], I mean: Task X depends on Task Y being completed first.
- Usage:
- “Describe the steps for building a house using my task dependency language.”
- “Provide an ordering for the steps:
- Boil Water [Turn on Stove]
- Cook Pasta [Boil Water]
- Make Marinara [Turn on Stove]
- Turn on Stove [Go Into Kitchen]”
🧪 Meta Language Creation Pattern – QA-Specific Adaptation
🔧 Core Instruction (QA Edition)
When I say X, I mean Y (i.e., I want you to interpret X as Y or perform Y).
Replace X with a symbol, tag, or shorthand I’ll use in prompts.
Replace Y with the expanded meaning or instructions.
🧪 QA-Specific Meta Language Examples
✅ 1. Test Variation Generator Macro
Definition:
When I say tc_variants(<scenario>), I mean generate 3–5 variations of test cases based on the given scenario (positive, negative, edge).
Usage:
tc_variants(user uploads file via drag-and-drop)
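A macro like this can also be expanded client-side before the prompt is ever sent, so every team member gets the same expansion. A minimal sketch; the helper name is illustrative and the expansion wording simply mirrors the definition above:

```python
import re

def expand_macros(prompt):
    """Expand the tc_variants(<scenario>) shorthand into the full
    instruction it stands for, per the definition above."""
    def replace(match):
        scenario = match.group(1)
        return ("Generate 3-5 test case variations (positive, negative, "
                f"edge) for the scenario: {scenario}")
    # Match tc_variants(...) without nested parentheses.
    return re.sub(r"tc_variants\(([^)]*)\)", replace, prompt)

expanded = expand_macros("tc_variants(user uploads file via drag-and-drop)")
```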
✅ 2. Dependency Language for Test Setup
Definition:
When I say Test X [Test Y], I mean Test X is dependent on Test Y (i.e., Test Y must pass or run first).
Usage:
Reset Password [Request Reset Email]
Confirm Email Link [Reset Password]
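Under the hood, this notation describes a dependency graph, and a valid run order is a topological sort. A minimal Python sketch under that interpretation; the parser, function names, and the assumption of one prerequisite per test are illustrative:

```python
import re
from collections import defaultdict, deque

def parse_dependencies(lines):
    """Parse 'Test X [Test Y]' notation: X depends on Y running first.
    Assumes at most one bracketed prerequisite per line."""
    deps = {}
    for line in lines:
        m = re.match(r"\s*(.+?)\s*\[(.+?)\]\s*$", line)
        if m:
            deps[m.group(1)] = m.group(2)
    return deps

def execution_order(deps):
    """Kahn's algorithm: prerequisites come before their dependents."""
    nodes = set(deps) | set(deps.values())
    dependents = defaultdict(list)          # prerequisite -> tests needing it
    in_degree = {n: 0 for n in nodes}
    for test, prereq in deps.items():
        dependents[prereq].append(test)
        in_degree[test] += 1
    queue = deque(sorted(n for n in nodes if in_degree[n] == 0))
    order = []
    while queue:
        n = queue.popleft()
        order.append(n)
        for d in dependents[n]:
            in_degree[d] -= 1
            if in_degree[d] == 0:
                queue.append(d)
    return order

suite = ["Reset Password [Request Reset Email]",
         "Confirm Email Link [Reset Password]"]
order = execution_order(parse_dependencies(suite))
```

Running this on the example above yields the only valid order: the reset email is requested, then the password is reset, then the link is confirmed.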
✅ 3. Output Schema Tags
Definition:
When I write :gherkin: in my prompt, generate test cases in Given–When–Then format.
When I write :json:, return test cases as JSON objects.
When I write :table:, format as a Markdown table.
Usage:
Generate test cases for user login :gherkin:
Output boundary value test inputs for a phone number field :json:
✅ 4. Meta-Term for Test Gaps
Definition:
When I say gap_check(<test_suite>, <feature>), I want you to analyze the test suite and identify missing cases for the given feature.
Usage:
gap_check(test_cases_for_cart, checkout button state logic)
✅ 5. Test Oracle Shortcut
Definition:
When I say expect(<input>), I mean derive the expected output from the system under test for that input, with reasoning.
Usage:
expect({ "start_date": "2025-01-01", "end_date": "2025-01-02" })
✅ 6. Bug Language Shortcut
Definition:
When I say bug(<symptom>, <module>), I mean generate a structured bug report that includes:
- Repro Steps
- Environment
- Severity
- Suggested Cause (if applicable)
Usage:
bug(“crash on form submit”, “user registration”)
✅ 7. Coverage Mapping DSL
Definition:
When I say map(<requirement>) -> <test_ids>, I mean create a traceability link between the requirement and its covering tests.
Usage:
map(REQ-1234) -> TC001, TC002, TC007
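If you keep these map(...) lines in a file outside the chat, a few lines of code can turn them into a coverage report. A minimal sketch; the function names and the second requirement ID are illustrative:

```python
import re

def parse_mapping(line):
    """Parse 'map(REQ-1234) -> TC001, TC002' into (requirement, [test ids])."""
    m = re.match(r"map\((.+?)\)\s*->\s*(.+)", line)
    req = m.group(1)
    tests = [t.strip() for t in m.group(2).split(",")]
    return req, tests

def coverage_gaps(requirements, mappings):
    """Return requirements that no test case covers."""
    matrix = dict(parse_mapping(m) for m in mappings)
    return [r for r in requirements if not matrix.get(r)]

reqs = ["REQ-1234", "REQ-1235"]           # REQ-1235 is a hypothetical example
mappings = ["map(REQ-1234) -> TC001, TC002, TC007"]
gaps = coverage_gaps(reqs, mappings)       # flags the unmapped requirement
```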
✅ How to Activate in GPT
When I define a meta language rule like X = Y, remember that rule for the rest of the session.
Apply it any time I use X.
Ask me to clarify if you’re unsure about a new symbol or term.
🔁 Optional Teaching Prompt
Help me build a meta language for test case generation.
I’ll define custom shortcuts or syntax, and you’ll remember and apply them until I say “reset rules.”
Start by asking what kind of shorthand I want to use.
Tip 8 – Pattern—Recipe
To use this pattern, your prompt should make the following fundamental contextual statements:
- I would like to achieve X
- I know that I need to perform steps A,B,C
- Provide a complete sequence of steps for me
- Fill in any missing steps
- (Optional) Identify any unnecessary steps
You will need to replace “X” with an appropriate task. You will then need to specify the steps A, B, C that you know need to be part of the recipe / complete plan.
Examples:
- I would like to purchase a house. I know that I need to perform steps make an offer and close on the house. Provide a complete sequence of steps for me. Fill in any missing steps.
- I would like to drive to NYC from Nashville. I know that I want to go through Asheville, NC on the way and that I don’t want to drive more than 300 miles per day. Provide a complete sequence of steps for me. Fill in any missing steps.
🧪 Recipe Pattern – QA-Specific Adaptation
🔧 Core Prompt Format (QA Edition):
I would like to achieve [Testing Goal].
I know that I need to perform steps A, B, C.
Provide a complete sequence of steps for me.
Fill in any missing steps.
(Optional) Identify any unnecessary steps I may have included.
🧪 QA-Specific Examples of the Recipe Pattern
✅ 1. Manual Test Execution Process
Prompt:
I would like to manually execute a regression test suite for a web application.
I know that I need to:
- Log into the test environment
- Execute each test step-by-step
- Report bugs in the tracker
Provide a complete sequence of steps.
Fill in any missing tasks (like environment prep, test data setup).
✅ 2. Automation Pipeline Setup
Prompt:
I would like to set up a test automation pipeline for a React web app using Selenium and GitHub Actions.
I know I need to:
- Write test scripts
- Push them to GitHub
- Create a GitHub Actions workflow
Please fill in the complete setup process, including any missing configuration or dependency management steps.
✅ 3. AI-Generated Test Case Review Process
Prompt:
I would like to review AI-generated test cases for quality and correctness.
I know I need to:
- Validate coverage
- Check test format
- Compare expected results
Provide a full review checklist and workflow.
Fill in any other best practices I should follow during this review.
✅ 4. End-to-End QA Strategy for New Feature
Prompt:
I would like to build a complete QA strategy for a new payment gateway feature.
I know that I need to:
- Understand the business requirements
- Write test cases
- Execute both UI and API tests
Provide a full plan that includes pre-testing, risk analysis, environment setup, and post-release validation.
✅ 5. Bug Lifecycle Process
Prompt:
I would like to understand the complete lifecycle of a bug from discovery to closure.
I know that I need to:
- Reproduce the issue
- Log it in the tracking system
- Retest once fixed
Please complete the full set of steps, including triage, assignment, severity classification, and regression testing.
🧰 Bonus Prompt for QA Teaching Assistants
I will give you a QA task and a few steps I know are required.
Your job is to apply the Recipe Pattern:
- Complete the full sequence
- Fill in missing phases or tasks
- Point out if I’ve listed something out of order or unnecessary
Ready? Here’s the task…
Tip 9 – Pattern—Alternative Approaches
To use this pattern, your prompt should make the following fundamental contextual statements:
- If there are alternative ways to accomplish a task X that I give you, list the best alternate approaches
- (Optional) Compare/contrast the pros and cons of each approach
- (Optional) Include the original way that I asked
- (Optional) Prompt me for which approach I would like to use
You will need to replace “X” with an appropriate task.
Examples:
- For every prompt I give you, if there are alternative ways to word the prompt, list the best alternate wordings. Compare/contrast the pros and cons of each wording.
- For anything that I ask you to write, determine the underlying problem that I am trying to solve and how I am trying to solve it. List at least one alternative approach to solve the problem and compare/contrast that approach with the original approach implied by my request to you.
🧪 Alternative Approaches Pattern – QA-Specific Adaptation
🔧 Core Prompt Format (QA Edition)
If there are alternative ways to accomplish the testing task X that I give you,
- List the best alternative approaches
- (Optional) Compare and contrast the pros and cons of each approach
- (Optional) Include the original way I asked
- (Optional) Ask me which one I’d like to proceed with
🧪 QA-Specific Examples Using This Pattern
✅ 1. Task: Test Login Functionality
Prompt:
I want to test the login functionality of a web application.
If there are alternative approaches to test it, list them.
Compare their pros and cons.
Ask me which one to proceed with.
LLM Might Respond:
- Approach A: Manual exploratory testing of the UI
- ✅ Great for usability and visual validation
- ❌ Not repeatable, time-consuming
- Approach B: Automated UI testing using Selenium
- ✅ Repeatable and scalable
- ❌ Maintenance overhead, flaky selectors
- Approach C: API-level authentication testing
- ✅ Fast, less brittle
- ❌ Doesn’t test front-end interactions
➤ Which approach would you like to proceed with?
✅ 2. Task: Generate Test Data for Date Field
Prompt:
I need test data for a date_of_birth field.
List alternative approaches to generate it and explain when each is most appropriate.
LLM Might Respond:
- Approach A: Manual boundary value selection
- Good for small, focused suites
- Approach B: Use of Faker or synthetic data libraries
- Great for bulk realistic test data
- Approach C: Use AI to generate context-aware edge cases
- Useful for intelligent stress scenarios
Original request: Manual generation
Suggest using Faker if scale is important. Would you like to switch?
✅ 3. Task: Review AI-Generated Test Cases
Prompt:
I want to review AI-generated test cases.
What are the alternative ways I can validate their correctness?
LLM Might Respond:
- Approach A: Manual review against requirements (original)
- Approach B: Cross-validate against a traceability matrix
- Approach C: Pair AI output with historical bug patterns to identify gaps
Pros/cons include accuracy, coverage, and effort.
➤ Choose your preferred validation path.
✅ 4. Task: Perform Regression Testing
Prompt:
I need to perform regression testing after a major release.
Show me alternative approaches and their trade-offs.
LLM Might Respond:
- Approach A: Full manual regression suite
- ✅ Thorough, human-verified
- ❌ Time-intensive
- Approach B: Selective regression using risk-based prioritization
- ✅ Fast, business-aligned
- ❌ May miss low-risk bugs
- Approach C: Automation-driven smoke + focused exploratory
- ✅ Hybrid balance of speed and depth
- ❌ Requires strong tooling and skills
✅ 5. Task: Set Up Test Automation for a Mobile App
Prompt:
I want to automate tests for a mobile app.
List the best frameworks and strategies I could use.
Include trade-offs.
LLM Might Respond:
- Appium: Cross-platform, open source
- Detox: Fast for React Native
- XCUITest / Espresso: Platform-native, stable
- Pros/cons include learning curve, community, and maintenance
Which one fits your tech stack and goals?
🧰 Optional System Prompt Add-on for Assistants
For every QA task or testing prompt I give you:
- If there are alternative ways to solve it, list the top 2–3
- Compare them briefly
- Ask if I want to proceed with one or explore further
Tip 10 – Pattern—Ask for input
Ask for Input Pattern
To use this pattern, your prompt should make the following fundamental contextual statements:
- Ask me for input X
You will need to replace “X” with an input, such as a “question”, “ingredient”, or “goal”.
Examples:
- From now on, I am going to cut/paste email chains into our conversation. You will summarize what each person’s points are in the email chain. You will provide your summary as a series of sequential bullet points. At the end, list any open questions or action items directly addressed to me. My name is Jill Smith. Ask me for the first email chain.
- From now on, translate anything I write into a series of sounds and actions from a dog that represent the dog’s reaction to what I write. Ask me for the first thing to translate.
🧪 Ask for Input Pattern – QA-Specific Adaptation
🔧 Core Format (QA Edition)
Ask me for input X (e.g., a user story, failed test log, bug report, test data schema, acceptance criteria, etc.).
Once I provide the input, proceed with the corresponding task (e.g., generate test cases, suggest fixes, analyze logs, etc.).
🧪 QA-Specific Prompt Examples
✅ 1. Test Case Generator Assistant
Prompt:
From now on, you will generate test cases based on any user story or feature description I give you.
Format the output as a table with: Test ID, Scenario, Steps, Expected Result.
Ask me for the first user story.
✅ 2. Bug Report Rewriter
Prompt:
From now on, you’ll take any bug descriptions I send and reformat them into a structured bug report template.
Include: Title, Environment, Repro Steps, Expected vs Actual, and Severity.
Ask me for the first bug description.
✅ 3. Log Analysis Assistant
Prompt:
From now on, you’ll analyze test failure logs or CI output to identify probable causes.
Summarize what you find in plain language and suggest the next troubleshooting step.
Ask me for the first log snippet.
✅ 4. Test Data Synthesizer
Prompt:
From now on, you’ll generate structured test data in JSON or CSV format based on field definitions I provide.
Ask me for the first test data schema or field spec.
✅ 5. Traceability Mapper
Prompt:
You will map test cases to requirements to ensure full coverage.
Ask me for the first requirement or user story to trace.
✅ 6. Prompt Evaluator (Meta QA)
Prompt:
From now on, you’ll critique my test generation prompts and suggest better versions with improved clarity and format control.
Ask me for the first prompt to review.
✅ 7. QA Checklist Generator
Prompt:
You will generate a detailed QA checklist for any feature or workflow I describe.
Ask me for the first feature description or workflow.
✅ 8. Failure Root Cause Advisor
Prompt:
From now on, when I give you a failed test case, you’ll ask clarifying questions and suggest the top two likely causes.
Ask me for the first failed test case to investigate.
✅ Optional Meta Prompt for Teaching Mode
Prompt:
From now on, we are practicing prompt engineering for testing use cases.
Each time, ask me what kind of QA task I want help with (test case generation, bug rewriting, traceability, etc.).
Then prompt me for the input needed to complete the task.
Tip 11 – Pattern—Outline expansion
To use this pattern, your prompt should make the following fundamental contextual statements:
- Act as an outline expander.
- Generate a bullet point outline based on the input that I give you and then ask me which bullet point you should expand on.
- Create a new outline for the bullet point that I select.
- At the end, ask me what bullet point to expand next.
- Ask me what to outline.
Examples:
- Act as an outline expander. Generate a bullet point outline based on the input that I give you and then ask me for which bullet point you should expand on. Each bullet can have at most 3–5 sub bullets. The bullets should be numbered using the pattern [A-Z].[i-v].[* through ****]. Create a new outline for the bullet point that I select. At the end, ask me what bullet point to expand next. Ask me what to outline.
🧪 Outline Expansion Pattern – QA-Specific Adaptation
🔧 Core Prompt Format (QA Edition)
Act as an outline expander for QA and testing topics.
Based on the input I give you, generate a high-level bullet point outline with numbered structure.
Then ask me which bullet point to expand.
Expand only that bullet into a sub-outline (3–5 subpoints).
After each expansion, ask me which point to expand next.
Start by asking: “What testing or QA topic should I outline for you?”
🧪 QA-Specific Prompt Example
✅ Prompt to Use:
Act as an outline expander for software testing.
Generate a bullet point outline based on the input I give you and then ask me which bullet to expand on.
Each bullet can have at most 3–5 sub-bullets.
Number bullets using: A., A.i, A.i.*
When I select a bullet, expand just that section.
At the end of each expansion, ask me what to expand next.
Ask me what testing or QA topic you should outline.
🧪 Sample Use Cases in Testing Context
✅ Use Case 1: Creating a QA Strategy
Input:
Outline a QA strategy for a mobile e-commerce app.
GPT Might Return:
A. Test Planning
B. Test Design Techniques
C. Automation Strategy
D. Performance & Load Testing
E. Post-release Monitoring
Then asks:
Which bullet would you like me to expand?
✅ Use Case 2: Expanding a Bug Lifecycle
Input:
Outline the full bug lifecycle in JIRA.
GPT Might Return:
A. Bug Identification
B. Bug Reporting
C. Bug Triage
D. Bug Fix and Retest
E. Closure and Regression
→ Expanding B might yield:
B.i. Write clear title and summary
B.ii. Add steps to reproduce
B.iii. Attach environment and logs
B.iv. Assign severity and priority
✅ Use Case 3: Planning Test Coverage for a Feature
Input:
Outline the test coverage areas for a password reset feature.
GPT Might Return:
A. Input Validation
B. Email Trigger Behavior
C. Token Expiry and Validation
D. Reset Form UX and Validation
E. Security Edge Cases
→ You pick C, and it expands:
C.i. Token creation and length
C.ii. Token expiration timing
C.iii. Single-use validation
C.iv. Handling of expired tokens
C.v. Invalid or tampered tokens
✅ Use Case 4: Test Automation Stack Planning
Input:
Outline an automation architecture for a hybrid mobile/web app.
GPT Might Return:
A. Framework Selection
B. Test Layer Strategy
C. CI/CD Integration
D. Test Data and Environment Management
E. Reporting and Analytics
🧰 Optional System Prompt for Persistent Expansion Mode
From now on, act as a QA outline expander.
Each time I give you a topic, create a structured outline.
Wait for me to choose a bullet to expand.
Repeat the process recursively.
Always ask: “What would you like me to outline next?”
Tip 12 – Pattern—Menu Action
To use this pattern, your prompt should make the following fundamental contextual statements:
- Whenever I type: X, you will do Y.
- (Optional, provide additional menu items) Whenever I type Z, you will do Q.
- In the end, you will ask me for the next action.
You will need to replace “X” with an appropriate pattern, such as “estimate <TASK DURATION>” or “add FOOD”. You will then need to specify an action for the menu item to trigger, such as “add FOOD to my shopping list and update my estimated grocery bill”.
Examples:
- Whenever I type: “add FOOD”, you will add FOOD to my grocery list and update my estimated grocery bill. Whenever I type “remove FOOD”, you will remove FOOD from my grocery list and update my estimated grocery bill. Whenever I type “save” you will list alternatives to my added FOOD to save money. In the end, you will ask me for the next action. Ask me for the first action.
🧪 Menu Actions Pattern – QA-Specific Adaptation
🔧 Core Prompt Format (QA Edition)
Whenever I type: X, you will do Y.
(Optional:) Whenever I type: Z, you will do Q.
At the end of each action, ask me for the next command.
Ask me for the first command to begin.
🧪 QA-Specific Prompt Example
Whenever I type: add TEST CASE, you will ask me for the scenario and generate a structured test case in table format.
Whenever I type: review BUG, you will ask me for a bug description and format it into a structured bug report.
Whenever I type: analyze FAILURE, you will ask for the failure logs and suggest likely root causes.
Whenever I type: build CHECKLIST, you will ask for a feature or workflow and create a QA checklist.
Whenever I type: generate TEST DATA, you will ask for the data fields and return a small, valid test dataset in JSON or table format.
Whenever I type: save SUMMARY, you will summarize the session so far, including all test cases, bugs, failures, and checklists.
After each action, you will ask me: “What would you like to do next?”
Ask me for the first command to begin.
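The menu itself is essentially a command dispatch table, the same structure you would use in code. A minimal sketch of that idea; the handler stubs and their return strings are illustrative placeholders, not real implementations:

```python
def add_test_case():
    return "Describe the scenario for the new test case."

def review_bug():
    return "Paste the bug description to reformat."

def save_summary():
    return "Session summary: (collected test cases, bugs, checklists)"

# Command strings mirror the menu above; handlers are illustrative stubs.
MENU = {
    "add TEST CASE": add_test_case,
    "review BUG": review_bug,
    "save SUMMARY": save_summary,
}

def dispatch(command):
    """Run the handler for a menu command, or list the valid options."""
    handler = MENU.get(command)
    if handler is None:
        return f"Unknown command: {command!r}. Options: {sorted(MENU)}"
    return handler()
```

Keeping the command-to-action mapping in one table makes it easy to extend the menu, exactly as the optional "Whenever I type Z, you will do Q" clause does in the prompt.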
🧪 Example User Interaction (Once Pattern Is Active)
User: add TEST CASE
GPT: Great! Please describe the scenario or user story you want a test case for.
User: review BUG
GPT: Please paste the bug description or reproduction notes, and I’ll format it into a structured bug report.
User: save SUMMARY
GPT: Here’s a summary of everything you’ve done so far:
- 3 Test Cases
- 1 Bug Report
- 1 Checklist
Would you like to export or continue? → What would you like to do next?
🧰 Bonus: Custom QA Menu as Reference
Menu Actions:
- add TEST CASE → Generate test case from scenario
- review BUG → Format and triage a bug description
- analyze FAILURE → Investigate test logs
- build CHECKLIST → Create checklist from feature or flow
- generate TEST DATA → Produce test data in JSON/table
- save SUMMARY → Summarize session activity
Tip 13 – Pattern—Fact check list
To use this pattern, your prompt should make the following fundamental contextual statements:
- Generate a set of facts that are contained in the output
- The set of facts should be inserted at POSITION in the output
- The set of facts should be the fundamental facts that could undermine the veracity of the output if any of them are incorrect
You will need to replace POSITION with an appropriate place to put the facts, such as “at the end of the output”.
Examples:
- Whenever you output text, generate a set of facts that are contained in the output. The set of facts should be inserted at the end of the output. The set of facts should be the fundamental facts that could undermine the veracity of the output if any of them are incorrect.
🧪 Fact Check List Pattern – QA-Specific Adaptation
🔧 Core Prompt Format (QA Edition)
Whenever you generate a QA-related response (e.g., test cases, bug analysis, strategy), also generate a fact check list.
The fact check list should be inserted at the end of the output.
The facts should represent key assumptions or inputs that, if incorrect, would undermine the validity of the response.
🧪 QA-Specific Prompt Example
When you output test cases, generate a fact check list at the end.
The fact check list should include any assumptions about:
- Field validation rules
- Feature behavior
- Input constraints
- User roles or states
- Expected system behavior
These facts must be true for the test cases to be valid. If any are incorrect, the test suite may be misleading or flawed.
🧪 Sample Use Case
✅ User Prompt:
Generate test cases for a password reset feature that uses an email-based verification link.
✅ GPT Output:
Test Cases:
1. Valid email triggers password reset link
2. Invalid email shows error message
3. Expired token returns “Link expired” message
4. Successful password reset allows login with new credentials
✅ Fact Check List (Appended at End):
- The system uses email as the primary method for password reset.
- The reset link expires after a set time window (e.g., 15 minutes).
- The system prevents reuse of password reset tokens.
- Users must be registered and verified before initiating a reset.
- Login is blocked until the reset process is completed.
🧰 Additional QA Scenarios Using This Pattern
✅ Bug Report Analyzer
When you summarize a defect, insert a fact check list that highlights critical facts assumed in your diagnosis (e.g., environment, reproduction conditions, error logs).
✅ Strategy Recommendation
When you propose a QA strategy, include a fact check list for project constraints, team size, sprint cadence, and automation readiness.
These facts should be true for the strategy to apply.
✅ Failure Diagnosis
When analyzing a flaky test, add a fact check list of assumptions such as:
- Selector stability
- Test data consistency
- Network stability
- CI configuration
🧰 Optional Instruction for Persistent Use
From now on, after every QA-related response, automatically include a Fact Check List at the end.
List 3–5 core assumptions that, if incorrect, would compromise the output’s validity.
Tip 14 – Pattern—Tail Generation
To use this pattern, your prompt should make the following fundamental contextual statements:
- At the end, repeat Y and/or ask me for X.
You will need to replace “Y” with what the model should repeat, such as “repeat my list of options”, and “X” with what it should ask for, such as “the next action”. These statements usually need to be at the end of the prompt or next to last.
Examples:
- Act as an outline expander. Generate a bullet point outline based on the input that I give you and then ask me for which bullet point you should expand on. Create a new outline for the bullet point that I select. At the end, ask me what bullet point to expand next. Ask me what to outline.
- From now on, at the end of your output, add the disclaimer “This output was generated by a large language model and may contain errors or inaccurate statements. All statements should be fact checked.” Ask me for the first thing to write about.
🧪 Tail Generation Pattern – QA-Specific Adaptation
🔧 Core Prompt Format (QA Edition)
At the end of your output, repeat Y and/or ask me for X.
Replace Y with something to be reiterated (e.g., a checklist, test data format, list of actions).
Replace X with the next action the user should take or provide.
🧪 QA-Specific Prompt Examples
✅ 1. Test Case Generator with Continuation Prompt
Prompt:
You will generate a table of structured test cases based on my input.
At the end, ask me what feature or scenario to generate test cases for next.
Tail:
➤ What feature or scenario would you like me to generate test cases for next?
✅ 2. Bug Triage Assistant with Summary Reminder
Prompt:
From now on, when you analyze a bug report, end the output with a summary of key fields: Severity, Reproducibility, Affected Module.
Then, ask me for the next bug to review.
Tail:
Summary: Severity = High, Reproducibility = 100%, Affected Module = Cart Checkout
➤ Please send the next bug to triage.
✅ 3. QA Strategy Builder with Progress Tracker
Prompt:
Build QA strategy outlines based on my input. At the end of each output, repeat the current list of completed sections, and ask me which section to expand next.
Tail:
Completed Sections:
- Test Objectives
- Risk-Based Prioritization
➤ What section would you like to expand next?
✅ 4. Log Analyzer with Fact Reminder
Prompt:
When you analyze test logs or CI failures, include a short fact check list of assumptions.
At the end, ask me for the next log or failure to analyze.
Tail:
Fact Check:
- Assumes Chrome 120+
- Network latency not exceeding 300ms
➤ Please provide the next log or failure.
✅ 5. Prompt Refinement Coach with Suggested Reuse
Prompt:
You will help me improve my test generation prompts.
At the end of each revision, repeat the improved prompt and ask me if I want to reuse it, revise further, or try a new one.
Tail:
Suggested Prompt: “Generate 5 boundary test cases for a phone number field with numeric and length constraints.”
➤ Would you like to reuse, revise, or start a new one?
🧰 Optional System Instruction for Persistent Use
From now on, at the end of your QA-related outputs, summarize key takeaways (Y) and ask what I’d like to do next (X).
Always include a clear continuation or decision point.
Tip 15 – Pattern—Semantic Filter
To use this pattern, your prompt should make the following fundamental contextual statements:
- Filter this information to remove X
You will need to replace “X” with an appropriate definition of what you want to remove, such as “names and dates” or “costs greater than $100”.
Examples:
- Filter this information to remove any personally identifying information or information that could potentially be used to re-identify the person.
- Filter this email to remove redundant information.
🧪 Semantic Filter Pattern – QA-Specific Adaptation
🔧 Core Prompt Format (QA Edition)
Filter this information to remove X, where X is a QA-specific category like:
- Internal debug noise
- Redundant test steps
- Personally identifiable data in logs
- Non-functional or irrelevant test cases
- Low-priority issues from bug lists
🧪 QA-Specific Prompt Examples
✅ 1. Clean Up Test Steps
Prompt:
Filter this test case to remove redundant or obvious UI steps (e.g., “click into field”, “move mouse”).
Keep only essential test actions that reflect logic or state changes.
✅ 2. Bug Report Sanitization
Prompt:
Filter this bug report to remove developer-specific jargon and reword it for a cross-functional team.
Strip out unnecessary internal comments or debug notes.
✅ 3. Log File Cleaner
Prompt:
Filter this log file to remove repeating messages, irrelevant INFO logs, and any timestamps.
Keep only WARN, ERROR, or stack traces that relate to the failure.
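The behavior this prompt asks for is deterministic enough to implement directly, which is useful for sanity-checking the model's output. Here is a minimal Java sketch (class and method names are illustrative, not from any QA tool) that keeps WARN/ERROR lines and drops INFO noise and consecutive duplicates:

```java
import java.util.ArrayList;
import java.util.List;

// Illustrative sketch of the "log file cleaner" filter described above:
// keep WARN/ERROR lines, drop INFO noise and exact consecutive repeats.
public class LogFilter {
    public static List<String> clean(List<String> lines) {
        List<String> kept = new ArrayList<>();
        String previous = null;
        for (String line : lines) {
            // Drop INFO-level noise entirely.
            if (line.contains("INFO")) continue;
            // Keep only WARN/ERROR entries (a fuller version would also keep
            // indented stack-trace continuation lines, which lack a level tag).
            if (!(line.contains("WARN") || line.contains("ERROR"))) continue;
            // Skip exact consecutive repeats.
            if (line.equals(previous)) continue;
            kept.add(line);
            previous = line;
        }
        return kept;
    }
}
```

Comparing the model's filtered log against a deterministic pass like this is a quick way to spot lines the LLM silently dropped or invented.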
✅ 4. Test Suite Slimmer
Prompt:
Filter this test suite to remove low-value or duplicate test cases.
Only keep unique cases that test a distinct condition or path.
✅ 5. PII Removal for QA Screenshots or Logs
Prompt:
Filter this test data to remove any personally identifying information (PII) like usernames, emails, phone numbers, or IP addresses.
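For PII removal in particular, a regex-based scrub is a common deterministic complement to the prompt. The sketch below is illustrative only; the patterns are simplified examples, not production-grade PII detection:

```java
import java.util.regex.Pattern;

// Illustrative regex-based PII scrub for QA test data, mirroring the prompt
// above. Patterns are deliberately simple examples, not exhaustive detectors.
public class PiiScrubber {
    private static final Pattern EMAIL = Pattern.compile("[\\w.+-]+@[\\w-]+\\.[\\w.]+");
    private static final Pattern PHONE = Pattern.compile("\\b\\d{3}[-.]\\d{3}[-.]\\d{4}\\b");
    private static final Pattern IPV4  = Pattern.compile("\\b(?:\\d{1,3}\\.){3}\\d{1,3}\\b");

    public static String scrub(String text) {
        String out = EMAIL.matcher(text).replaceAll("[EMAIL]");
        out = PHONE.matcher(out).replaceAll("[PHONE]");
        out = IPV4.matcher(out).replaceAll("[IP]");
        return out;
    }
}
```

A hybrid workflow works well: scrub the obvious patterns mechanically first, then ask the model to catch context-dependent identifiers (usernames, account numbers) the regexes miss.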
✅ 6. Failure Analysis Refiner
Prompt:
Filter this failure analysis to remove guesses or speculative language.
Keep only evidence-based observations from logs or test artifacts.
✅ 7. Checklist Simplifier
Prompt:
Filter this QA checklist to remove any non-actionable items or tasks that are out of project scope.
✅ 8. User Story Coverage Mapper
Prompt:
Filter this list of test cases to remove those that don’t directly trace back to the user story or acceptance criteria.
🧰 Optional Persistent Instruction for QA Use
From now on, when I provide QA-related content (test cases, logs, bug reports),
- Ask me if I want to filter anything.
- If yes, ask: “What would you like me to filter out?”
- Then apply the Semantic Filter Pattern to clean the content accordingly.
Tip 16 – Patterns—Google’s Gemini
*Reference: https://ai.google.dev/gemini-api/docs/prompting-strategies*
GEMINI Common Prompt Patterns for LLMs
1. QA Persona Pattern
Intent: Make the LLM act in a role that matches a QA context — e.g., senior test automation engineer, security tester, UX QA specialist — so it delivers insights aligned with that persona.
Structure:
Act as a [QA role/persona]. [Your task/question].
Examples:
- “Act as a senior test automation engineer. Design Selenium test cases for the login flow of an e-commerce site.”
- “As a performance tester, explain how you’d identify and address bottlenecks in an API handling 10k requests per second.”
- “You are a strict accessibility auditor. Review the following webpage HTML for WCAG compliance.”
2. Instruction/Constraint Pattern for QA
Intent: Define precise test requirements, coverage constraints, or output formats so responses match the QA documentation standards.
Structure:
Use keywords like “Only,” “Ensure,” “Must,” or “Format as”. Use delimiters for clarity.
Examples:
- “List exactly 5 high-priority test cases for checkout functionality. Each must include Preconditions, Steps, Expected Results, and Priority.”
- “Instructions:
  - Analyze the given bug report for completeness.
  - Identify missing details based on QA best practices.
  - Suggest improvements.
  Bug Report: The payment form doesn’t work on mobile.”
3. QA Chain-of-Thought (CoT) Pattern
Intent: Ensure step-by-step reasoning in test design, defect diagnosis, or coverage analysis.
Structure:
Use prompts like “Let’s think step by step” or “Explain your reasoning.”
Examples:
- “Identify potential causes for intermittent API failures. Let’s think step by step starting from server logs, then network, then code changes.”
- “Design regression tests for a new search feature. Work through functional, negative, and performance aspects systematically.”
4. Few-Shot/One-Shot QA Prompting
Intent: Provide example test cases, bug reports, or QA outputs so the LLM follows the same style.
Structure:
“Here are examples: [Example Input → Example Output]… Now do this: [New Input].”
Examples:
- Example: Input: "Search for 'shoes' in mobile app"
  → Output: "Verify search returns relevant results within 2 seconds. Expected: 10 most relevant items displayed."
  Now: "Search for 'hats' in desktop app"
- “Here’s a formatted bug report. Now generate one for the following defect: checkout button unresponsive on Safari.”
5. QA Question Refinement Pattern
Intent: Improve the clarity of QA-related questions to get better, more targeted answers.
Structure:
“Whenever I ask a QA question, suggest a more specific version that includes scope, environment, and expected outcome.”
Examples:
- “How do I test a login page?” → “Do you want functional, security, performance, or usability testing? Which browsers and devices should be included?”
- “What’s the best way to write test cases?” → “Are you writing for manual testers, automation scripts, or both? Which framework?”
6. Audience Persona Pattern for QA
Intent: Adjust explanations depending on whether the audience is a developer, product owner, QA intern, or end-user.
Structure:
“Explain [QA concept] as if I am [audience type/level].”
Examples:
- “Explain API mocking to me as if I’m a junior QA intern with no coding background.”
- “Describe the value of regression testing to a non-technical product owner.”
7. Reflective/Self-Correction Pattern for QA
Intent: Review and improve test cases, defect reports, or QA documentation for completeness and accuracy.
Structure:
“Review the following [test case/bug report] and [critique/improve/suggest fixes] based on QA best practices.”
Examples:
- “Review these test cases for coverage gaps and missing edge cases: [cases]”
- “Check this bug report for reproducibility and clarity, then rewrite it for better developer understanding.”
8. Semantic Filter Pattern for QA
Intent: Extract only relevant QA information from logs, specs, or reports.
Structure:
“From the following, extract all [QA-specific info] and format as [X].”
Examples:
- “From these server logs, extract all error codes and their timestamps.”
- “From the release notes, list only the features that require regression testing.”
9. Meta Language Creation Pattern for QA
Intent: Define shorthand for complex QA requests to save time in repeated conversations.
Structure:
“When I say ‘[X]’, I mean ‘[Full QA instruction]’.”
Examples:
- “When I say ‘GEN-BUG’, create a complete bug report template filled in with my notes.”
- “Whenever I type ‘COV-MATRIX’, generate a test coverage matrix for the given feature set.”
10. Scenario/Simulation Pattern for QA
Intent: Simulate real-world QA testing, bug triage meetings, or exploratory test sessions.
Structure:
“Let’s simulate [scenario]. You are [role], I am [role]. [Start scenario].”
Examples:
- “Let’s simulate a defect triage meeting. You are the QA lead, I am the developer. Here’s the first bug: login form fails on Edge browser.”
- “Pretend you are a tester running exploratory tests on a new chat app. Report any issues you encounter.”
Tip 17 – Applied Patterns to Testing & QA
Here’s a high-impact list of prompt patterns you should consider using — especially for your work learning and applying AI in Testing & QA. I’ve grouped them by purpose so you can mix, match, and teach them with intention.
(See Building Applied-patterns GPT example)
🔁 Prompt Refinement & Clarification Patterns
Pattern | Purpose |
---|---|
Cognitive Verifier | Breaks down ambiguous prompts into sub-questions to get better answers |
Prompt Rewriter | Suggests improved versions of user prompts for clarity, specificity, and structure |
Audience Persona | Tailors responses to specific roles (tester, dev, PM, AI assistant) |
Clarification Loop | GPT asks follow-up questions before responding |
Prompt Debugger | GPT explains why a prompt isn’t giving good results and how to fix it |
🧪 Test Case Generation Patterns
Pattern | Purpose |
---|---|
Role-Task-Format (RTF) | Set assistant role, define QA task, and specify output format |
Scenario Coverage | Expands a user story or feature into multiple test categories (positive, negative, edge) |
Test Matrix Builder | Organizes test cases across dimensions (e.g., platforms, browsers, user roles) |
Parametric Test Expansion | Takes one test and generates permutations based on input parameters |
Gherkin Formatter | Forces output to be in Given–When–Then format for BDD |
🛠️ Diagnostic & Planning Patterns
Pattern | Purpose |
---|---|
Root Cause Explorer | GPT asks questions to investigate flaky tests, failures, or system issues |
Failure Pattern Analyzer | GPT summarizes patterns from multiple test logs or bug reports |
QA Strategy Coach | Builds or critiques a QA approach based on constraints, tools, or coverage needs |
Risk-Based Filter | Prioritizes test cases or modules based on business impact and change volatility |
📦 Structured Output Enforcement Patterns
Pattern | Purpose |
---|---|
Template Enforcer | Forces output to match a strict format (table, JSON, YAML, etc.) |
Schema-Aware Generator | GPT uses a predefined output schema to validate format compliance |
Instruction-First Format | Embeds output rules directly in the system or user prompt for test data, test plans, etc. |
🧠 Thinking Assistant Patterns
Pattern | Purpose |
---|---|
Chain-of-Thought | Forces GPT to show reasoning step-by-step before answering |
Few-Shot Prompting | Includes example inputs/outputs before a new user input |
Zero-Shot CoT | Injects reasoning even when only one prompt is given, to improve QA decisions |
Compare-and-Decide | GPT generates multiple options and chooses or explains the best one |
Reflect-and-Revise | GPT critiques its own previous output and offers a revised version |
🧩 Test-Aware Custom Patterns (Learn/Teach/Test Yourself)
Pattern | Purpose |
---|---|
Bug Report Synthesizer | Turns freeform bug info into structured defect templates |
Test Oracle Generator | Generates expected results for given test inputs (especially for AI testing) |
Spec-to-Test Translator | Converts requirements into test cases using traceable logic |
Test Gap Identifier | Identifies what’s missing from a current test set based on inputs or past defects |
Here are the examples:
1. Cognitive Verifier
Example:
User: Generate test cases for the payment module.
GPT: What type of payment (credit card, bank transfer)? Is it web or mobile? Should I include edge cases like expired cards?
Then, generate grounded test cases based on your answers.
2. Prompt Rewriter
Example:
User: Write test cases.
GPT: Would you like a refined version? → “Write 5 boundary value test cases for a login form with email and password inputs.”
3. Audience Persona
Example:
User: Explain test coverage.
GPT: Who is this for — developer, PM, junior tester?
Then tailors the response based on the role.
4. Clarification Loop
Example:
User: Generate test data.
GPT: For which field types? Are there constraints (e.g., date range, format)?
Then proceeds with specific generation.
5. Prompt Debugger
Example:
User: Why is this prompt giving bad test cases?
GPT: Explains prompt ambiguity and suggests a clearer version (e.g., missing persona, format, or artifact).
6. Role-Task-Format (RTF)
Example:
Act as a Test Automation Engineer.
Create test cases for login functionality.
Show as a table with columns: Test ID, Description, Steps, Expected Result.
7. Scenario Coverage
Example:
User: Generate test cases for password reset.
GPT: Breaks it into: positive cases, negative cases, edge conditions, token expiration, API validation, and UI flow.
8. Test Matrix Builder
Example:
User: Build test combinations for login.
GPT: Constructs a matrix: rows = input conditions, columns = platforms (Web, iOS, Android).
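The matrix this pattern asks GPT to construct is a cartesian product of input conditions and platforms. A minimal Java sketch (names are illustrative) of the same expansion, handy for verifying the model produced every combination:

```java
import java.util.ArrayList;
import java.util.List;

// Illustrative sketch of the Test Matrix Builder expansion: every input
// condition crossed with every platform, one row per combination.
public class TestMatrixBuilder {
    public static List<String> build(List<String> conditions, List<String> platforms) {
        List<String> rows = new ArrayList<>();
        for (String condition : conditions) {
            for (String platform : platforms) {
                rows.add(condition + " on " + platform);
            }
        }
        return rows;
    }
}
```

With 4 input conditions and 3 platforms you should see exactly 12 rows; if the model returns fewer, combinations were silently dropped.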
9. Parametric Test Expansion
Example:
User: Expand this test case across browsers.
GPT: Generates versions for Chrome, Firefox, Edge, Safari using same test logic.
10. Gherkin Formatter
Example:
User: Create a test case for login.
GPT: Outputs in Given–When–Then format.
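The target shape the Gherkin Formatter pattern enforces is simple enough to express directly. A minimal Java sketch (names are illustrative) of the Given–When–Then structure the model's output should match:

```java
// Illustrative sketch of the Given-When-Then shape the Gherkin Formatter
// pattern forces the model's output into.
public class GherkinFormatter {
    public static String format(String precondition, String action, String expected) {
        return "Given " + precondition + "\n"
             + "When " + action + "\n"
             + "Then " + expected;
    }
}
```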
11. Root Cause Explorer
Example:
User: My UI test keeps failing randomly.
GPT: Asks about flakiness, logs, selectors, network issues, etc.
Then narrows down likely root causes.
12. Failure Pattern Analyzer
Example:
User: Analyze my recent test failures.
GPT: Summarizes patterns — e.g., 40% failures in auth module, 25% from flaky locators.
13. QA Strategy Coach
Example:
User: I’m launching a new mobile app. Help plan QA.
GPT: Asks about release schedule, devices, team, coverage goals.
Then returns a tailored strategy.
14. Risk-Based Filter
Example:
User: Prioritize my test cases.
GPT: Applies filters based on change history, usage frequency, past bugs, and criticality.
15. Template Enforcer
Example:
User: Output test cases.
GPT: Responds using only a predefined table format.
16. Schema-Aware Generator
Example:
User: Create JSON test data.
GPT: Validates output against a schema and only produces compliant results.
17. Instruction-First Format
Example:
System Prompt: Always return test cases in Gherkin.
User: Write tests for password reset.
GPT: Automatically uses Given–When–Then format.
18. Chain-of-Thought (CoT)
Example:
User: Why is this test failing?
GPT: Thinks step-by-step: input → environment → execution → result → conclusion.
19. Few-Shot Prompting
Example:
User: Here are two test cases. Generate more in the same style.
GPT: Follows format and structure for consistent outputs.
20. Zero-Shot CoT
Example:
User: What might cause this flaky test?
GPT: Thinks through three causes even without prior examples.
21. Compare-and-Decide
Example:
User: Which of these two test plans is better?
GPT: Compares and recommends with reasoning.
22. Reflect-and-Revise
Example:
User: Improve this test suite.
GPT: Critiques the current version and suggests revisions.
23. Bug Report Synthesizer
Example:
User: Here’s what I saw in the app…
GPT: Outputs structured bug report: title, steps to reproduce, expected vs actual.
24. Test Oracle Generator
Example:
User: What’s the expected output for input X?
GPT: Calculates and explains the expected result logically.
25. Spec-to-Test Translator
Example:
User: Here’s a requirement.
GPT: Converts it into traceable test cases with IDs and coverage links.
26. Test Gap Identifier
Example:
User: Here are my current test cases.
GPT: Analyzes them and suggests missing scenarios based on feature behavior.
Tip 18 – Building Custom GPTs
A. The Basics: Provide clear structured instructions
Goal: Learn to design GPT instructions that produce consistent, high-quality QA outputs, reducing back-and-forth and ambiguity.
Before
You are a member of the QA team responsible for answering questions using information in the attached Test Plan and Requirements Specification documents. Your goal is to significantly reduce the frequency of directing engineers to reach out to QA leads through qa@shadeofhue.com, but you will instruct the user to do so if you are unable to find the answers in the provided documents. Some questions may involve industry standards or external references (e.g., ISO 29119 test documentation requirements, OWASP security testing guidelines), so you can also use web browsing to get those answers.
After
***Context: You are a member of the QA team. Attached are the Test Plan and Requirements Specification documents.
***Instructions:
- If the user’s question is answered in the provided documents, respond using that information.
- If the user’s question involves external standards or references (e.g., ISO 29119, OWASP testing guidelines), use web browsing to find accurate, up-to-date information.
- If the user’s question cannot be answered with the above steps, tell them to email qa@shadeofhue.com.
Optimized for AI-Assisted Test Case Generation & Defect Investigation Example
***Context: You are a QA automation assistant. Attached are the Test Plan, Requirements Specification, and Defect Log.
***Instructions:
- If the request involves generating test cases, derive them from the requirements, acceptance criteria, and relevant sections of the Test Plan.
- Include positive, negative, boundary, and edge cases, with clear preconditions, steps, and expected results.
- If the request involves defect investigation, analyze the Defect Log for patterns, affected features, and reproduction steps.
- Where needed, reference external QA standards or guidelines (e.g., ISO 29119, OWASP, ISTQB) via web browsing to ensure completeness and accuracy.
- If information is missing or ambiguous, clearly state your assumptions and/or ask for clarification before proceeding.
- If you cannot address the request with the above steps, direct them to email qa@shadeofhue.com.
B. Building your GPTs—Crafting structured instructions
1. Context: Specifies the role, domain, and purpose the GPT should follow to fulfill its task.
2. Instructions: Step-by-step guidance for exactly what the GPT should do and how.
3. Response Format: Specify the output format (e.g., markdown table, JSON, bullet list).
4. Example (optional): Give a realistic input/output sample for predictable results.
Pro-Tip: Add “confirm with the user after each step” when you want detailed, iterative responses.
Good Practices: S.C.O.P.E.
- S: Steps—Break down the instructions, tasks, and process into a logical flow
- C: Context—Specifies the role, domain, and purpose the LLM should follow to fulfill its task
- O: Output Format—Specify table, JSON, bullet points, etc.
- P: Persona – Define tone, style, or expertise level for target audience
- E: Examples – Few-shot prompting to set expectations
Other tips:
- Use delimiters (""", **, etc.) to separate inputs.
- Include ample context for relevance.
- Specify output length or detail level.
- Always state assumptions when data is incomplete.
Example: Building GPTs for QA Test Case Generation
Context:
You are a QA automation assistant. Your role is to generate comprehensive, high-quality test cases based on the provided Test Plan, Requirements Specification, and any attached acceptance criteria or user stories. You must follow industry best practices and relevant testing standards (e.g., ISTQB, ISO 29119, OWASP).
Instructions:
- Analyze the source material (requirements, acceptance criteria, related artifacts).
- Generate test cases covering:
- Positive scenarios
- Negative scenarios
- Boundary value tests
- Edge cases
- Security and performance considerations where applicable
- Each test case must include:
- Test ID
- Preconditions
- Test steps (each step on a new line)
- Expected results
- Use guidelines in the [uploaded Knowledge files] when applicable.
- Reference relevant standards where applicable.
- If information is missing or ambiguous, clearly state assumptions or request clarification.
Response Format:
Test ID | Description | Preconditions | Steps | Expected Result | Notes |
---|---|---|---|---|---|
Example:
Request:
Generate test cases for the login feature based on the following acceptance criteria:
- Valid username and password allow access to dashboard
- Invalid credentials show an error message
- Account locks after 5 failed attempts
Response:
Test ID | Description | Preconditions | Steps | Expected Result | Notes |
---|---|---|---|---|---|
TC-001 | Login with valid credentials | User account exists with valid username/password | 1. Navigate to login page 2. Enter valid username/password 3. Click “Login” | User is redirected to dashboard | Positive test |
TC-002 | Login with invalid credentials | User account exists | 1. Navigate to login page 2. Enter invalid username/password 3. Click “Login” | Error message is displayed | Negative test |
TC-003 | Account lock after 5 failed attempts | User account exists | 1. Navigate to login page 2. Enter wrong password 5 times 3. Attempt to login again | Account is locked | Security check |
Exercise: Build a Custom GPT for Generating Test Cases
Learning Goals
By the end of this exercise, you will:
- Apply the S.C.O.P.E. framework to craft GPT instructions for QA.
- Produce test cases that are structured, complete, and consistent.
- Handle ambiguities explicitly, like a professional QA engineer.
Timebox
- 5 min – Read the scenario
- 15 min – Draft your S.C.O.P.E. instructions
- 15 min – Run your GPT instructions & refine
- 10 min – Peer review and finalize
Scenario (choose one)
A) Login & Lockout
- AC1: Valid username+password → redirect to /dashboard.
- AC2: Invalid credentials → inline error “Invalid username or password.”
- AC3: After 5 consecutive failed attempts, account locks for 15 minutes.
- AC4: “Remember me” keeps session for 7 days.
B) Cart & Checkout
- AC1: Add/remove items; quantities 1–99.
- AC2: Subtotal updates instantly; tax = 8.25%.
- AC3: Promo codes: SAVE10 (10% off, ≥ $50); one code per order.
- AC4: Out-of-stock item cannot proceed to payment.
C) Password Reset
- AC1: Reset link expires in 30 minutes.
- AC2: Password policy: 8–64 chars, 1 upper, 1 lower, 1 digit, 1 special.
- AC3: Success banner: “Password updated.”
- AC4: Throttle: max 3 reset emails per hour per user.
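Scenario C's password policy (AC2) is a good candidate for a deterministic test oracle, the kind of expected-result function the Test Oracle Generator pattern in Tip 17 produces. A minimal Java sketch (class name is illustrative):

```java
// Illustrative test oracle for Scenario C, AC2: 8-64 chars, at least one
// uppercase, one lowercase, one digit, and one special character.
public class PasswordPolicyOracle {
    public static boolean isValid(String password) {
        if (password.length() < 8 || password.length() > 64) return false;
        boolean upper = false, lower = false, digit = false, special = false;
        for (char c : password.toCharArray()) {
            if (Character.isUpperCase(c)) upper = true;
            else if (Character.isLowerCase(c)) lower = true;
            else if (Character.isDigit(c)) digit = true;
            else special = true; // anything non-alphanumeric counts as special here
        }
        return upper && lower && digit && special;
    }
}
```

An oracle like this lets you check GPT-generated test cases mechanically: every "valid password" input it proposes should return true, every negative case false.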
Deliverable 1 — Your S.C.O.P.E. Instruction Set
S — Steps
Break down the process the GPT should follow in logical order.
Example:
- Parse requirements & acceptance criteria.
- Identify Positive, Negative, Boundary, and Security test scenarios.
- Write each test case with ID, Preconditions, Steps, Expected Result, and Notes.
- Highlight any missing or ambiguous information under “Notes” with [ASSUMP] tags.
C — Context
Define the role, domain, and scope for the GPT.
Example:
You are a QA automation assistant generating structured test cases for a web application feature, using the provided acceptance criteria and industry best practices (ISTQB, ISO 29119, OWASP).
O — Output Format
Specify exactly how the GPT should format the answer.
Example:
Test ID | Title | Preconditions | Steps | Expected Result | Coverage Type | Notes |
---|---|---|---|---|---|---|
P — Persona
Define the style, tone, and expertise level.
Example:
Senior QA engineer — concise, risk-aware, and explicit about assumptions. Always uses consistent terminology and structured test case formats.
E — Examples
Provide one or more input/output samples to set expectations (few-shot prompting).
Example:
Input:
- AC: Valid login redirects to dashboard.
Output:
Test ID | Title | Preconditions | Steps | Expected Result | Coverage Type | Notes |
---|---|---|---|---|---|---|
LOGIN-AUTH-001 | Valid login redirects to dashboard | Active user exists | 1. Go to /login 2. Enter valid username 3. Enter valid password 4. Click Login | Redirect to /dashboard | Positive | |
Deliverable 2 — Guardrails
- Do not invent features or messages not in the requirements.
- Tag all assumptions with [ASSUMP].
- If ≥2 critical gaps are found, output a “Clarification Needed” section before generating test cases.
Deliverable 3 — Test It
- Put your S.C.O.P.E. instructions into GPT’s system/instructions box.
- Provide your chosen scenario’s ACs as the user prompt.
- Generate at least 8 test cases covering all relevant categories.
Deliverable 4 — Self-Check
- Covers Positive, Negative, Boundary, and any applicable Security/Rate-limit tests.
- Format matches exactly what you specified.
- Steps are clear and testable.
- All ambiguities are surfaced as [ASSUMP] notes.
Peer Review
Swap with a partner and check:
- Is the S.C.O.P.E. instruction complete and specific?
- Are all relevant coverage types included?
- Is the output schema followed exactly?
- Are assumptions visible and justified?
Example “Good” Output Snippet (Login & Lockout)
Test ID | Title | Preconditions | Steps | Expected Result | Coverage Type | Notes |
---|---|---|---|---|---|---|
LOGIN-AUTH-001 | Valid login redirects to dashboard | Active user exists | 1. Go to /login 2. Enter valid username 3. Enter valid password 4. Click Login | Redirect to /dashboard | Positive | |
LOGIN-AUTH-003 | Account locks after 5 failed attempts | Active user exists | 1. Fail login 5 times with wrong password 2. Attempt 6th login with valid password | Login blocked; account locked for 15 minutes | Boundary/Security | [ASSUMP] Lockout applies to username, not IP |
Tip 19 – Extracting Locators & Building Your POM
Here is a repeatable workflow to have GPT do the heavy lifting of extracting locators and building your POM. You’ll get step-by-step tasks with prompts you can paste. I’ll assume Selenium + Java POM, but I’ll note Playwright variants when it matters.
High-level Flow
1) Crawl & inventory pages → 2) Extract elements → 3) Propose locator strategies → 4) Validate & harden → 5) Generate POM classes → 6) Smoke-check with code → 7) Export locator catalog → 8) Wire into CI check.
Step-by-step with Prompts
1) Define the rules (once per project)
Goal: Set ground rules so GPT doesn’t output garbage locators.
Prompt (paste once at the start of the session):
You are an automation architect. Follow these rules strictly:
- Prefer data-testid or data-qa if present, then id, then scoped CSS. Avoid absolute XPath and nth-child unless scoped to a stable parent.
- Never use utility CSS classes (e.g., Tailwind/Bootstrap utility classes) as primary selectors.
- Add explicit waits for visibility/attachment. No Thread.sleep.
- For each element, propose: primary locator, 1 fallback, and a rationale.
- Output a “Locator Quality” rating: Strong/Medium/Weak with justification.
- Generate maintainable POM: one class per page/component, private By fields, public methods.
- Include a smoke test snippet that uses the POM (open page, interact with 1–2 elements).
- Output a machine-readable locator catalog in JSON for CI checks.
(If Playwright: replace Selenium By with Playwright page.locator(...) and use built-in auto-waits.)
2) Crawl target pages (manual or Agent Mode)
Goal: Provide GPT with page URLs and scope.
Prompt:
Target app pages:
- <https://example.com/login>
- <https://example.com/dashboard>
- <https://example.com/settings>
Task:
1) For each URL, list key functional elements users interact with (inputs, buttons, nav links, forms, modals).
2) Do NOT invent elements. Only include what you can inspect or what I confirm.
3) Output a simple table per page: Element, Purpose, Visible Text (if any), Attributes observed (id, name, data-*, class).
(If you’re using Agent/Computer Use, ask it to open each page, expand menus/modals, then extract visible elements. Otherwise, you can paste the relevant HTML snippets.)
3) Propose locators (primary + fallback)
Goal: Turn discovered elements into robust locators.
Prompt:
For the following extracted elements (paste your element tables or HTML snippets), propose:
- primary locator (prefer data-testid > id > scoped CSS)
- fallback locator (CSS or XPath with text when appropriate)
- Locator Quality: Strong/Medium/Weak
- Rationale
Output format (markdown table):
| Element | Primary Locator | Fallback Locator | Quality | Rationale |
4) Harden & validate (reject weak locators)
Goal: Force GPT to improve weak ones.
Prompt:
Review your table. For any "Weak" or "Medium" locator:
1) Try scoping to a stable parent container (form#login, [data-testid='container'], role-based landmarks if present).
2) Replace utility class selectors with data-* or id when possible.
3) For text selectors (XPath), normalize-space and ensure exact match or safe contains() only when text is stable.
Regenerate the table. Only accept "Strong" or "Medium" with justification; eliminate "Weak" unless no alternative exists.
5) Generate POM classes (implementation-ready)
Goal: Produce POM code with clean methods.
Prompt:
Generate Selenium Java POM classes for the Login and Dashboard pages using the final locator table.
Requirements:
- Class per page: LoginPage, DashboardPage
- private final WebDriver driver; private final WebDriverWait wait;
- By fields for locators (primary and fallback kept in a private getter method if needed)
- Methods: login(user, pass), isLoggedIn(), navigateTo(section), etc.
- Use ExpectedConditions.visibilityOfElementLocated for waits.
- No Thread.sleep.
- Include a minimal smoke test snippet that uses both POMs.
Output: code blocks per class + a small JUnit/TestNG example.
(Playwright variant: use page.locator() with getByTestId if available, rely on auto-wait, and add expect checks if using Playwright Test.)
6) Add a smoke test and fix breakages fast
Goal: Run and immediately catch broken locators.
Prompt:
Create a single-file TestNG smoke test:
- Opens Login page, performs login
- Asserts Dashboard key element is visible
- Logs out
- If a locator might be brittle, show a backup get() that tries primary then fallback.
Output a single Java file named SmokeTest.java.
7) Export a locator catalog (JSON) for CI
Goal: Keep a machine-readable record to detect DOM drift.
Prompt:
Generate a JSON locator catalog covering all pages. Schema:
{
"page": "LoginPage",
"elements": [
{
"name": "usernameField",
"primary": {"type": "css", "value": "[data-testid='email']"},
"fallback": {"type": "id", "value": "email"},
"quality": "Strong",
"notes": "Prefer data-testid"
}
]
}
Produce this JSON for all pages we covered.
(Store this file in the repo as locators.json; you’ll use it in CI checks.)
8) CI: Locator drift check (optional but pro move)
Goal: Fail fast when markup changes.
Prompt (for GPT to generate a small checker script):
Generate a Java (or Node) script that:
- Reads locators.json
- Visits each page
- For each element, verifies that primary or fallback resolves to at least one visible element
- Outputs a report with pass/fail per element, and a summary exit code for CI
- Avoid sleeps; use configurable wait timeout
Name it: LocatorDriftCheck.(java|js)
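The generated checker will depend on Selenium or a browser driver, but its core pass/fail logic can be sketched independently with an injectable resolver, which also makes it unit-testable without a browser. The interface and names below are illustrative, not part of any existing tool:

```java
import java.util.Map;

// Illustrative core of a locator drift check: an element "passes" if its
// primary or fallback locator resolves. The resolver is injected so this
// logic can be tested with a stub; in CI it would wrap Selenium lookups.
public class LocatorDriftCheck {
    public interface Resolver {
        boolean resolves(String type, String value); // e.g. ("css", "[data-testid='email']")
    }

    // Each element: name -> [primaryType, primaryValue, fallbackType, fallbackValue]
    public static Map<String, Boolean> check(Map<String, String[]> elements, Resolver resolver) {
        Map<String, Boolean> report = new java.util.LinkedHashMap<>();
        for (Map.Entry<String, String[]> e : elements.entrySet()) {
            String[] loc = e.getValue();
            boolean ok = resolver.resolves(loc[0], loc[1]) || resolver.resolves(loc[2], loc[3]);
            report.put(e.getKey(), ok);
        }
        return report;
    }

    // Summary exit code for CI: 0 if everything resolved, 1 otherwise.
    public static int exitCode(Map<String, Boolean> report) {
        return report.containsValue(false) ? 1 : 0;
    }
}
```

In a real pipeline the element map would be loaded from locators.json and the resolver would call driver.findElements with a configurable wait; the separation keeps the drift logic deterministic and cheap to test.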
Deliverable Examples
A) POM Getter with fallback (Selenium/Java)
private WebElement get(By primary, By... fallbacks) {
try {
return wait.until(ExpectedConditions.visibilityOfElementLocated(primary));
} catch (TimeoutException e) {
for (By fb : fallbacks) {
try {
return wait.until(ExpectedConditions.visibilityOfElementLocated(fb));
} catch (TimeoutException ignored) {}
}
throw e;
}
}
B) Sample JSON (catalog slice)
{
"page": "LoginPage",
"elements": [
{
"name": "usernameField",
"primary": { "type": "css", "value": "[data-testid='email']" },
"fallback": { "type": "id", "value": "email" },
"quality": "Strong",
"notes": "Prefer data-testid; id is stable per dev contract"
},
{
"name": "loginButton",
"primary": { "type": "css", "value": "[data-testid='login-btn']" },
"fallback": { "type": "xpath", "value": "//button[normalize-space()='Login']" },
"quality": "Medium",
"notes": "Text could change; prefer test id"
}
]
}
Guardrails to bake into every prompt
- “Never output absolute XPath paths.”
- “Avoid utility CSS classes and nth-child unless scoped to a stable parent.”
- “Prefer [data-testid] or [data-qa]; if missing, recommend adding them.”
- “Provide a Quality rating and rationale for each locator.”
- “Always include a smoke test snippet using the generated POM.”
Tip 20 – Prompts for Learning
- Game: Do this when you want to use prompts to create interactive learning exercises.
Example:
Create a game for me on the topic of test automation with Java Selenium.
You are the game master, and you will give tasks based on this topic. Each correct answer increases my score.
Ask me questions, and wait for my response before proceeding to the next question. Make sure to give me feedback on how well I did.
After a set number of rounds, give me a summary of my score and performance.
- Understanding: Before you start writing what I asked you, I want you to rate your understanding of my request out of 10. If your understanding is not 10 out of 10, then I want you to ask me all the questions you need me to answer so that you have a 10 out of 10 understanding.
- Save time: Once you have taken into account all the information I have just given you, I want you to only answer “yes.”