Most ChatGPT prompts for language teachers ignore one of the most powerful pieces of context available: the learner's first language. By telling the AI what language your student speaks, you unlock contrastive error prediction, false friend exercises, and vocabulary lists with L1 support, producing materials that can be dramatically more targeted than generic prompts allow.
What this post covers
Including the learner's L1 in your prompts transforms generic AI output into targeted materials that address the specific errors and vocabulary gaps your learner will actually face — based on how their native language differs from the target language.
When you ask ChatGPT to generate a vocabulary exercise "for a B2 learner," it produces something generic and middle-of-the-road. That's not necessarily bad, but it's a missed opportunity. The teacher's value lies in knowing context, and the learner's first language is one of the most useful pieces of context you can give an AI.
Language transfer, the influence of a learner's L1 on their second language acquisition, shapes errors in predictable ways. A learner whose L1 has no articles (Russian, Japanese, Polish) will predictably struggle with "a" vs. "the." A learner whose L1 is Spanish will have rich vocabulary overlap with English but will be blindsided by false friends. A German speaker will often over-apply formal register rules.
These are patterns you can work with, and when you hand that context to an AI, the output can get dramatically more targeted.
Modern second language pedagogy increasingly recognizes that L1 is a resource, not a problem. The practice of translanguaging treats learners' full linguistic repertoire as an asset. The question isn't whether to use L1 in teaching — that's a judgment call that belongs to you — but whether you can use AI to make L1-informed materials faster than you could before.
By asking ChatGPT for a contrastive analysis between your learner's L1 and the target language, you can surface the specific grammar, vocabulary, and pragmatic errors they're most likely to make — and address them proactively in your lesson.
An underused application of AI in language teaching is contrastive analysis: comparing two language systems to predict where learners are likely to struggle. You don't need to be a linguist to do this. You just need to tell ChatGPT what you're looking for.
Prompt: Contrastive error analysis
You are an experienced ESL teacher and linguist. My learner's L1 is Spanish. They are learning English at a B1-B2 level. The topic of our next lesson is business writing, specifically formal emails. Based on the structural and lexical differences between Spanish and English: list 5-7 errors this learner is likely to make in this context. For each error, explain why a Spanish speaker would make it, and suggest a brief corrective example.
Format: a table with columns for Error Type, Example Error, Why It Happens, Corrected Version.
The output from a prompt like this may not be perfect. AI can hallucinate linguistic explanations, so treat it as a starting point to verify against your own knowledge. But it's a fast way to surface patterns you might want to address proactively, before your learner ever makes the mistake.
This works for grammar, but also for pragmatics. Spanish-speaking learners of English, for example, often write emails that sound too direct or too indirect by English conventions. Not from rudeness, but from transferring Spanish epistolary norms. Ask the AI to explore that, and you'll get material you can build an entire lesson around.
AI is genuinely good at generating false friend exercises because the underlying linguistic data is well-represented in its training — and these exercises are among the most memorable for learners, because the surprises stick.
For learners whose L1 shares roots with the target language, false friends are one of the most persistent sources of error — and one of the most memorable things you can teach. The Spanish word embarazada doesn't mean embarrassed; it means pregnant. German Gift doesn't mean gift; it means poison. These surprises tend to lodge in memory far better than ordinary vocabulary.
AI is genuinely good at generating false friend exercises because the underlying data is well-represented in its training. The key is to make the prompt specific to the language pair.
Prompt: False friends exercise
You are an ESL materials designer. Create a false friends exercise for a Spanish-speaking learner of English at B1 level.
Include 8 false friend pairs (Spanish word / English lookalike). For each pair:
- Give the Spanish word and its actual English meaning
- Give the English lookalike word and its actual English meaning
- Write a sentence using the English word correctly
- Write a sentence showing how a Spanish speaker might misuse the English word by transferring the Spanish meaning
Format as a worksheet a learner could complete independently.
Cognates can also help learners, especially at lower levels, see how much vocabulary they already have access to thanks to L1 overlap. Again, AI handles this well when the prompt is specific.
Prompt: Cognates exercise (positive transfer)
You are an ESL materials designer. My learner's L1 is Spanish. They are learning English at A2-B1 level.
Create a vocabulary warm-up exercise that uses Spanish-English cognates to build confidence.
Include 10 cognate pairs where the Spanish and English meanings are genuinely similar.
For each, write: the Spanish word, the English cognate, a short example sentence in English.
Note at the bottom: 2-3 false friends to watch out for that look similar to cognates but aren't.
You can run the same prompts for German-English, French-English, or any pair where cognates and false friends are in play. For language pairs with less overlap, such as Korean-English, shift the focus to phonological interference or grammar transfer instead.
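If you find yourself rerunning these prompts for several learners, it can help to keep one reusable template and swap in the language pair and level. A minimal sketch (the function and template wording here are illustrative assumptions, loosely mirroring the false friends prompt above, not part of the original post):

```python
# Sketch: a reusable prompt template for L1-informed exercises.
# The template text paraphrases the false-friends prompt from this post;
# the function name and parameters are illustrative assumptions.

TEMPLATE = (
    "You are an ESL materials designer. Create a false friends exercise "
    "for a {l1}-speaking learner of {target} at {level} level. "
    "Include {n} false friend pairs ({l1} word / {target} lookalike). "
    "Format as a worksheet a learner could complete independently."
)

def build_prompt(l1: str, target: str = "English",
                 level: str = "B1", n: int = 8) -> str:
    """Fill in the language pair, CEFR level, and number of pairs."""
    return TEMPLATE.format(l1=l1, target=target, level=level, n=n)

# Same scaffold, different language pair:
print(build_prompt("German"))
print(build_prompt("French", level="B2", n=10))
```

Pasting the result into ChatGPT keeps the structure of your best prompt intact while you vary only the learner-specific context.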
It depends on the learner's level — L1 translations reduce cognitive load for A1-A2 learners and speed up initial word learning, while higher-level learners benefit more from target-language-only definitions with L1 notes reserved for the teacher.
Including L1 translations in vocabulary exercises is a genuinely debated judgment call. Some teachers avoid it to maintain target language immersion; others use it strategically, especially at lower CEFR levels where cognitive load is high. AI can generate vocabulary lists with or without L1 support, and it can do it well if you're specific.
Prompt: Vocabulary list with L1 support
You are an ESL teacher preparing materials for a beginner (A1-A2) learner. My learner's L1 is German. The topic is household vocabulary.
Create a vocabulary list of 15 words. For each word include:
- The English word
- The German translation
- A short, simple example sentence in English (maximum 8 words)
- One note flagging if the German translation is a false friend or has a narrower/broader meaning than the English word
Keep the German translation brief — one word or short phrase only.
For higher-level learners, you might strip out the translation column and use L1 only in the teacher notes, as a reference for anticipating questions. The prompt can reflect that too:
Create a B2-level vocabulary list on the topic of environmental policy for a French-speaking learner of English. Do not include French translations in the learner-facing list.
At the bottom, add a teacher note in English only: flag any terms where French interference is likely (false friends, different register, etc.) so I know what to address in class.
When AI generates content in a language you don't speak, you lose the ability to verify it — but you can mitigate this by using your learner as a quality checker and by adding verification notes to any L1-generated materials.
One important caveat: when AI generates content in a language you don't speak, you lose the ability to verify it. If you're teaching Spanish but don't speak Spanish, the L1 translations and false friend examples in AI-generated materials might be wrong — plausibly wrong, not obviously wrong.
There are two ways to handle this. First, use your learner as a quality checker. Asking a learner to spot errors in their own L1 is a legitimate learning activity in itself, and it distributes the verification burden naturally. Second, add a note to any AI-generated materials that include L1 content: "Check translations with your learner before use." The Edumo AI Guide Chapter 3 covers this further, including the principle that strategic L1 use should bridge toward target language practice, not replace it.
The underlying point is that AI is a fast generator, not an expert. The expertise is yours. These prompts work best when you're in a position to review the output critically; when you're not, they can still be useful, provided you build in verification steps like the ones above.
To summarize the practical logic here: the more context you give an AI about who your learner is, the better the output. L1 is a useful piece of that context. It shapes errors, surfaces useful contrastive exercises, and opens up a category of vocabulary work, false friends and cognates, that is genuinely useful and often underexplored.
None of this requires linguistic expertise beyond what most experienced teachers already have. You already know your learners' L1 patterns and where they struggle. These prompts are a way to turn that knowledge into materials faster than you could build them manually.
Start with one: pick the contrastive error analysis prompt, run it for a learner you know well, and see how the output compares to what you'd have predicted yourself. That comparison is useful — it'll tell you where AI is helpful, where it's wrong, and where it's given you something you hadn't thought of.
If you want to go deeper on using learners' L1 strategically in AI-generated materials, we cover it in Chapter 3 of our free AI guide. Or if you want to try generating L1-informed exercises directly with your learners, give Edumo a try.