
How to Create Level-Appropriate Reading Materials with AI

Finding a reading text that fits your learner's exact level, covers a topic they actually care about, and is the right length for a 30-minute lesson can take longer than the lesson itself. You browse ESL databases, adapt a news article that's slightly too hard, or settle for something close enough. It's one of those small frustrations that adds up across a week of teaching.

AI changes this, but only if you know how to ask. Typing "write a reading text for a beginner" rarely produces something you can use. The AI doesn't know what "beginner" means to you, and the result tends to creep up in complexity in ways that are hard to spot until your learner is struggling. This guide covers how to prompt AI for genuinely level-appropriate reading materials at every CEFR level, what to specify to get accuracy, and how to pair any generated text with exercises in a second prompt.

Getting the Right CEFR Level 

The CEFR framework, A1 through C2, is the international standard for describing language proficiency, and most AI models know it. But knowing what B1 means and reliably producing B1 text are different things. LLMs are trained on vast amounts of text, most of it fluent, sophisticated prose, so the job of your prompt is to steer the model towards the simpler language your target level requires.

Mentioning CEFR and the target level in your prompt tunes the model towards the texts and level descriptions it encountered in training. To prime it further, it helps to describe the level more concretely, e.g. in terms of vocabulary range, sentence structure, topic familiarity, and text length. Here is what the CEFR levels look like with that extra detail added.

  • A1 (Breakthrough): Very short sentences, present tense, everyday vocabulary like house, eat, go. Topics are immediately personal: family, food, daily routine. No subordinate clauses.

  • A2 (Elementary): Simple past and future tenses appear. Sentences connect with and, but, because. Topics expand to travel, shopping, simple preferences. Around 1,500 words of active vocabulary.

  • B1 (Intermediate): Coherent paragraphs with a mix of tenses. Connectors like however and although appear occasionally. Learners can follow a narrative or explanation without specialised vocabulary. Topics can be less personal: news, culture, work.

  • B2 (Upper Intermediate): Nuanced argument and varied sentence structure. Idiomatic language and collocations. Topics include abstract ideas, current affairs, professional contexts. This is where authentic news articles begin to work.

  • C1 (Advanced): Complex clauses, passive constructions, precise vocabulary. Academic or professional register is possible. Topics can be specialised: law, medicine, economics.

  • C2 (Mastery): Near-native fluency. Texts indistinguishable from native-speaker material. At this level, authentic texts often work better than AI-generated ones.
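
If you reuse these descriptions often, it can help to keep them as snippets you paste into prompts. A minimal Python sketch of that idea, assuming you build prompts programmatically (the snippet wording below is illustrative, not canonical CEFR phrasing):

```python
# Level descriptions stored as reusable prompt snippets.
# The wording of each entry is illustrative, not an official CEFR definition.
CEFR_SNIPPETS = {
    "A1": "Very short sentences, present tense only, everyday vocabulary, no subordinate clauses.",
    "A2": "Simple past and future tenses; sentences joined with and/but/because; around 1,500 words of active vocabulary.",
    "B1": "Coherent paragraphs, mixed tenses, occasional connectors like 'however'; no specialised vocabulary.",
    "B2": "Nuanced argument, varied sentence structure, idiomatic language and collocations.",
    "C1": "Complex clauses, passive constructions, precise vocabulary; academic or professional register possible.",
}

def level_prompt(level: str, topic: str, words: int) -> str:
    """Combine a level snippet with a topic and target length into one prompt."""
    return (
        f"Write a reading text for a {level} learner about {topic}. "
        f"Aim for about {words} words. Level guidance: {CEFR_SNIPPETS[level]}"
    )

print(level_prompt("B1", "working from home", 225))
```

Whether you paste snippets by hand or assemble them like this, the point is the same: the level label travels with a concrete description of what it means.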

Providing More Context and Details

The key to getting usable output from AI is specificity. Prime the model in the right direction by including a level description, an approximate word count, and any vocabulary or grammar constraints. This also lets you adjust to your specific learner, who may sit anywhere within a level, from just past the previous one to fully proficient, and may have particular strengths or gaps that cut across levels.

Besides the level, it also helps to state the intended audience and a topic. This lets you match the learners' specific interests and goals, and it implicitly primes the model towards vocabulary the learners are more likely to know and texts that make sense to them.

Here are four prompts for four levels that you can copy, adapt, and experiment with.

A1 Example (young learner, French)

Write a short reading text in French for a language learner at A1 level.
The learner is 10 years old and has been studying French for 3 months.
The text should be 80-100 words long, about a child's morning routine.
Use only present tense.
Keep sentences to 5-8 words.
Use only high-frequency vocabulary a beginner would know (body parts, food, household objects, simple actions like eat, drink, wake up, go).
Do not use any subordinate clauses or words that would appear in a B1 French textbook.

B1 Example (adult learner, English)

Write a reading text in English for an adult language learner at B1 level.
The text should be 200-250 words about working from home: the benefits and one or two challenges.
Use a mix of past simple and present simple tenses.
Include connectors like "however" and "as a result" once each.
Avoid idioms, academic vocabulary, and technical language.
All words should be in the first 3,000 most common English words.
Write in three short paragraphs.

B2 Example (professional learner, German)

Write a reading text in German for a B2 learner who works in marketing.
The text should be 280-320 words about how social media has changed customer communication for businesses.
Use varied sentence structures, including one passive construction.
Include one or two collocations natural to marketing German (e.g., "Zielgruppe ansprechen", "Kampagne schalten").
Avoid C1-level vocabulary, academic register, and subordinate clauses more than two levels deep.
Write in four paragraphs with a short conclusion.

C1 Example (adult learner, English)

Write a reading text in English for a C1 learner preparing for a professional context in healthcare.
The text should be 350-400 words about the ethical challenges of AI in medical diagnosis.
Use complex sentence structures including passive voice, hedging language ("it could be argued that", "there is reason to suggest"), and vocabulary from academic and medical registers.
Include one abstract noun phrase construction.
Write as a balanced opinion piece with two distinct perspectives.

Language Instructions and Model Understanding

Notice the pattern across all four prompts in the previous section. The level label is supplemented with concrete constraints: word count, tense, sentence length, vocabulary range, topic, and register. The more you translate your pedagogical instincts into explicit instructions, the closer the output will be to what you'd write yourself.

Models that "think" may be better at following instructions, but be aware that they generally don't understand instructions the way humans do; to a large extent we are still just priming them. They are typically good at getting close to a requested word count, but treat it as a general indication of length rather than an exact target. "Negative" instructions can also prime some models in the direction you don't intend. For instance, the instruction "Do not use any subordinate clauses or words that would appear in a B1 French textbook" could, for some models, increase the likelihood of subordinate clauses or B1-level words appearing.
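
Because word counts are only approximate, checking a generated text before you use it is worth the few seconds it takes. A minimal sketch, assuming you paste the text into a Python snippet (splitting on whitespace, so hyphenated words count as one):

```python
def within_range(text: str, low: int, high: int) -> bool:
    """Rough word count check for a generated text.
    Splits on whitespace, so hyphenated words count as one word."""
    n = len(text.split())
    return low <= n <= high

# A stand-in for a generated text: 6 words repeated 10 times = 60 words.
sample = "Working from home has become common. " * 10
print(len(sample.split()), within_range(sample, 200, 250))
```

A word processor's word count does the same job; the point is to verify rather than trust the requested range.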

Experiment with your prompts and get a feeling for how your preferred AI chatbot reacts to the instructions and requests you give.

Prompt Experimentation

If you get results you're not satisfied with, a first step is simply to ask again. Most AI chatbots are not deterministic, so asking again will produce a different result. If the second result works for you, use that. If it also doesn't work, try to spot what you dislike about the two outputs and be more specific about that part. For instance, if you didn't provide a topic and keep getting odd ones, specify a topic yourself. You can even ask the model first for a list of topics that may be relevant to your audience, providing some information about that audience.

If the model consistently does something different from what you instructed, adjust that part of the instruction. Simplify it if possible, move it to the start or the end, and/or add markers to signal its importance. For instance, if you wrote something like the example above, "The text should be 200-250 words about working from home: the benefits and one or two challenges", and the model keeps responding with half or twice that length, check whether other parts of the prompt suggest different lengths and remove them. If not, consider splitting the word count out on its own, making it a single number, and putting it at the end of the prompt, like "**Important**: The text must be 200 words." For some thinking models, you may also provide an explicit checklist the model can consult.

If the instruction the model ignores is a negative one, try removing it or rephrasing it positively. For instance, instead of telling it not to use subordinate clauses and B1 words, try "write simple sentences and use only words that would appear in an A1 textbook".

Bonus: From Text to Exercise in One Follow-Up Prompt

Once you have a text you're happy with, generating exercises is a single follow-up. Keep the conversation open in the same chat thread so the AI has context and try something like the following:

Using the text above, create the following exercises:
- Five comprehension questions (mix of factual and inferential)
- A vocabulary matching activity with 6 words from the text paired with definitions appropriate to the same level
- One discussion question the learner can answer after reading
Format them clearly so a learner could work through them independently.

Adjust the exercises you request to your learners' level. For lower levels, you might add "keep question language simple, at the same level as the text." For C1, you might ask for one critical thinking question that requires the learner to evaluate an argument.

If the AI chatbot you use does not seem to remember the text from earlier, you can write something like "Using the text below..." and paste the text at the bottom of your prompt.

What to Check Before You Share

AI-generated texts need a read-through before they reach learners. This isn't about fundamental distrust — it's the same quality check you'd apply to a worksheet from any source. A few specific things to look for.

Vocabulary drift is the most common failure. A text labelled A1 might use a few words above the level in an otherwise simple paragraph. Skim for outliers and either ask the AI to replace them or edit them yourself.
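
If you'd rather not skim by eye, this check can be sketched in a few lines of Python. The tiny word list below is a hypothetical stand-in; in practice you would load a published frequency list for the level:

```python
import re

# A toy allowed list standing in for a real A1 frequency list.
A1_WORDS = {"the", "a", "i", "eat", "go", "house", "my", "is",
            "and", "breakfast", "morning", "in", "at", "wake", "up"}

def flag_outliers(text: str, allowed: set[str]) -> list[str]:
    """Return words in the text that are not on the allowed list."""
    tokens = re.findall(r"[a-z']+", text.lower())
    return sorted({t for t in tokens if t not in allowed})

print(flag_outliers("I wake up and eat a nutritious breakfast", A1_WORDS))
# flags 'nutritious' as a word to check
```

Anything flagged isn't automatically wrong, but it's worth a second look before the text reaches a learner.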

Cultural assumptions crop up more often in topics like holidays, food, and social norms. A text about "a typical family dinner" may assume a cultural context that doesn't match your learner's. Worth a quick check if cultural relevance matters for your class.

Factual plausibility matters for any text about real-world topics. AI can produce a statistic or place name that sounds right but isn't. For B2 and above texts on current affairs or professional topics, verify anything specific before it reaches a learner.

The overall review takes two or three minutes. Think of the AI output as a first draft that's 80-90% there; your editorial eye closes the gap. You can also experiment with instructing the AI to do the check and report issues, if you're up for it, but there will always be a risk that it doesn't catch everything you would.

Putting It Into Your Workflow

The fastest approach is to build a library of prompt templates tailored to your common learner types. If you regularly teach intermediate adults in professional contexts, save your B1 and B2 templates with the details already filled in (profession, typical topics, preferred length). Adapt the topic each week and you're generating a usable draft in under a minute.
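
A template library can be as simple as saved text with a blank for the topic. A sketch of that idea in Python, assuming a B1 professional-adult template like the one earlier in this guide (the details filled in here are examples):

```python
# A saved prompt template with the recurring details pre-filled;
# only the topic changes each week. The specifics are illustrative.
B1_TEMPLATE = (
    "Write a reading text in English for an adult language learner at B1 level.\n"
    "The learner works in a professional office context.\n"
    "The text should be 200-250 words about {topic}.\n"
    "Use a mix of past simple and present simple tenses.\n"
    "Avoid idioms, academic vocabulary, and technical language.\n"
    "Write in three short paragraphs."
)

def weekly_prompt(topic: str) -> str:
    """Fill the saved template with this week's topic."""
    return B1_TEMPLATE.format(topic=topic)

print(weekly_prompt("remote team meetings"))
```

A document of saved templates with a highlighted blank works just as well; the win is not retyping the constraints every time.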

For teachers who use tools to distribute materials to learners, platforms like Edumo let you generate text and related interactive exercises your learners can complete on their phones, with progress tracking so you know who's done it before the next session.

The search-and-adapt approach had its place when AI couldn't do better. Now the bottleneck isn't finding a text, it's knowing how to ask for the right one.  

 

If you want to see how this looks end-to-end — from generating a text to distributing interactive exercises — try Edumo free.