AI feedback can be seductive because it sounds confident. It will tell a student the response is clear, compelling, reflective, and well structured. Sometimes that is true. Often, for TJ essays, it is beside the point.
TJ writing is not evaluated like a generic school essay. The strongest responses do not merely sound good. They show evidence, reasoning, specificity, and student-level judgment under constraints.
The Polish Problem
AI tends to reward surface fluency. It likes smooth transitions, balanced sentences, and explicit lessons. That can make a response feel mature, but a mature-sounding paragraph is not automatically a strong TJ response.
For admissions writing, polish can hide the real weakness. A response may sound clean but still fail to answer the prompt, prove the claim, or reveal anything distinctive about the student.
The False Positive
AI often says a response is strong because the wording is fluent. Human readers usually care more about whether the example is specific, credible, and responsive to the prompt.
Where AI Misses SPS Feedback
The Student Portrait Sheet is built around student experiences. A good SPS response usually has a concrete situation, a real challenge, a decision, an action, and a reflection that follows from the facts.
AI feedback often misses these admissions-level questions:
- Is this example specific enough that it could only belong to this student?
- Does the response show action, or does it only describe values?
- Does the reflection come from the story, or is it pasted on at the end?
- Is the student claiming leadership without showing what they actually did?
- Does the response sound like a real middle school student with real constraints?
AI can identify grammar issues. It is much weaker at spotting whether the story itself is strategically useful.
Where AI Misses PSE Feedback
The Problem-Solving Essay is even more vulnerable to misleading AI feedback. A PSE response is not just an explanation. It is a timed demonstration of math or science reasoning in writing.
AI may praise a PSE response because the conclusion is clear, even if the quantitative reasoning is thin. It can overlook missing units, unsupported assumptions, weak variable definitions, or a shortcut that does not actually justify the answer.
For PSE prep, feedback needs to ask:
- Did the student define the quantities clearly?
- Did the student explain why the method fits the problem?
- Are calculations or scientific claims supported rather than asserted?
- Could another student follow the reasoning without guessing?
- Does the response address the actual prompt, not just a nearby concept?
That is why generic AI feedback can make families feel better while leaving the actual scoring problem untouched.
What Better Feedback Looks Like
Better feedback is more uncomfortable. It does not just say "great job" or "add more detail." It names the missing decision, the unsupported claim, the vague sentence, or the math step that needs to be justified.
A useful reviewer should be able to tell the student:
- which sentence is doing real work;
- which sentence is generic filler;
- what evidence the reader still needs;
- where the reasoning jumps too quickly;
- how to revise without losing the student's voice.
AI can be part of that process if the family knows its limits. But if the goal is TJ readiness, the standard is not "does this sound polished?" The standard is "does this reveal the student's thinking clearly enough for an evaluator to trust it?"
Strong TJ feedback protects the student's voice while making the thinking harder to ignore.
Want feedback that looks past polish?
Our TJ prep programs focus on SPS and PSE structure, reasoning, and revision, not generic essay smoothing.
Compare TJ Prep Options
Join an Info Session