Assessment is a complex process that all teachers must engage in.  Many times we assess students without even thinking about it; it comes naturally to ask students what they learned or what they think about something.  The bigger issue is formalized assessment and seeing the big picture of a student's educational progress.  These assessments need to be rigorously evaluated against numerous criteria to ensure they are both relevant and accurate.  Sometimes understanding what the results actually mean is the hardest part of giving a test.

Arizona uses the Arizona English Language Learner Assessment (AZELLA) to determine an ELL student's level of proficiency.  There are multiple AZELLA tests, each intended for a band of grade levels, and a student's grade level is determined by age.  A student who is 8 years old is considered a 3rd grader and is administered the 3rd grade AZELLA test ("Arizona English Language," n.d.).  The test score then determines whether the student's command of the English language is pre-emergent, emergent, basic, intermediate, or proficient for their grade level ("Arizona English Language," n.d.).

As with all subjective tests, the AZELLA is sometimes criticized for not having accurate cut-off points.  A July 2010 study published by UCLA's Civil Rights Project and written at Arizona State University argues that the AZELLA over-identifies kindergarten-age students and under-identifies older students (Florez, 2010).  The author concludes that there is no publicly available empirical evidence showing that AZELLA can "accurately differentiate between a student that needs English language support and those who do not" (Florez, 2010, p. 16).

The purpose of ELL instruction is to bring non-native speakers to a level of proficiency comparable to that of native speakers of the same age.  Therefore, any "objective" measure would have to be built by having non-ELL students take the AZELLA test and examining their scores.  The non-ELL scores would need to be grouped by the students' current English grades.  For example, fourth-grade non-ELL students who are currently "C" students in English will score within a certain range on the AZELLA.  An ELL student's score can then be compared against those ranges to see whether they are the equivalent of a fourth-grade "C" student.

As long as native speakers' AZELLA scores show a clear spread from "A" students down to failing students, the test is useful for determining which non-native speakers need ELL instruction.  If even a failing native speaker rates "proficient" by AZELLA standards, then something is horribly wrong with the test and it should be thrown out.  Students at the "C" level and below could receive mandatory ELL instruction, while "A" and "B" equivalent students could have the option.  There is little point in nitpicking to find the "perfect" cut-off points, because they do not exist.
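To make the proposed comparison concrete, here is a minimal sketch in Python of how it might work.  Everything in it is a hypothetical assumption: the score values, the grade bands, and the "proficient" cut-off are invented for illustration and are not real AZELLA data.

```python
# Hypothetical sketch of the comparison described above. All scores,
# bands, and cut-offs below are invented for illustration; they are
# not real AZELLA data.

# AZELLA scores for native-speaking fourth graders, keyed by their
# current English grade (hypothetical numbers).
native_scores = {
    "A": [410, 398, 422, 405, 415],
    "B": [385, 372, 390, 378, 381],
    "C": [350, 344, 362, 355, 348],
    "F": [301, 295, 312, 290, 305],
}

def grade_bands(scores_by_grade):
    """Summarize each grade's observed scores as a (low, high) band."""
    return {grade: (min(s), max(s)) for grade, s in scores_by_grade.items()}

def equivalent_grade(ell_score, bands):
    """Return the grade band an ELL student's score falls into, if any."""
    for grade in ("A", "B", "C", "F"):
        low, high = bands[grade]
        if low <= ell_score <= high:
            return grade
    return None  # score falls between or outside the observed bands

bands = grade_bands(native_scores)

# An ELL student scoring 353 lands in the "C" band: mandatory ELL
# instruction under the proposal above.
print(equivalent_grade(353, bands))

# Sanity check from the argument above: if even failing native speakers
# clear the "proficient" cut-off, the cut-off is meaningless.
PROFICIENT_CUTOFF = 340  # hypothetical
if min(native_scores["F"]) >= PROFICIENT_CUTOFF:
    print("Failing native speakers rate proficient; the test is broken.")
```

Using the observed minimum and maximum per grade band is the simplest possible choice; percentiles would be more robust against outliers.  But the point is only that the placement decision is anchored to how native speakers actually score, not to an arbitrary cut-off.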

Suppose a non-native speaker scores in the same AZELLA range as native-speaking "B" English students of the same age, then goes into the regular classroom and earns less than a "B"; that student could be placed into ELL instruction.  The goal is simply to have two subjective measures come back with the same result, and when they do not, to correct with additional instruction.  It is impossible to predict with 100% certainty what will happen, and there are many reasons why students do not live up to expectations that have nothing to do with their level of knowledge or intelligence.

The key problem with the cut-off-score objection to AZELLA, and with the single-question "test" administered before a student is even considered for ELL instruction, is that both try to avoid asking the obvious question: "Would you like some help?"  Teachers can ask older students whether they would like extra help with their English.  They can ask younger students whether they feel ELL instruction is too easy for them.  Is offering help so offensive that we have to ask "Is English the primary language of your child?" instead?

Rick Stiggins and Jan Chappuis put together a report that brings this concept of human involvement into assessment.  The "we know what is best" attitude is so pervasive in elitist circles that we have managed to go 60 years with standardized tests that have done nothing but fail our students, precisely because those tests do not take students into account.  Stiggins and Chappuis identify four key conditions that must be met in order to reduce achievement gaps: focus on clear purposes, provide accurate reflections of achievement, give students continuous access to descriptive feedback on how to improve their work, and bring students into the classroom assessment process (Stiggins & Chappuis, 2005).  The main shift is from using computers and hard numbers to assess students to involving the students themselves in every step of the process.

Assessment with a clear purpose, one focused on driving learning rather than simply making a record of it, is a way to significantly improve performance scores.  Assessment for learning answers three key questions: "Where am I?" "Where am I going?" and "How do I get there?"  It directs students and gives them clear guidance.  Assessing what students have already learned is useful for grades and progress reports, but students need more than that to improve.  Assessment of learning is like taking a child's temperature, while assessment for learning is the medicine that treats whatever problems are found.

Assessment must also come from clear and appropriate expectations (Stiggins & Chappuis, 2005).  While teachers should not teach to the test, the test should come from what was taught.  Like a well-written essay, a lesson should tell students what they are going to learn, teach it, and then assess what they learned.  In conjunction with this, assessment should reflect the intended targets.  It should be forward-looking, not anchored only in the present, and students should be aware of the big idea up front.  The teacher may know that the students are building a doghouse, but if the students see only boring drills on hammering nails, they will lack the motivation to work through all the stages necessary to complete it.  The student should know what they are building at each step of the process.

And finally, assessment results need to be given back to the student in a timely and understandable manner (Stiggins & Chappuis, 2005).  If there is a problem in an area that is required for the next stage of the process, that problem should be resolved, or it will compound and feel even more insurmountable to the student.  It is frustrating for students to be unclear on something, face another assessment before the results of the first one come back, and end up making the same mistakes again.  Teachers should make a point of addressing key concepts that recur across assessments before each new assessment.  If it is important enough to re-assess, it is important enough to review.

Assessment is a complex process that can be seen either as a finger in the air to check which way the wind is blowing, or as the sail that harnesses the wind to go where it needs to go.  Teachers cannot simply rely on black-and-white columns of numbers to dictate how they teach their students.  It is important for teachers to interact with the students themselves to determine what the numbers mean.  There are many reasons why students score at a certain level on any particular test, and the test must be evaluated within the context of those reasons in order to be accurately understood.

References

Arizona English Language Learner Assessment. (n.d.). Retrieved from http://www.ade.state.az.us/oelas/AZELLA/AZELLAFormAZ-2SummerTrainingWorkshopforSY2010-2011-PowerPointPresentation-Final.pdf

Arizona English Language Learner Assessment. (n.d.). Retrieved from http://www.mpsaz.org/elad/servesource/infoppt/files/azella_prof_level_table.pdf

Florez, I. (2010). Do the AZELLA cut scores meet the standards? A validation review of the Arizona English Language Learner Assessment. Retrieved from http://civilrightsproject.ucla.edu/research/k-12-education/language-minority-students/do-the-azella-cut-scores-meet-the-standards-a-validation-review-of-the-arizona-english-language-learner-assessment/florez-azella-score-validation-2010.pdf

Stiggins, R., & Chappuis, J. (2005, Winter). Using student-involved classroom assessment to close achievement gaps. Theory Into Practice, 44(1), 11-18. Retrieved from http://library.gcu.edu:2048/login?url=http://search.ebscohost.com/login.aspx?direct=true&db=aph&AN=16696521&loginpage=Login.asp&site=ehost-live

