Monday, October 27, 2008

Who Is Given Tests

Making Assessment Practices Valid for Indigenous American Students, S. Nelson-Barber & E. Trumbull
Who Is Given Tests by G. Solano-Flores

The two articles brought up many points about tests and ELL students. One thing that stood out for me from these two articles is that the tests do not address cultural validity: test developers do not take into account the lives of the students. A student from a different culture may interpret a question differently than a student for whom the test was developed.

Did I read correctly that simplifying the language on the tests can narrow the gap between ELL and non-ELL students? Another point that stood out for me was that there is no such thing as a “standard” language. Selecting one language as the standard benefits certain students and hurts the rest.

Monday, October 20, 2008

McNamara ch 5

Ch. 5, “Validity: Testing the Test”

The chapter states that a test goes through validation procedures to make sure that it is testing what it is supposed to be testing, and whether the test is relevant to the ability of the students. It was interesting to read that the test developer is often the one who checks for validity, if I’m not mistaken, though other test researchers may review the test data as well.
What I’d like more clarification on is consequential validity. Is coaching much like teaching to the test? Or is it different because not all students in the classroom can be coached, and the test results change because of it?

Tuesday, October 7, 2008

Authentic & Multiplism in Assessment

Designing Authentic Assessment, O’Malley & Pierce
Language Assessment Process: A “Multiplism” Perspective, E. Shohamy & O. Inbar

Multiplism in language assessment looks at the different ways a language can be assessed: the purposes of assessment, how language knowledge is defined, and the types of instruments used to elicit language knowledge. Both readings say that assessment has to be valid in the sense that assessment and instruction are aligned. O’Malley & Pierce add that thinking skills should be incorporated into content validity. So if the purpose of a test is to find out whether, and how well, a student is able to interact in the target language, an interview or an oral test would be more valid than a written test. While reading this I thought of the time I was interviewed for the kindergarten position at the immersion school. One activity I had to perform was a quick write for three minutes. I knew the purpose was to see how literate I was in Yugtun.

One way to look at authentic assessment is that the assessment is aligned with instruction: what is being taught is what should be assessed. The example I liked about authentic assessment is that the way we interact with others is by listening, speaking, and sometimes reading notes, and that is how language should be assessed: productive (speaking, writing) and receptive (listening, reading) skills combined. I was thinking of how comprehension was tested in the running records and how limiting that was, because aren’t we as immersion teachers supposed to ask a question in several different ways? If the student isn’t answering, does it mean that the student didn’t understand what was read, or just the question being asked? If it’s the question, do we mark down that the student didn’t comprehend what was read? What if that same student is able to answer questions during reading time (maybe not in so many words, or using props) or story time?

I would like to become more familiar with self-assessment and how to introduce it to kindergarten immersion students. One interesting comment I heard over the weekend came from an immersion teacher and a foreign language teacher, who said that some kids aren’t confident that they can speak their target language until they hear themselves speak in a recording. It made me wonder if some of my students feel that way.