Monday, November 3, 2008

Portfolio Assessment

Ch. 3 Portfolio Assessment, O’Malley

When we were first introduced to portfolios during one of the teacher inservices, I tried them out without success. I didn’t know much about portfolios, but I followed one of the simple forms with a smiley face (to show students liked their selected work) and a frowning face (to show dislike). Students were to circle one and explain why they felt the way they did about their selected piece. Since I missed the part where we get students to explain their choice, my biggest worry was that students would pick their best work and not be able to say why it was their best. And since I was the only one in our building doing portfolios, and I didn’t know where to turn for help, I dropped the whole thing. I don’t remember how far I went with the portfolios.
The book describes three different ways students can assess their work: documentation, explaining why they chose a piece as their best work; comparison, comparing a prior piece to a recent piece to show improvement; and integration, describing their improvements in general terms. The book also states that students need to know how their work will be evaluated and by what standards, so that they can set goals to work toward. I think this will help them explain their choice of “best work”. The book also says students should be given examples of an exemplary piece and a not-so-exemplary piece. By looking at the exemplary piece, students discuss what makes it a good piece and come up with a criteria chart. I like the example given on page 40 with the heading “What a Good Writer Can Do”. The first criterion, “I can plan before I write,” is stated in a positive form. I’d like to try using portfolios with my students, and I’d like to start with writing, which I think will be simple. But I’d need more guidance, I think.

Monday, October 27, 2008

Who is Given Tests

Making Assessment Practices Valid for Indigenous American Students, S. Nelson-Barber & E. Trumbull
Who Is Given Tests by G. Solano-Flores

The two articles brought up many points about tests and ELL students. One thing that stood out for me from these two articles is that the tests do not address cultural validity: test developers do not take into account the lives of the students. A student from a different culture may interpret a question differently than a student for whom the test was developed.

Did I read right that simplifying the language on tests can shorten the gap between ELL and non-ELL students? Another point that stood out for me was that there is no such thing as a “standard” language. Selecting one language as a standard benefits certain students and hurts the rest.

Monday, October 20, 2008

McNamara ch 5

Ch 5 Validity: testing the test

The chapter states that a test goes through validation procedures to make sure it is testing what it is supposed to be testing, and whether the test is relevant to the ability of the students. It was interesting to read that the test developer is often the one who checks for validity, if I’m not mistaken, though other test researchers may review the test data as well.
What I’d like more clarification on is consequential validity. Is coaching much like teaching to the test? Or is it different because not all students in the classroom can be coached, so the results of the test change?

Tuesday, October 7, 2008

Authentic & Multiplism in Assessment

Designing Authentic Assessment, O’Malley & Pierce
Language Assessment Process: A “Multiplism” Perspective, E. Shohamy & O. Inbar

Multiplism in language assessment looks at different ways a language can be assessed: the purposes of assessment, how language knowledge is defined, and the types of instruments used to elicit language knowledge. Both readings say that assessment has to be valid, in that the assessment and instruction are aligned. O’Malley adds that thinking skills should be incorporated into content validity. So if the purpose of a test is to find out whether a student is able to interact in the target language, and how much, an interview or an oral test would be more valid than a written test. While reading this I thought of the time I was interviewed for the kindergarten position at the immersion school. One activity I had to perform was a quick write for 3 minutes. I knew the purpose was to see how literate I was in Yugtun.

One way to look at authentic assessment is that the assessment is aligned with instruction, or what is being taught is what should be assessed. The example I liked about authentic assessment is that we interact with others by listening, speaking, and sometimes reading notes, and that is how language should be assessed: productive (writing, speaking) and receptive (listening, reading) skills combined. I was thinking of how comprehension was tested in the running records and how limiting it was, because aren’t we as immersion teachers supposed to ask a question in several different ways? If the student isn’t answering, does it mean that the student didn’t understand what was read, or the question being asked? If it’s the question, do we mark down that the student didn’t comprehend what was read? What if that same student is able to answer questions during reading time (maybe not in so many words, or using props) or story time?

I would like to become more familiar with self-assessment and how to introduce it to kindergarten immersion students. One interesting comment I heard over the weekend came from an immersion teacher and a foreign language teacher, who said that some kids aren’t confident that they can speak in their target language until they hear themselves speak in a recording. It made me wonder if some of my students feel that way.

Monday, September 29, 2008

McNamara chapter 2

Communication & Design of Language Tests, ch 2


This chapter takes us through the history of language testing. First was discrete-point testing, where parts of a language were tested separately (vocabulary, grammar). Then came skills testing, where the skills (reading, writing, listening) were tested. Foreign students wishing to study abroad led to integrative tests, where pronunciation, grammar, and vocabulary were tested together, but those tests were time-consuming and expensive. By the 1970s came pragmatic tests, which tested language use (how the learner integrates grammar, vocabulary, and context); this was when cloze tests began. When communicative competence theory came out, there was a change in the way testing was viewed. This view was that language use differs in different situations, and that knowledge of a language is much more than knowing vocabulary or grammatical structures.
Today in our school we are using a language test that was developed when… in the 1970s? Was it ever revised? I think it’s about time our school looked into language assessments, but how would we capture how students use their new language? Right now the way I know a child is acquiring a language is through observation. Thinking about our Yugtun language heightens my interest in language assessment.

NCLB & English Language Learners

NCLB Act & ELL: Assessment & Accountability Issues

The issues with assessments and LEP students that Abedi mentions include:
1) No accuracy in AYP reporting for LEP students, due to inconsistencies in how LEP status is reported or classified within states and/or districts.
2) The LEP student population varies within states. Someone will have to explain more on this one; I’m having trouble grasping how different cultures and languages can affect LEP reporting.
3) The number of LEP students is always changing. As students become proficient in English, they are exited out of LEP status.
4) Academic achievement tests are not reliable measurements for LEP students because the tests are normed for “Native English speakers”.
5) Schools with high LEP student populations have lower baseline scores. It was interesting to find that the higher the percentage of LEP students, the higher the yearly increase becomes.
6) NCLB does not follow a “compensatory model”, where a subject with a higher score could compensate for a subject with a lower score.

After reading about how NCLB is biased towards English speakers, what are some of the “steps to remedy issues” that Abedi concluded with?

Monday, September 22, 2008

Moving Toward Authentic Assessment

Moving Toward Authentic Assessment

The first chapter of Authentic Assessment for English Language Learners got me curious to find out more about it, and it looks like something that will fit into our immersion program. I like the way assessment is tied into the way we teach, in that in order to adopt authentic assessment we must change our philosophy of teaching (pg. 5). This chapter reminds me of the critical pedagogy we spoke about in our second language teaching class. There are three models: transmission, generative, and transformative. Transmission is a perspective where the teacher holds all knowledge and the student is the receiver; this model wouldn’t fit authentic assessment because it doesn’t leave room for self-evaluation. The generative model is a perspective in which the child is responsible for learning and the teacher acts as a guide, and the transformative model involves the community.
The scoring guide described on page 5 reminds me of our phase assessments. Students are graded basic, proficient, or advanced on each indicator. I’m curious now as to how the phases came about, and whether they were made for English-first-language speakers. I wonder, are the village schools taught as if all the students are English first language speakers? Is that why most villages aren’t doing so well on the standardized tests?