News & Events

The Latest on CAAELP

We kicked off 2021 by releasing the first edition of the CAAELP Insider newsletter! Click here to read the latest updates on the project’s status.


Assessment

The assessment is underway. To ensure that it is valid, reliable, and fair, the CAAELP team is incorporating educators' experience and expertise in English learners, English learners with disabilities, students with significant cognitive disabilities, language pathology, assessment, curriculum and instruction, and policy.

With educators' critical review and input, the CAAELP team is implementing a principled approach to assessment design. To date, we have developed several foundational documents: a statement of purpose and intended uses, a general student population definition, domain definitions, a proficiency definition, intended score interpretations and uses, policy and range performance level descriptors, assessment model and reporting category documentation, and blueprints.

Research

The team is hard at work carrying out the pilot and field tests. The Alt ELPA pilot begins in Year 3 (2021–2022) of the grant and the field test will be in Year 4 (2022–2023). 

For the pilot, the team is doing a deep dive. They are gathering information on the appropriateness of item types and the accessibility of items for the range of students who will take the Alt ELPA. The pilot will also explore how well the items and the test give students the opportunity to demonstrate what they know and can do in English.

Drawing on insights from educators familiar with ELs with the most significant cognitive disabilities and from a representative sample of students, the pilot will inform the refinement of the items and the test. Preparation for the field test involves a series of simulation studies that explore different item calibration and scoring models and potential conditions that might affect test results. For these simulation studies, the team is generating student and item data using knowledge and available information about ELs with the most significant cognitive disabilities and about other alternate assessments.

Using the generated data, the team applies different item calibration and scoring models to identify optimal methods and to examine how foreseeable conditions (e.g., missing student response data due to domain exemptions) might affect test results. The results of the simulation studies will inform the field test design as well as the item calibration and scoring models.


Stay in Touch

Receive quarterly emails about the latest Alt ELPA updates and opportunities.

Sign Up