
Educational assessment


Educational assessment or educational evaluation[1] is the systematic process of documenting and using empirical data on knowledge, skills, attitudes, aptitude and beliefs to refine programs and improve student learning.[2] Assessment data can be obtained by examining student work directly to assess the achievement of learning outcomes, or from data from which one can make inferences about learning.[3] Assessment is often used interchangeably with test but is not limited to tests.[4] Assessment can focus on the individual learner, the learning community (class, workshop, or other organized group of learners), a course, an academic program, the institution, or the educational system as a whole (a dimension known as granularity). The word "assessment" came into use in an educational context after the Second World War.[5]

As a continuous process, assessment establishes measurable student learning outcomes, provides a sufficient amount of learning opportunities to achieve these outcomes, implements a systematic way of gathering, analyzing and interpreting evidence to determine how well student learning matches expectations, and uses the collected information to give feedback on the improvement of students' learning.[6] Assessment is an important aspect of the educational process that determines students' level of accomplishment.[7]

The final purpose of assessment practices in education depends on the theoretical framework of the practitioners and researchers, their assumptions and beliefs about the nature of human mind, the origin of knowledge, and the process of learning.

Types

The term assessment is generally used to refer to all activities teachers use to help students learn and to gauge student progress.[8] Assessment can be divided for the sake of convenience using the following categorizations:

  1. Placement, formative, summative and diagnostic assessment
  2. Objective and subjective
  3. Referencing (criterion-referenced, norm-referenced, and ipsative (forced-choice))
  4. Informal and formal
  5. Internal and external

Placement, formative, summative and diagnostic

Assessment is often divided into initial, formative, and summative categories for the purpose of considering different objectives for assessment practices.

(1) Placement assessment – Placement evaluation may be used to place students, according to prior achievement, level of knowledge, or personal characteristics, at the most appropriate point in an instructional sequence, in a unique instructional strategy, or with a suitable teacher.[9] It is conducted through placement testing, i.e. the tests that colleges and universities use to assess college readiness and place students into their initial classes. Placement evaluation, also referred to as pre-assessment, initial assessment, or threshold knowledge test (TKT), is conducted before instruction or intervention to establish a baseline from which individual student growth can be measured. This type of assessment is used to determine a student's skill level in the subject, and it can also help the teacher to explain the material more efficiently. These assessments are generally not graded.[10]

(2) Formative assessment – This is generally carried out throughout a course or project. Also referred to as "educative assessment," it is used to help learning. In an educational setting, a formative assessment might be a teacher (or peer) or the learner (e.g., through a self-assessment[11][12]) providing feedback on a student's work, and would not necessarily be used for grading purposes. Formative assessments can take the form of diagnostic tests, standardized tests, quizzes, oral questions, or draft work. Formative assessments are carried out concurrently with instruction, and the results may count toward a grade. The aim of formative assessment is to see whether students understand the instruction before a summative assessment is given.[10]

(3) Summative assessment – This is generally carried out at the end of a course or project. In an educational setting, summative assessments are typically used to assign students a course grade, and are evaluative. Summative assessments are designed to summarize what the students have learned and to determine whether they understand the subject matter well. This type of assessment is typically graded (e.g. pass/fail, 0–100) and can take the form of tests, exams or projects. Summative assessments are often used to determine whether a student has passed or failed a class. A criticism of summative assessments is that they are reductive, and learners discover how well they have acquired knowledge too late for it to be of use.[10]

(4) Diagnostic assessment – At the end, diagnostic assessment focuses on the difficulties that occurred during the learning process.

Jay McTighe and Ken O'Connor proposed seven practices for effective learning.[10] One of them is showing the criteria of the evaluation before the test; another is using pre-assessment to determine a student's skill levels before giving instruction. Giving plenty of feedback and encouragement are among the other practices.

Educational researcher Robert Stake[13] explains the difference between formative and summative assessment with the following analogy:

When the cook tastes the soup, that's formative. When the guests taste the soup, that's summative.[14]

Summative and formative assessment are often referred to in a learning context as assessment of learning and assessment for learning respectively. Assessment of learning is generally summative in nature and intended to measure learning outcomes and report those outcomes to students, parents and administrators. Assessment of learning mostly occurs at the conclusion of a class, course, semester or academic year while assessment for learning is generally formative in nature and is used by teachers to consider approaches to teaching and next steps for individual learners and the class.[15]

A common form of formative assessment is diagnostic assessment. Diagnostic assessment measures a student's current knowledge and skills for the purpose of identifying a suitable program of learning. Self-assessment is a form of diagnostic assessment which involves students assessing themselves.

Forward-looking assessment asks those being assessed to consider themselves in hypothetical future situations.[16]

Performance-based assessment is similar to summative assessment, as it focuses on achievement. It is often aligned with the standards-based education reform and outcomes-based education movement. Though ideally significantly different from a traditional multiple-choice test, it is most commonly associated with standards-based assessment, which uses free-form responses to standard questions scored by human scorers on a standards-based scale, with performance judged as meeting, falling below, or exceeding a standard rather than being ranked on a curve. A well-defined task is identified and students are asked to create, produce or do something, often in settings that involve real-world application of knowledge and skills. Proficiency is demonstrated by providing an extended response. Performance formats are further classified into products and performances. The performance may result in a product, such as a painting, portfolio, paper or exhibition, or it may consist of a performance, such as a speech, athletic skill, musical recital or reading.

Objective and subjective

Assessment (either summative or formative) is often categorized as either objective or subjective. Objective assessment is a form of questioning which has a single correct answer. Subjective assessment is a form of questioning which may have more than one correct answer (or more than one way of expressing the correct answer). There are various types of objective and subjective questions. Objective question types include true/false answers, multiple choice, multiple-response and matching questions, while subjective question types include extended-response questions and essays. Objective assessment is well suited to the increasingly popular computerized or online assessment format.
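Objective items lend themselves to automated scoring because each has a single keyed answer. The following minimal sketch is not part of the original article; the question identifiers, answer key, and responses are invented for illustration, and it simply shows that scoring such items reduces to comparing responses against a key:

```python
# Hypothetical answer key and one student's responses for a short objective quiz.
answer_key = {"q1": "B", "q2": "D", "q3": "A"}   # keyed correct options
responses  = {"q1": "B", "q2": "C", "q3": "A"}   # the student's answers

# Score by comparing each response to the keyed answer.
score = sum(1 for q, key in answer_key.items() if responses.get(q) == key)
print(f"{score}/{len(answer_key)} correct")      # prints: 2/3 correct
```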

Some have argued that the distinction between objective and subjective assessments is neither useful nor accurate because, in reality, there is no such thing as "objective" assessment. In fact, all assessments are created with inherent biases built into decisions about relevant subject matter and content, as well as cultural (class, ethnic, and gender) biases.[17]

Basis of comparison

Test results can be compared against an established criterion, or against the performance of other students, or against previous performance:

(5) Criterion-referenced assessment, typically using a criterion-referenced test, as the name implies, occurs when candidates are measured against defined (and objective) criteria. Criterion-referenced assessment is often but not always used to establish a person's competence (whether he/she can do something). The best-known example of criterion-referenced assessment is the driving test, when learner drivers are measured against a range of explicit criteria (such as "Not endangering other road users").

(6) Norm-referenced assessment (colloquially known as "grading on the curve"), typically using a norm-referenced test, is not measured against defined criteria. This type of assessment is relative to the student body undertaking the assessment; it is effectively a way of comparing students. The IQ test is the best-known example of norm-referenced assessment. Many entrance tests (to prestigious schools or universities) are norm-referenced, permitting a fixed proportion of students to pass ("passing" in this context means being accepted into the school or university rather than an explicit level of ability). This means that standards may vary from year to year depending on the quality of the cohort; criterion-referenced assessment does not vary from year to year (unless the criteria change).[18]
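The practical difference between the two referencing schemes can be illustrated with a short sketch (an invented example, not drawn from the article): the same raw scores pass or fail depending on whether a fixed criterion or a relative rank is applied.

```python
# Hypothetical raw scores for a small cohort.
scores = [52, 61, 68, 73, 77, 81, 85, 90, 94, 97]

# Criterion-referenced: pass depends only on a fixed, defined standard.
criterion = 70
criterion_passes = [s for s in scores if s >= criterion]

# Norm-referenced ("grading on the curve"): pass depends on rank within the
# cohort, e.g. only the top 30% are admitted, whatever their absolute scores.
top_fraction = 0.3
cutoff = int(len(scores) * top_fraction)
norm_passes = sorted(scores, reverse=True)[:cutoff]

print("Criterion-referenced passes:", criterion_passes)  # 7 students meet the standard
print("Norm-referenced passes:", norm_passes)            # only the top 3 scores "pass"
```

Under the criterion scheme the standard stays fixed and the pass rate varies with the cohort; under the norm scheme the pass rate is fixed but the effective standard moves from year to year.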

(7) Ipsative assessment is self-comparison, either in the same domain over time or comparative to other domains within the same student.

Informal and formal

Assessment can be either formal or informal. Formal assessment usually implies a written document, such as a test, quiz, or paper. A formal assessment is given a numerical score or grade based on student performance, whereas an informal assessment does not contribute to a student's final grade. An informal assessment usually occurs in a more casual manner and may include observation, inventories, checklists, rating scales, rubrics, performance and portfolio assessments, participation, peer and self-evaluation, and discussion.[19]

Internal and external

Internal assessment is set and marked by the school (i.e. teachers); students get the mark and feedback regarding the assessment. External assessment is set by the governing body and is marked by non-biased personnel; some external assessments give much more limited feedback in their marking. However, in tests such as Australia's NAPLAN, the criteria addressed by students receive detailed feedback so that their teachers can address and compare the students' learning achievements and plan for the future.

Standards of quality

In general, high-quality assessments are considered those with a high level of reliability and validity. Other general principles are practicality, authenticity and washback.[20][21]

Reliability

Reliability relates to the consistency of an assessment. A reliable assessment is one that consistently achieves the same results with the same (or similar) cohort of students. Various factors affect reliability—including ambiguous questions, too many options within a question paper, vague marking instructions and poorly trained markers. Traditionally, the reliability of an assessment is based on the following:

  1. Temporal stability: Performance on a test is comparable on two or more separate occasions.
  2. Form equivalence: Performance among examinees is equivalent on different forms of a test based on the same content.
  3. Internal consistency: Responses on a test are consistent across questions. For example: In a survey that asks respondents to rate attitudes toward technology, consistency would be expected in responses to the following questions:
    • "I feel very negative about computers in general."
    • "I enjoy using computers."[22]

The reliability of a measurement x can also be defined quantitatively as $R_x = \frac{V_T}{V_X}$, where $R_x$ is the reliability in the observed (test) score, x; $V_T$ and $V_X$ are the variability in 'true' (i.e., candidate's innate performance) and measured test scores respectively. $R_x$ can range from 0 (completely unreliable) to 1 (completely reliable).
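A brief numerical sketch of this coefficient follows; the variance figures are invented for illustration only and are not taken from the article.

```python
def reliability(true_variance: float, observed_variance: float) -> float:
    """R_x = V_T / V_X: ratio of 'true'-score variance to observed-score variance."""
    return true_variance / observed_variance

# If 36 of the 45 units of observed score variance reflect genuine differences
# in candidates' performance (the remainder being measurement error), then:
print(reliability(36.0, 45.0))   # prints 0.8, i.e. fairly reliable (maximum is 1.0)
```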

Four types of factors affect reliability: student-related factors, such as personal problems, sickness, or fatigue; rater-related factors, including bias and subjectivity; test administration-related factors, i.e. the conditions of the test-taking process; and test-related factors, concerning the nature of the test itself.[23][20][24]

Validity

Valid assessment is one that measures what it is intended to measure. For example, it would not be valid to assess driving skills through a written test alone. A more valid way of assessing driving skills would be through a combination of tests that help determine what a driver knows, such as through a written test of driving knowledge, and what a driver is able to do, such as through a performance assessment of actual driving. Teachers frequently complain that some examinations do not properly assess the syllabus upon which the examination is based; they are, effectively, questioning the validity of the exam.

Validity of an assessment is generally gauged through examination of evidence in the following categories:

  1. Content validity – Does the content of the test measure stated objectives?
  2. Criterion validity – Do scores correlate to an outside reference? (ex: Do high scores on a 4th grade reading test accurately predict reading skill in future grades?)
  3. Construct validity – Does the assessment correspond to other significant variables? (ex: Do ESL students consistently perform differently on a writing exam than native English speakers?)[25]

Others are:[20][23]

A good assessment has both validity and reliability, plus the other quality attributes noted above for a specific context and purpose. In practice, an assessment is rarely totally valid or totally reliable. A ruler which is marked wrongly will always give the same (wrong) measurements. It is very reliable, but not very valid. Asking random individuals to tell the time without looking at a clock or watch is sometimes used as an example of an assessment which is valid, but not reliable. The answers will vary between individuals, but the average answer is probably close to the actual time. In many fields, such as medical research, educational testing, and psychology, there will often be a trade-off between reliability and validity. A history test written for high validity will have many essay and fill-in-the-blank questions. It will be a good measure of mastery of the subject, but difficult to score completely accurately. A history test written for high reliability will be entirely multiple choice. It isn't as good at measuring knowledge of history, but can easily be scored with great precision. We may generalize from this. The more reliable our estimate is of what we purport to measure, the less certain we are that we are actually measuring that aspect of attainment.

It is well to distinguish between "subject-matter" validity and "predictive" validity. The former, used widely in education, predicts the score a student would get on a similar test but with different questions. The latter, used widely in the workplace, predicts performance. Thus, a subject-matter-valid test of knowledge of driving rules is appropriate while a predictively valid test would assess whether the potential driver could follow those rules.
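In practice, criterion or predictive validity evidence is often quantified as a correlation between test scores and a later outside measure, as in the fourth-grade reading example above. The sketch below is a hedged illustration; the score pairs are invented, not taken from any study cited here.

```python
from statistics import correlation  # Pearson correlation, Python 3.10+

test_scores   = [48, 55, 61, 70, 74, 82, 88, 93]  # scores on the test being validated
later_measure = [51, 60, 58, 72, 70, 85, 84, 95]  # the outside reference (e.g. later performance)

r = correlation(test_scores, later_measure)
print(f"predictive validity evidence: r = {r:.2f}")  # a value near 1.0 suggests strong evidence
```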

Practicality

This principle refers to the time and cost constraints during the construction and administration of an assessment instrument.[20] The test should be economical to provide, its format should be simple to understand, and completing it should take no more than a suitable amount of time. It should also be relatively simple to administer, and its scoring procedure should be specific and time-efficient.[24]

Authenticity

The assessment instrument is authentic when it is contextualized, contains natural language and meaningful, relevant, and interesting topics, and replicates real-world experiences.[20]

Washback

This principle refers to the consequence of an assessment on teaching and learning within classrooms.[20] Washback can be positive or negative. Positive washback refers to the desired effects of a test, while negative washback refers to its negative consequences. Instructional planning can be used to promote positive washback.[26]

Evaluation standards

In the field of evaluation, and in particular educational evaluation in North America, the Joint Committee on Standards for Educational Evaluation has published three sets of standards for evaluations. The Personnel Evaluation Standards were published in 1988,[27] The Program Evaluation Standards (2nd edition) were published in 1994,[28] and The Student Evaluation Standards were published in 2003.[29]

Each publication presents and elaborates a set of standards for use in a variety of educational settings. The standards provide guidelines for designing, implementing, assessing and improving the identified form of evaluation. Each of the standards has been placed in one of four fundamental categories to promote educational evaluations that are proper, useful, feasible, and accurate. In these sets of standards, validity and reliability considerations are covered under the accuracy topic. For example, the student accuracy standards help ensure that student evaluations will provide sound, accurate, and credible information about student learning and performance.

In the UK, an award in Training, Assessment and Quality Assurance (TAQA) is available to help staff learn and develop good practice in relation to educational assessment in adult, further and work-based education and training contexts.[30]

Grade inflation

Grade inflation (also known as grading leniency) is the general awarding of higher grades for the same quality of work over time, which devalues grades.[31] However, higher average grades in themselves do not prove grade inflation. For this to be grade inflation, it is necessary to demonstrate that the quality of work does not deserve the high grade.[31]

Due to grade inflation, standardized tests can have higher validity than unstandardized exam scores.[32] Recently increasing graduation rates can be partially attributed to grade inflation.[33]

Summary table of the main theoretical frameworks

The following table summarizes the main theoretical frameworks behind almost all the theoretical and research work, and the instructional practices in education (one of them being, of course, the practice of assessment). These different frameworks have given rise to interesting debates among scholars.

Philosophical orientation
  • Empiricism: Hume: British empiricism
  • Rationalism: Kant, Descartes: Continental rationalism
  • Socioculturalism: Hegel, Marx: cultural dialectic

Metaphorical orientation
  • Empiricism: Mechanistic/Operation of a machine or computer
  • Rationalism: Organismic/Growth of a plant
  • Socioculturalism: Contextualist/Examination of a historical event

Leading theorists
  • Empiricism: B. F. Skinner (behaviorism); Herb Simon, John Anderson, Robert Gagné (cognitivism)
  • Rationalism: Jean Piaget; Robbie Case
  • Socioculturalism: Lev Vygotsky, Luria, Bruner; Alan Collins, Jim Greeno, Ann Brown, John Bransford

Nature of mind
  • Empiricism: Initially blank device that detects patterns in the world and operates on them. Qualitatively identical to lower animals, but quantitatively superior.
  • Rationalism: Organ that evolved to acquire knowledge by making sense of the world. Uniquely human, qualitatively different from lower animals.
  • Socioculturalism: Unique among species for developing language, tools, and education.

Nature of knowledge (epistemology)
  • Empiricism: Hierarchically organized associations that present an accurate but incomplete representation of the world. Assumes that the sum of the components of knowledge is the same as the whole. Because knowledge is accurately represented by components, one who demonstrates those components is presumed to know.
  • Rationalism: General and/or specific cognitive and conceptual structures, constructed by the mind according to rational criteria. Essentially these are higher-level structures that are constructed to assimilate new information to existing structures and to accommodate the structures to new information. Knowledge is represented by the ability to solve new problems.
  • Socioculturalism: Distributed across people, communities, and the physical environment. Represents the culture of the community that continues to create it. To know means to be attuned to the constraints and affordances of systems in which activity occurs. Knowledge is represented in the regularities of successful activity.

Nature of learning (the process by which knowledge is increased or modified)
  • Empiricism: Forming and strengthening cognitive or S-R associations. Generation of knowledge by (1) exposure to a pattern, (2) efficiently recognizing and responding to the pattern, (3) recognizing the pattern in other contexts.
  • Rationalism: Engaging in an active process of making sense of ("rationalizing") the environment. The mind applies existing structure to new experience to rationalize it. One does not really learn the components, only the structures needed to deal with those components later.
  • Socioculturalism: Increasing ability to participate in a particular community of practice. Initiation into the life of a group, strengthening the ability to participate by becoming attuned to constraints and affordances.

Features of authentic assessment
  • Empiricism: Assess knowledge components. Focus on mastery of many components and fluency. Use psychometrics to standardize.
  • Rationalism: Assess extended performance on new problems. Credit varieties of excellence.
  • Socioculturalism: Assess participation in inquiry and the social practices of learning (e.g. portfolios, observations). Students should participate in the assessment process. Assessments should be integrated into the larger environment.

Controversy

Concerns over how best to apply assessment practices across public school systems have largely focused on questions about the use of high-stakes testing and standardized tests, often used to gauge student progress, teacher quality, and school-, district-, or statewide educational success.

No Child Left Behind

For most researchers and practitioners, the question is not whether tests should be administered at all—there is a general consensus that, when administered in useful ways, tests can offer useful information about student progress and curriculum implementation, as well as offering formative uses for learners.[34] The real issue, then, is whether testing practices as currently implemented can provide these services for educators and students.

President Bush signed the No Child Left Behind Act (NCLB) on January 8, 2002. The NCLB Act reauthorized the Elementary and Secondary Education Act (ESEA) of 1965. President Johnson had signed the ESEA to help fight the War on Poverty and to help fund elementary and secondary schools, with the goal of emphasizing equal access to education and establishing high standards and accountability. The NCLB Act required states to develop assessments in basic skills. To receive federal school funding, states had to give these assessments to all students at select grade levels.

In the U.S., the No Child Left Behind Act mandates standardized testing nationwide. These tests align with state curriculum and link teacher, student, district, and state accountability to the results of these tests. Proponents of NCLB argue that it offers a tangible method of gauging educational success, holding teachers and schools accountable for failing scores, and closing the achievement gap across class and ethnicity.[35]

Opponents of standardized testing dispute these claims, arguing that holding educators accountable for test results leads to the practice of "teaching to the test." Additionally, many argue that the focus on standardized testing encourages teachers to equip students with a narrow set of skills that enhance test performance without actually fostering a deeper understanding of subject matter or key principles within a knowledge domain.[36]

High-stakes testing

The assessments which have caused the most controversy in the U.S. are the use of high school graduation examinations, which are used to deny diplomas to students who have attended high school for four years, but cannot demonstrate that they have learned the required material when writing exams. Opponents say that no student who has put in four years of seat time should be denied a high school diploma merely for repeatedly failing a test, or even for not knowing the required material.[37][38][39]

High-stakes tests have been blamed for causing sickness and test anxiety in students and teachers, and for teachers choosing to narrow the curriculum towards what the teacher believes will be tested. In an exercise designed to make children comfortable about testing, a Spokane, Washington newspaper published a picture of a monster that feeds on fear.[40] The published image is purportedly the response of a student who was asked to draw a picture of what she thought of the state assessment.

Other critics, such as Washington State University's Don Orlich, question the use of test items far beyond standard cognitive levels for students' age.[41]

Compared to portfolio assessments, simple multiple-choice tests are much less expensive, less prone to disagreement between scorers, and can be scored quickly enough to be returned before the end of the school year. Standardized tests (all students take the same test under the same conditions) often use multiple-choice tests for these reasons. Orlich criticizes the use of expensive, holistically graded tests, rather than inexpensive multiple-choice "bubble tests", to measure the quality of both the system and individuals for very large numbers of students.[41] Other prominent critics of high-stakes testing include Fairtest and Alfie Kohn.

The use of IQ tests has been banned in some states for educational decisions, and norm-referenced tests, which rank students from "best" to "worst", have been criticized for bias against minorities. Most education officials support criterion-referenced tests (each individual student's score depends solely on whether he answered the questions correctly, regardless of whether his neighbors did better or worse) for making high-stakes decisions.

21st century assessment

It has been widely noted that with the emergence of social media and Web 2.0 technologies and mindsets, learning is increasingly collaborative and knowledge increasingly distributed across many members of a learning community. Traditional assessment practices, however, focus in large part on the individual and fail to account for knowledge-building and learning in context. As researchers in the field of assessment consider the cultural shifts that arise from the emergence of a more participatory culture, they will need to find new methods of applying assessments to learners.[42]

Large-scale learning assessment

Large-scale learning assessments (LSLAs) are system-level assessments that provide a snapshot of learning achievement for a group of learners in a given year, and in a limited number of domains. They are often categorized as national or cross-national assessments and draw attention to issues related to levels of learning and determinants of learning, including teacher qualification; the quality of school environments; parental support and guidance; and social and emotional health in and outside schools.[43]

Assessment in a democratic school

Schools following the Sudbury model of democratic education do not perform and do not offer assessments, evaluations, transcripts, or recommendations. They assert that they do not rate people and that school is not a judge; comparing students to each other, or to some standard that has been set, is for them a violation of the student's right to privacy and to self-determination. Students decide for themselves how to measure their progress as self-starting learners through a process of self-evaluation: real lifelong learning and the proper educational assessment for the 21st century, they allege.[44]

According to Sudbury schools, this policy does not cause harm to their students as they move on to life outside the school. They admit it makes the process more difficult, but hold that such hardship is part of the students' learning to make their own way, set their own standards and meet their own goals.

The no-grading and no-rating policy helps to create an atmosphere free of competition among students or battles for adult approval, and encourages a positive cooperative environment amongst the student body.[45]

The final stage of a Sudbury education, should the student choose to take it, is the graduation thesis. Each student writes on the topic of how they have prepared themselves for adulthood and entering the community at large. This thesis is submitted to the Assembly, which reviews it. The final stage of the thesis process is an oral defense given by the student, in which they open the floor for questions, challenges and comments from all Assembly members. At the end, the Assembly votes by secret ballot on whether or not to award a diploma.[46]

Assessing ELL students

A major concern with the use of educational assessments is the overall validity, accuracy, and fairness when it comes to assessing English language learners (ELL). The majority of assessments within the United States have normative standards based on the English-speaking culture, which does not adequately represent ELL populations.[citation needed] Consequently, it would in many cases be inaccurate and inappropriate to draw conclusions from ELL students' normative scores. Research shows that the majority of schools do not appropriately modify assessments in order to accommodate students from unique cultural backgrounds.[citation needed] This has resulted in the over-referral of ELL students to special education, causing them to be disproportionately represented in special education programs. Although some may see this inappropriate placement in special education as supportive and helpful, research has shown that inappropriately placed students actually regressed in progress.[citation needed]

It is often necessary to utilize the services of a translator in order to administer the assessment in an ELL student's native language; however, there are several issues when translating assessment items. One issue is that translations can frequently suggest a correct or expected response, changing the difficulty of the assessment item.[47] Additionally, the translation of assessment items can sometimes distort the original meaning of the item.[47] Finally, many translators are not qualified or properly trained to work with ELL students in an assessment situation.[citation needed] All of these factors compromise the validity and fairness of assessments, making the results unreliable. Nonverbal assessments have been shown to be less discriminatory for ELL students; however, some still present cultural biases within the assessment items.[47]

When considering an ELL student for special education, the assessment team should integrate and interpret all of the information collected in order to ensure an unbiased conclusion.[47] The decision should be based on multidimensional sources of data, including teacher and parent interviews, as well as classroom observations.[47] Decisions should take the student's unique cultural, linguistic, and experiential backgrounds into consideration, and should not be strictly based on assessment results.

Universal screening

Assessment can be associated with disparity when students from traditionally underrepresented groups are excluded from testing needed for access to certain programs or opportunities, as is the case for gifted programs. One way to combat this disparity is universal screening, which involves testing all students (such as for giftedness) instead of testing only some students based on teachers' or parents' recommendations. Universal screening results in large increases in traditionally underserved groups (such as Black, Hispanic, poor, female, and ELLs) identified for gifted programs, without the standards for identification being modified in any way.[48]

References

  1. ^ Some educators and education theorists use the terms assessment and evaluation to refer to the different concepts of testing during a learning process to improve it (for which the equally unambiguous terms formative assessment or formative evaluation are preferable) and of testing after completion of a learning process (for which the equally unambiguous terms summative assessment or summative evaluation are preferable), but they are in fact synonyms and do not intrinsically mean different things. Most dictionaries not only say that these terms are synonyms but also use them to define each other. If the terms are used for different concepts, careful editing requires both the explanation that they are normally synonyms and the clarification that they are used to refer to different concepts in the current text.
  2. ^ Allen, M.J. (2004). Assessing Academic Programs in Higher Education. San Francisco: Jossey-Bass.
  3. ^ Kuh, G.D.; Jankowski, N.; Ikenberry, S.O. (2014). Knowing What Students Know and Can Do: The Current State of Learning Outcomes Assessment in U.S. Colleges and Universities (PDF). Urbana: University of Illinois and Indiana University, National Institute for Learning Outcomes Assessment.
  4. ^ National Council on Measurement in Education http://www.ncme.org/ncme/NCME/Resource_Center/Glossary/NCME/Resource_Center/Glossary1.aspx?hkey=4bb87415-44dc-4088-9ed9-e8515326a061#anchorA Archived 2017-07-22 at the Wayback Machine
  5. ^ Nelson, Robert; Dawson, Phillip (2014). "A contribution to the history of assessment: how a conversation simulator redeems Socratic method". Assessment & Evaluation in Higher Education. 39 (2): 195–204. doi:10.1080/02602938.2013.798394. S2CID 56445840.
  6. ^ Suskie, Linda (2004). Assessing Student Learning. Bolton, MA: Anker.
  7. ^ Oxford Brookes University. "Purposes and principles of assessment". www.brookes.ac.uk. Archived from the original on 2018-10-09. Retrieved 2018-10-09.
  8. ^ Black, Paul, & Wiliam, Dylan (October 1998). "Inside the Black Box: Raising Standards Through Classroom Assessment." Phi Delta Kappan. Available at http://www.pdkmembers.org/members_online/members/orders.asp?action=results&t=A&desc=Inside+the+Black+Box%3A+Raising+Standards+Through+Classroom+Assessment&text=&lname_1=&fname_1=&lname_2=&fname_2=&kw_1=&kw_2=&kw_3=&kw_4=&mn1=&yr1=&mn2=&yr2=&c1=[permanent dead link] PDKintl.org]. Retrieved January 28, 2009.
  9. ^ Madaus, George F.; Airasian, Peter W. (1969-11-30). "Placement, Formative, Diagnostic, and Summative Evaluation of Classroom Learning".
  10. ^ a b c d Mctighe, Jay; O'Connor, Ken (November 2005). "Seven practices for effective learning". Educational Leadership. 63 (3): 10–17. Archived from the original on 6 October 2019. Retrieved 3 March 2017.
  11. ^ Hartelt, T. & Martens, H. (2024). Influence of self-assessment and conditional metaconceptual knowledge on students' self-regulation of intuitive and scientific conceptions of evolution. Journal of Research in Science Teaching, 61(5), 1134–1180. https://doi.org/10.1002/tea.21938
  12. ^ Andrade, H. L. (2019). A critical review of research on student self-assessment. Frontiers in Education, 4, Article 87. https://doi.org/10.3389/feduc.2019.00087
  13. ^ "Robert e. Stake, Director". Archived from the original on 2009-02-08. Retrieved 2009-01-29.
  14. ^ Scriven, M. (1991). Evaluation thesaurus. 4th ed. Newbury Park, CA:SAGE Publications. ISBN 0-8039-4364-4.
  15. ^ Earl, Lorna (2003). Assessment as Learning: Using Classroom Assessment to Maximise Student Learning. Thousand Oaks, CA, Corwin Press. ISBN 0-7619-4626-8
  16. ^ Reed, Daniel. "Diagnostic Assessment in Language Teaching and Learning." Center for Language Education and Research, available at Google.com Archived 2011-09-14 at the Wayback Machine. Retrieved January 28, 2009.
  17. ^ Joint Information Systems Committee (JISC). "What Do We Mean by e-Assessment?" JISC InfoNet. Retrieved January 29, 2009 from http://tools.jiscinfonet.ac.uk/downloads/vle/eassessment-printable.pdf Archived 2017-01-16 at the Wayback Machine
  18. ^ Educational Technologies at Virginia Tech. "Assessment Purposes." VirginiaTech DesignShop: Lessons in Effective Teaching, available at Edtech.vt.edu Archived 2009-02-26 at the Wayback Machine. Retrieved January 29, 2009.
  19. ^ Valencia, Sheila W. "What Are the Different Forms of Authentic Assessment?" Understanding Authentic Classroom-Based Literacy Assessment (1997), available at Eduplace.com Archived 2019-10-28 at the Wayback Machine. Retrieved January 29, 2009.
  20. ^ a b c d e f Brown, Douglas; Abeywickrama, Priyanvada (2010). Language Assessment, Principles and Classroom Practices. The United States of America: Pearson Longman. ISBN 978-0-13-814931-4.
  21. ^ Oxford Brookes University. "Principles of assessment". www.brookes.ac.uk. Retrieved 2018-10-09.
  22. ^ Yu, Chong Ho (2005). "Reliability and Validity." Educational Assessment. Available at Creative-wisdom.com. Retrieved January 29, 2009.
  23. ^ a b Fawcett, Alison (2013). Principles of Assessment and Outcome Measurement for Occupational Therapists and Physiotherapists: Theory, Skills and Application. John Wiley & Sons. ISBN 9781118709696.
  24. ^ a b "Reliability, Validity and Practicality | Teach English | Englishpost.org". Englishpost.org. 2012-06-26. Retrieved 2018-10-30.
  25. ^ Moskal, Barbara; Leydens, Jon (23 November 2019). "Scoring Rubric Development: Validity and Reliability". Practical Assessment, Research, and Evaluation. 7 (1). doi:10.7275/q7rm-gg74.
  26. ^ "Understanding Assessment: Washback and Instructional Planning". www.cal.org. Retrieved 2018-10-29.
  27. ^ Joint Committee on Standards for Educational Evaluation. (1988). "The Personnel Evaluation Standards: How to Assess Systems for Evaluating Educators". Newbury Park, CA: SAGE Publications
  28. ^ Joint Committee on Standards for Educational Evaluation. (1994).The Program Evaluation Standards, 2nd Edition. Newbury Park, CA: SAGE Publications
  29. ^ Committee on Standards for Educational Evaluation. (2003). The Student Evaluation Standards: How to Improve Evaluations of Students. Newbury Park, CA: Corwin Press
  30. ^ City & Guilds, Understanding the Principles and Practice of Assessment: Qualification Factsheet, accessed 26 February 2020
  31. ^ a b Arenson, Karen W. (18 April 2004). "Is It Grade Inflation, or Are Students Just Smarter?". The New York Times. Retrieved 6 December 2015.
  32. ^ Hurwitz, Michael, and Jason Lee. "Grade inflation and the role of standardized testing." Measuring success: Testing, grades, and the future of college admissions (2018): 64-93.
  33. ^ Denning, Jeffrey T., et al. Why have college completion rates increased? An analysis of rising grades. No. w28710. National Bureau of Economic Research, 2021.
  34. ^ American Psychological Association. "Appropriate Use of High-Stakes Testing in Our Nation's Schools." APA Online, available at APA.org, Retrieved January 24, 2010
  35. ^ (nd) Reauthorization of NCLB. Department of Education. Retrieved 1/29/09.
  36. ^ (nd) What's Wrong With Standardized Testing? FairTest.org. Retrieved January 29, 2009.
  37. ^ Dang, Nick (18 March 2003). "Reform education, not exit exams". Daily Bruin. One common complaint from failed test-takers is that they weren't taught the tested material in school. Here, inadequate schooling, not the test, is at fault. Blaming the test for one's failure is like blaming the service station for a failed smog check; it ignores the underlying problems within the 'schooling vehicle.'[permanent dead link]
  38. ^ Weinkopf, Chris (2002). "Blame the test: LAUSD denies responsibility for low scores". Daily News. Archived from the original on 2017-02-02. Retrieved 2010-05-04. The blame belongs to 'high-stakes tests' like the Stanford 9 and California's High School Exit Exam. Reliance on such tests, the board grumbles, 'unfairly penalizes students that have not been provided with the academic tools to perform to their highest potential on these tests'.
  39. ^ "Blaming The Test". Investor's Business Daily. 11 May 2006. A judge in California is set to strike down that state's high school exit exam. Why? Because it's working. It's telling students they need to learn more. We call that useful information. To the plaintiffs who are suing to stop the use of the test as a graduation requirement, it's something else: Evidence of unequal treatment... the exit exam was deemed unfair because too many students who failed the test had too few credentialed teachers. Well, maybe they did, but granting them a diploma when they lack the required knowledge only compounds the injustice by leaving them with a worthless piece of paper." [permanent dead link]
  40. ^ "ASD.wednet.edu". Archived from the original on 2007-02-25. Retrieved 2006-09-22.
  41. ^ a b Bach, Deborah, & Blanchard, Jessica (April 19, 2005). "WASL worries stress kids, schools." Seattle Post-Intelligencer. Retrieved January 30, 2009 from Seattlepi.nwsource.com.
  42. ^ Fadel, Charles, Honey, Margaret, & Pasnik, Shelley (May 18, 2007). "Assessment in the Age of Innovation." Education Week. Retrieved January 29, 2009 from http://www.edweek.org/ew/articles/2007/05/23/38fadel.h26.html
  43. ^ UNESCO (2019). The promise of large-scale learning assessments: acknowledging limits to unlock opportunities. UNESCO. ISBN 978-92-3-100333-2.
  44. ^ Greenberg, D. (2000). 21st Century Schools, edited transcript of a talk delivered at the April 2000 International Conference on Learning in the 21st Century.
  45. ^ Greenberg, D. (1987). Chapter 20,Evaluation, Free at Last — The Sudbury Valley School.
  46. ^ Graduation Thesis Procedure, Mountain Laurel Sudbury School.
  47. ^ a b c d e "Archived copy" (PDF). Archived from the original (PDF) on 2012-05-29. Retrieved 2012-04-11.
  48. ^ Card, D., & Giuliano, L. (2015). Can universal screening increase the representation of low income and minority students in gifted education? (Working Paper No. 21519). Cambridge, MA: National Bureau of Economic Research. Retrieved from www.nber.org/papers/w21519

Further reading

  • American Educational Research Association, American Psychological Association, & National Council for Measurement in Education. (2014). Standards for Educational and Psychological Testing. Washington, DC: American Educational Research Association.
  • Bennett, Randy Elliot (March 2015). "The Changing Nature of Educational Assessment". Review of Research in Education. 39 (1): 370–407. doi:10.3102/0091732x14554179. S2CID 145592665.
  • Brown, G. T. L. (2018). Assessment of Student Achievement. New York: Routledge.
  • Carless, David. Excellence in University Assessment: Learning from Award-Winning Practice. London: Routledge, 2015.
  • Klinger, D., McDivitt, P., Howard, B., Rogers, T., Munoz, M., & Wylie, C. (2015). Classroom Assessment Standards for PreK-12 Teachers: Joint Committee on Standards for Educational Evaluation.
  • Kubiszyn, T., & Borich, G. D. (2012). Educational Testing and Measurement: Classroom Application and Practice (10th ed.). New York: John Wiley & Sons.
  • Miller, D. M., Linn, R. L., & Gronlund, N. E. (2013). Measurement and Assessment in Teaching (11th ed.). Boston, MA: Pearson.
  • National Research Council. (2001). Knowing What Students Know: The Science and Design of Educational Assessment. Washington, DC: National Academy Press.
  • Nitko, A. J. (2001). Educational assessment of students (3rd ed.). Upper Saddle River, N.J.: Merrill.
  • Phelps, Richard P., Ed. Correcting Fallacies about Educational and Psychological Testing. Washington, DC: American Psychological Association, 2008.
  • Phelps, Richard P., Standardized Testing Primer. New York: Peter Lang, 2007.
  • Russell, M. K., & Airasian, P. W. (2012). Classroom Assessment: Concepts and Applications (7th ed.). New York: McGraw Hill.
  • Shepard, L. A. (2006). Classroom assessment. In R. L. Brennan (Ed.), Educational Measurement (4th ed., pp. 623–646). Westport, CT: Praeger.