Central to the sociology of education are questions about how schools operate to produce learning. Sociological models of schooling recognize that school systems are complex organizations, with a technical core consisting of classrooms in which teaching and learning – the core technology of schooling – take place. The concept of opportunities for learning provides a key to answering questions about how schools and classrooms can produce higher rates of learning (educational productivity), and about why some students learn more than others (educational inequality).
The notion is a simple one: students are more likely to learn what they have been taught, and less likely to learn what they have not. More generally, the concept of opportunities for learning refers to the content and quantity of curricular materials, activities, and assignments that students encounter in their classrooms; that is, the manifest curriculum, as opposed to the hidden curriculum. Analysts frequently distinguish three dimensions of the manifest curriculum: the intended curriculum, which the state or district may set forth; the enacted curriculum, which teachers cover in classrooms; and the achieved curriculum, which students actually learn (e.g., Smithson & Porter 2004). Opportunities for learning are reflected in the enacted curriculum.
Background to Research on Opportunities for Learning
An early use of the concept appeared in Carroll’s (1963) “model for school learning.” For Carroll, “opportunity to learn” referred to the time available for learning, which, when combined with the quality of instruction and with student effort and ability, would have a major influence on student performance. Recent writers use the term more broadly to refer not only to time but also to instructional content and pedagogy. While a few studies followed up directly on Carroll’s model, two subsequent literatures served as major stimuli to research on opportunities for learning: studies of tracking and ability grouping, and international comparative research on educational achievement.
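Carroll’s model is often summarized as a ratio of time spent to time needed; the rendering below paraphrases standard summaries of the model rather than quoting Carroll’s own notation.

```latex
% A common schematic rendering of Carroll's (1963) model of school learning
% (a paraphrase of standard summaries, not Carroll's exact notation).
\[
  \text{degree of learning} \;=\; f\!\left(\frac{\text{time actually spent}}{\text{time needed}}\right)
\]
% Time actually spent is bounded by the opportunity to learn (allocated time)
% and the learner's perseverance; time needed reflects aptitude, ability to
% understand instruction, and the quality of instruction.
```

In this reading, opportunity to learn enters the model as one of the constraints on time actually spent, which is why later writers treated it as a variable that schools can manipulate.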
Opportunities for Learning in Research on Tracking
In response to the Coleman Report’s findings that student achievement varied far more within schools than among schools (Coleman et al. 1966), sociologists began to explore aspects of schools that contributed to within-school achievement inequality. Chief among these was the practice of dividing students for instruction into separate groups or tracks according to their purported interests and abilities. Students assigned to higher status positions learned more, and students in lower status positions learned less over time, even after taking account of differences among students before they were assigned to their tracks. Why did this occur? Researchers have pointed to differentiated opportunities for learning as a major explanation for growing inequality of achievement among students placed in different groups and tracks (Barr & Dreeben 1983; Rowan & Miracle 1983; Gamoran et al. 1995, 1997; Applebee et al. 2003). Teachers in high-track groups and classes tend to cover richer material at a faster pace, while teachers in low-track classes commonly introduce a more fragmented, slower-paced curriculum. In the US and internationally, learning opportunities are embedded in curricular divisions and course sequences (Oakes et al. 1992).
Partly due to their association with tracking, opportunities for learning tend to be differentially available to students from different racial, ethnic, and economic groups (Oakes et al. 1992). When students are divided for instruction on the basis of prior achievement, they are also separated by social background, which tends to be correlated with achievement. Consequently, when opportunities for learning differ across tracks, they differ for students from varied backgrounds. Learning opportunities may also vary from one school to the next, for example when schools with small numbers of high-achieving students fail to offer advanced courses such as middle school algebra or high school physics, calculus, or AP English. This practice also works to the disadvantage of students from minority and low-income backgrounds, who are overrepresented in such schools. Schools with predominantly working-class populations may also offer less challenging elementary curricula than schools with middle-class populations. These school-level patterns may be mitigated by a countervailing tendency: even schools with many low-achieving students tend to have some high-level classes, and students from disadvantaged backgrounds have relatively greater access to these classes in such schools, apparently due to the lower levels of competition for enrollment. Overall, however, unequal access to valued opportunities for learning is an important dimension of educational inequality for students from varied racial, ethnic, and socioeconomic groups.
Learning opportunities have also played a prominent role in gender inequality, specifically in the gaps between boys and girls in mathematics and science achievement. US-based research in the 1980s showed that test score differences between secondary school boys and girls were substantially attributable to differences in mathematics and science course-taking (Pallas & Alexander 1983). More recently, gender gaps in both achievement and course-taking have narrowed, with girls enrolled at equal or higher rates in most advanced courses, and boys’ advantage now limited to physics and calculus (US Department of Education 1999). Similar patterns are emerging in other nations (e.g., Croxford 1994).
International Comparisons of Educational Achievement
Another major motivation for research on opportunities for learning has come from international comparisons of educational achievement, primarily in mathematics and science (Floden 2003). Dating back at least to the First International Mathematics Study (Husen 1967), researchers and policymakers have been interested not only in how nations compare in the performance of their students, but also in the reasons for cross-national variation. Differences in opportunities for learning were seen as a prominent possibility, so researchers asked teachers to indicate what proportion of their students had the opportunity to learn each item on the international test. The original purpose of the question was to check whether the common international test was equally appropriate for each country, but the question took on more policy relevance by the time of the Second International Mathematics Study (McKnight et al. 1987), when it was refined to focus on whether the topic reflected by the test item had been covered in that year. Even more specific questions about classroom coverage and other potential sources of opportunities to learn content in science as well as mathematics were posed to teachers for the Third International Mathematics and Science Study and its successors, which have been renamed the Trends in International Mathematics and Science Study (TIMSS). Generally, countries in which achievement levels are higher also provide more opportunities for students to learn tested content (Floden 2003), leading to speculation that differences within countries may also be attributable to differences in students’ opportunities for learning (Schmidt et al. 1999). From this literature, the concept of opportunities for learning has emerged as a major policy variable, potentially manipulable as a force to improve achievement levels and reduce inequality.
Measuring Opportunities for Learning
A major challenge to testing hypotheses about the impact of opportunities for learning on student achievement within countries – and to using opportunities for learning as a policy instrument – is the difficulty of developing powerful measures. While aggregate associations between tested content and average performance are high, the associations are more modest at the individual level within countries (Floden 2003). Moreover, the international studies’ approach to measuring “opportunity to learn” focuses exclusively on content topics, whereas the general concept is broader, particularly in its policy use. While early studies in the US focused on time as the indicator of opportunity, more recent work emphasizes additional elements, including time allocated to specific activities, the salience of content areas, the cognitive demands of student tasks, teacher-student interaction in classrooms, the use of homework assignments, and other aspects of teachers’ instructional strategies.
In first grade reading, Barr and Dreeben (1983) discovered that a count of the new words and phonics elements to which students were introduced, along with the time spent on these activities, was highly predictive of students’ learning over the course of first grade: the more students were taught, the more they learned. Gamoran showed that these measures of opportunity accounted for all of the effects of ability grouping on first grade reading achievement. Rowan and Miracle (1983) similarly demonstrated that third grade reading achievement could be predicted by counting the number of stories to which students were exposed in their reading materials. However, as students progress through the grade levels, it becomes increasingly difficult to measure opportunities for learning by simply counting the curricular units to which students have been exposed. Nystrand responded to this challenge by using classroom observations to assess learning opportunities embedded in teacher-student interaction, focusing particularly on whether classroom instruction is “dialogic,” involving authentic exchanges of information between teachers and students and among students, rather than recitation of facts and ideas predetermined by teachers. While dialogic instruction predicts learning in English and social studies, and partially accounts for the effects of ability grouping in English (Gamoran et al. 1995; Applebee et al. 2003), it is not as powerful a predictor as curricular units are in the early elementary years, and it lacks the content focus of those measures. Observational measures of learning opportunities have the advantage of being objectively rated across a wide range of classrooms, but the high cost of sending observers to classrooms may limit the scale at which this approach can be applied.
Other approaches to measuring opportunities for learning have adopted much more fine-grained perspectives. Porter (2002) has used questionnaires to ask mathematics teachers to report their instruction on a grid of 93 mathematics topics (e.g., place value, functions) by six cognitive demands (memorize facts, understand concepts, perform procedures, collect/interpret data, solve word problems, and solve novel problems). The resulting grid can then be mapped onto an assessment to indicate the degree of alignment between instruction and assessment. Similar schemes have been developed for science (Smithson & Porter 2004), and work is under way on a comparable approach to assessing opportunity in literacy. Gamoran et al. (1997) found that Porter’s approach to measuring opportunities predicted learning gains in mathematics, and explained most of the variation between different types of ninth grade mathematics classes (e.g., general math, algebra). Floden (2003) has noted two limitations of the approach: first, completing the survey is demanding and time consuming for teachers; and second, if opportunity is measured according to the alignment between instruction and assessment, that information is only useful if the assessment itself is meaningful. Current work by Smithson and Porter (2004) has extended the approach to develop the Survey of Enacted Curriculum, a tool for assessing the degree of alignment among state, district, or professional standards, instruction, and assessment. This development provides a way to ensure that the opportunities measured with the teacher survey are indeed meaningful, insofar as they are represented in external standards.
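To make the alignment idea concrete, the sketch below (in Python, not drawn from Porter’s publications) computes one simple alignment index of the kind commonly used with such topic-by-cognitive-demand grids: each grid is converted to cell proportions, and alignment is one minus half the summed absolute differences, so identical distributions score 1 and non-overlapping distributions score 0. The matrices and topic labels are hypothetical.

```python
import numpy as np

def alignment_index(instruction: np.ndarray, assessment: np.ndarray) -> float:
    """Alignment between two content grids (topics x cognitive demands).

    Each grid holds nonnegative weights (e.g., instructional time or number
    of test items per cell). Grids are normalized to proportions, and the
    index is 1 - 0.5 * sum(|p_instruction - p_assessment|), ranging from
    0 (no overlap) to 1 (identical distributions).
    """
    p = instruction / instruction.sum()
    q = assessment / assessment.sum()
    return 1.0 - 0.5 * np.abs(p - q).sum()

# Hypothetical example: 3 topics x 2 cognitive demands
# (rows: place value, fractions, functions; columns: memorize, solve problems).
taught = np.array([[10.0, 2.0],
                   [ 6.0, 4.0],
                   [ 1.0, 1.0]])   # e.g., lessons devoted to each cell
tested = np.array([[ 4.0, 4.0],
                   [ 4.0, 6.0],
                   [ 1.0, 1.0]])   # e.g., test items per cell

print(f"alignment = {alignment_index(taught, tested):.2f}")
```

In a Survey of Enacted Curriculum setting, the same computation can in principle be applied to any pair of grids, for example standards versus instruction or standards versus assessment.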
Recent US national surveys have included questionnaires for teachers about their instructional strategies and emphases, and these items have been used as measures of learning opportunities to predict student achievement. National surveys are much less detailed than Porter’s (2002) fine-grained approach, and one may question whether they have sufficient reliability to serve as adequate measures of opportunity. Burstein et al. (1995) argued that mismatches between measures obtained from year-end questionnaires and those from ongoing teacher logs indicated that questionnaire measures tended to be unreliable, although the authors found that questions about time spent in particular activities could be addressed more reliably than questions about general emphases on instructional topics or goals. Mayer (1999) also questioned the reliability of teacher survey measures, but he acknowledged that scales constructed from such measures may have sufficient reliability to be useful. Reviewing progress in the development of opportunity to learn measures, Floden (2003) commented that just as achievement tests have been developed over several iterations, with more reliable items replacing less reliable ones over time, survey measures of opportunity to learn may also improve with experience.
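One common way to check whether scales built from such questionnaire items hold together well enough to use is an internal consistency estimate such as Cronbach’s alpha. The sketch below is illustrative only (it is not taken from any of the cited studies, and the responses are hypothetical): it computes alpha for a small set of teacher-reported emphasis items.

```python
import numpy as np

def cronbach_alpha(items: np.ndarray) -> float:
    """Cronbach's alpha for an (n_respondents x n_items) response matrix.

    alpha = k/(k-1) * (1 - sum of item variances / variance of total score).
    """
    k = items.shape[1]
    item_vars = items.var(axis=0, ddof=1)
    total_var = items.sum(axis=1).var(ddof=1)
    return k / (k - 1) * (1.0 - item_vars.sum() / total_var)

# Hypothetical responses: 6 teachers rating emphasis on 4 related topics (1-5 scale).
responses = np.array([
    [4, 5, 4, 4],
    [2, 2, 3, 2],
    [5, 4, 5, 5],
    [3, 3, 3, 4],
    [1, 2, 2, 1],
    [4, 4, 3, 4],
])
print(f"alpha = {cronbach_alpha(responses):.2f}")
```

High internal consistency does not, of course, establish that teachers report their instruction accurately, which is the validity concern raised by Burstein et al. (1995) and Mayer (1999).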
New international comparative work has also focused on differences in the cognitive demands of instruction and assessment (Klieme & Baumert 2001). This research has identified distinct national profiles of learning outcomes, which are interpreted as reflecting different opportunities for learning present in instructional approaches that vary internationally.
Opportunities for Learning in an Era of Accountability: A US Example
Opportunities for learning have been examined in many national contexts, with a common focus on how opportunities are differentiated among students of varied backgrounds and destinations (Oakes et al. 1992). The US provides an example of how this research may enter the policy realm.
Issues of opportunities for learning have taken on new salience in the US in light of recent changes in federal education policy. The No Child Left Behind Act of 2001 (NCLB) requires states to set standards for student performance, to assess students according to these standards, and to hold schools accountable for achieving standards. Schools must meet standards not only for the performance of their students on average, but also for a variety of sociodemographic subgroups, including those who – in part due to limited learning opportunities – have traditionally underperformed. If schools are to be held accountable for the performance of all students, the goal of creating more equitable opportunities for learning gains increasing prominence. Moreover, if schools are applying sanctions to students who fail to meet standards, as is occurring in many districts, questions of fairness may arise if students are being held accountable for learning yet lack the relevant learning opportunities.
As the US standards movement developed from A Nation at Risk in the early 1980s through Goals 2000 during the 1990s, efforts to balance the federal role with states’ rights and the tradition of local control over education included plans to allow states to set content standards for opportunities for learning, overseen by a federal agency that would certify the quality of state standards. This plan was soon abandoned in the face of two major objections. First, states were unwilling to subject their standards to federal oversight, on grounds of both local autonomy and the cost of enforcement. Second, scholars and policymakers alike questioned the strategy of monitoring input standards when output standards (i.e., student achievement) were the ultimate goal. After all, even the best measures of opportunities for learning exhibit substantial slippage from opportunities to performance, as has long been recognized in the distinction between the enacted and the achieved curriculum. As Porter (1994: 431-2) noted, “opportunity to learn does not translate directly into student achievement. Schools must provide a quality educational experience, and students must apply themselves.”
The most recent federal legislation, NCLB, does not attempt to legislate opportunities for learning directly. Instead, NCLB’s approach to improving learning opportunities is to call for “highly qualified teachers” in every classroom. In particular, NCLB emphasizes subject matter expertise, on the theory that teachers with greater subject matter expertise will provide more rigorous, content-focused instruction that will enable students to reach achievement standards. On the one hand, it is hard to argue against teachers having greater subject matter knowledge. On the other hand, only a modest research base supports teacher content knowledge as a lever for change in student achievement. Consequently, the impact of increasing access to “highly qualified teachers” cannot be predicted at this time. The importance of high-quality opportunities for learning is also reflected in NCLB’s demand for instructional practices that reflect scientific evidence of their effectiveness. Whether such practices can be identified and implemented on a mass scale also remains to be seen.
Sociologists have long recognized that the contribution of schools and schooling to variation in learning among individual students is modest, compared to the importance of family background (Coleman et al. 1966). Schools produce learning, but variation from one school to another is relatively small compared to the wider variation within schools. Among the elements of schooling that do matter for how much learning schools produce – and why some students learn more than others – opportunities for learning are perhaps the most powerful predictor that has yet been detected. Despite the challenges of measuring opportunities, a variety of viable schemes has been developed, and ongoing research in this area will likely lead to further improvement. Opportunities for learning thus offer potential leverage for policy intervention.
References:
- Applebee, A. N., Langer, J., Nystrand, M., & Gamoran, A. (2003) Discussion-Based Approaches to Developing Understanding: Classroom Instruction and Student Performance in Middle and High School English. American Educational Research Journal 40: 685-730.
- Barr, R. & Dreeben, R. (1983) How Schools Work. University of Chicago Press, Chicago.
- Burstein, L., McDonnell, L. M., Van Winkle, J., Ormseth, T., Mirocha, J., & Guitton, G. (1995) Validating National Curriculum Indicators. RAND, Santa Monica, CA.
- Carroll, J. (1963) A Model for School Learning. Teachers College Record 64: 723-33.
- Coleman, J. S., Campbell, E. Q., Hobson, C. F., McPartland, J. M., Mood, A. M., Weinfeld, F. D., & York, R. L. (1966) Equality of Educational Opportunity. US Department of Education, Washington, DC.
- Croxford, L. (1994) Equal Opportunities in the Secondary School Curriculum in Scotland, 1971-1991. British Educational Research Journal 20: 371-91.
- Floden, R. E. (2003) The Measurement of Opportunity to Learn. In: Porter, A. C. & Gamoran, A. (Eds.), Methodological Advances in Cross National Surveys of Educational Achievement. National Academy Press, Washington, DC, pp. 231-66.
- Gamoran, A., Nystrand, M., Berends, M., & LePore, P. C. (1995) An Organizational Analysis of the Effects of Ability Grouping. American Educational Research Journal 32: 687-715.
- Gamoran, A., Porter, A. C., Smithson, J., & White, P. A. (1997) Upgrading High School Mathematics Instruction: Improving Learning Opportunities for Low-Income, Low-Achieving Youth. Educational Evaluation and Policy Analysis 19: 325-38.
- Husen, T. (Ed.) (1967) International Study of Achievement in Mathematics: A Comparison of Twelve Countries. Wiley, New York.
- Klieme, E. & Baumert, J. (2001) Identifying National Cultures of Mathematics Education: Analysis of Cognitive Demands and Differential Item Functioning in TIMSS. European Journal of Psychology of Education 16: 385-402.
- McKnight, C. C., Crosswhite, F. J., Dossey, J. A., Kifer, E., Swafford, J. O., Travers, K. J., & Cooney, T. J. (1987) The Underachieving Curriculum. Stipes, Champaign, IL.
- Mayer, D. P. (1999) Measuring Instructional Practice: Can Policymakers Trust Survey Data? Educational Evaluation and Policy Analysis 21: 29-45.
- Oakes, J., Gamoran, A., & Page, R. N. (1992) Curriculum Differentiation: Opportunities, Outcomes, and Meanings. In: Jackson, P. W. (Ed.), Handbook of Research on Curriculum. Macmillan, New York, pp. 570-608.
- Pallas, A. & Alexander, K. L. (1983) Sex Differences in Quantitative SAT Performance: New Evidence on the Differential Coursework Hypothesis. American Educational Research Journal 20: 165-82.
- Porter, A. C. (1994) National Standards and School Improvement in the 1990s: Issues and Promise. American Journal of Education 102: 421-49.
- Porter, A. C. (2002) Measuring the Content of Instruction: Uses in Research and Practice. Educational Researcher 31(7): 3-14.
- Rowan, B. & Miracle, A. W., Jr. (1983) Systems of Ability Grouping and the Stratification of Achievement in Elementary Schools. Sociology of Education 56: 133-44.
- Schmidt, W. H., McKnight, C. C., Cogan, L. S., Jakwerth, P. M., & Houang, R. T. (1999) Facing the Consequences: Using TIMSS for a Closer Look at US Mathematics and Science Education. Kluwer Academic Press, Boston.
- Smithson, J. & Porter, A. C. (2004) From Policy to Practice: The Evolution of One Approach to Describing and Using Curriculum Data. In: Wilson, M. (Ed.), Towards Coherence Between Classroom Assessment and Accountability. One Hundred and Third Yearbook of the National Society for the Study of Education, Part II. National Society for the Study of Education, Chicago, pp. 105-31.
- US Department of Education (1999) Digest of Education Statistics, 1999. US Department of Education, Washington, DC.