Nate Eastman, Associate Professor of English, Earlham College
May 1, 2017
Introduction
In his 2003 Lancet article “Errors: Incompetence, Ineptitude or Failure,” Lionel Opie writes that “errors are of most interest when they reflect overt incompetence or ineptitude […] or failure to do what should have been done.”[1] This is another facet of a phenomenon Linus Torvalds described in a 2012 Slashdot interview (in response to a question about “microkernel” architecture):
The fact is, reality is complicated, and not amenable to the “one large idea” model of problem solving. The only way that problems get solved in real life is with a lot of hard work on getting the details right. Not by some over-arching ideology that somehow magically makes things work.
We all know this, but its reiteration is instructive: we solve problems by getting the details right. Much of my discussion as convener of our First-Year Seminar program, and as a Teaching and Learning Consultant, is about big ideas: experiential and project-based learning, flipped classrooms, high-impact experiences, and new concepts for signature academic programs.
But when we fail students, the root cause is rarely a failure of vision. It’s that something wasn’t done that ought to have been done; the failure is in implementation rather than concept: An assignment that relies on student groups meeting on a Saturday or Sunday, even though student athletes routinely travel off campus on weekends; students who fail an exam en masse after a series of under-attended review sessions; a student who enters an advanced class without necessary knowledge and skills that they (and faculty) know they ought to have picked up in its prerequisites.
Opie and similar thinkers would call these errors of ineptitude. As distinct from errors of ignorance, errors of ineptitude are defined by a failure to act on information that is already known. We know that student athletes travel on weekends, that study sessions must be advertised, and that knowledge and skills have prerequisites, but we act on that knowledge imperfectly. Opie’s “failure to do what should have been done” can mean “failure to make assumptions explicit,” “failure to consult an easily available resource,” or “failure to complete a simple, known task.” Examples might include:
- I include outside-of-class film screenings on a course syllabus. While it is obvious to me that attendance is required (because they’re on the syllabus), my students think the screenings are optional (because they’re outside of class hours).
- My colleagues and I set the same deadlines for our classes’ major projects. We aren’t surprised when our majors’ projects look rushed or haphazard, because this cross-scheduling happens almost every year.
- My department fails to collect syllabi for every class we offer that semester. The failure is discovered five years later, when they’re wanted for accreditation or in service of a student petition.
- When a student asks if I can meet at 2:30 next Wednesday, I say yes, only to discover that I already had a 2:30 meeting on a calendar I didn’t bother to consult.
Ineptitude isn’t the word I normally use when I hear about these incidents and others like them. My co-workers and students aren’t inept, and their errors of ineptitude stand in stark and baffling contrast with their usual habits of accomplishment. When a student misses an appointment, or a faculty member misses a deadline, my first response is usually, “this isn’t the person I know. Life is complicated, but this thing was easy, and it ought to have been done right. How did we get to a point where someone so good missed something so basic?”
Reading (or Not)
I’d like to use student reading (and the design of reading assignments) as a practical example of this problem. The complaint that students don’t regularly complete assigned reading is often warranted: Studies suggest completion rates of about 30%[2]. It’s safe to assume that most students are not ignorant of their reading assignments, and so at first glance the problem of student reading is really a problem of ineptitude writ large.
A second safe assumption is that students are rational and well-intentioned actors – that is, students choose not to read because they have a limited amount of time and competing priorities, like sports and jobs and family commitments. While we might disagree with how our students rank those priorities, we can at least acknowledge that they exist and have specific claims on students’ time and energy.
I try to account for those claims with what I call the triple bottom line. Traditional backwards design works from curricular goals to course goals, and from there to assignments, readings, and daily activities. It assesses itself (or neglects to) according to whether those goals are met. It has a single bottom line. There’s nothing particularly wrong with that, but the process is also constraint-blind; that is, it implies that a course exercise which meets its goals is an unqualified success even if (for instance) it requires students and faculty to invest dozens of hours for only marginal returns.
The results of this practice are familiar. Studies of first-year writing programs place instructors’ total time-on-course at between 231 and 312 hours, which works out to somewhere between 16 and 18 hours per week for a single class.[3] Likewise, the fact that instruments such as the NSSE (National Survey of Student Engagement) indicate that students spend about 15 hours per week preparing for classes[4] (rather than the 30 that federal credit hour guidelines assume) suggests that colleges demand (or pretend to demand) more time than students are willing to allocate. Assuming that students realistically cannot commit more time to coursework, it seems sensible to design curricula that make the most of the time they can commit.
Designing according to the triple bottom line, in contrast, makes student and faculty time co-equal to course goals, such that a course, assignment, or student experience is only effective if:
- It targets explicit, well-defined, and scaffolded learning goals in a sensible curricular and institutional context, and
- It makes intentional use of reasonable and explicitly-defined amounts of student time, and
- Its design, administration, evaluation, and assessment make intentional use of reasonable and explicitly-defined amounts of faculty time.
In other words, designing according to the triple bottom line changes the language that we use to talk about commitments of student and instructor time. Rather than asking ourselves whether a time commitment is unreasonable, we should ask whether any given unit of time is used optimally.
Ordinary Ineptitude
A reading assignment that meets all three bottom lines could be neatly contrasted with one that takes what Lowman (1995) calls the “laissez-faire approach” to reading, in which instructors:
announce assigned chapters, problem sets, or papers in the syllabus and rarely mention them again […] [T]hough requiring less effort and responsibility on the part of the instructor [this approach] sets up many students to achieve far less in a class than they would have done under more engaging and sophisticated instructor leadership (p. 230).
This “laissez-faire approach” fails (or at least does not intentionally succeed) according to the triple bottom line because it – most conventionally – doesn’t involve explicit and intentional learning goals, doesn’t consider student time commitments, and – most interestingly – uses instructor time sub-optimally (i.e. it requires very little instructor time but arguably wastes all of it).
Outside those bounds of total failure is this typical first-year reading assignment:
| Assignment | Bring to Class |
| --- | --- |
| Death by Black Hole to p. 37. Response #1. | DBBH. This section is on Moodle in case your book’s not yet in. |
The response due along with the reading asks students to select the passages that they thought most and least compelling, and to briefly explain the reasons why. The assignment further explains that the response will inform a discussion in which the class discerns effective writing strategies. An anonymous survey of the class indicated an assignment completion rate of about 94%.
A statistical rendering of a single class is just a numerical anecdote, and I don’t want to use it to support a point, but rather to raise a question: why the difference? It might be attributable to a small class size, the nature or author of the reading (an essay by Neil deGrasse Tyson), or its early placement in the academic year (i.e. before competing demands for student time fully emerge). It is also likely attributable to students’ ability to understand the reading assignment’s explicit goals and make reasonable (and accurate) inferences about its implicit ones.
Learning Goals
The common complaint that students don’t read for class often encodes a subordinate complaint: that they don’t read well – for which the usual remedy is explicit instruction in how to read. This may solve some problems, but not one of the most common: a simple lack of clarity concerning what constitutes successful reading.
Barre and Esarey’s Course Workload Estimator sorts reading into one of three types:
- Survey: Reading to catalogue main ideas or content. In this type of reading, it is OK to skip entire portions of the text. Most textbook reading is of this type.
- Understand: Reading to understand the meaning of each sentence rather than each idea, or to attend to the writer’s style, tone, or technique.
- Engage: Reading in order to work problems, draw inferences, question, or evaluate. After reading in this way, students might reasonably be asked to respond to a complex thesis from the reading as represented in the author’s own words.[5]
Most readers understand these differences implicitly, and have little trouble accepting that it is unusual to read a textbook for style for the same basic reasons it is unusual to read a novel for main ideas.
At the same time – unless made explicit – these differences are likely to confuse any student who wonders whether they have completed the reading successfully. If, for instance, a student has questions about differences between PNP and NPN transistors after reading a chapter on transistor design, how ought they proceed? It’s possible that:
- They’re expected to come to class knowing the difference (in which case the student ought either re-read or seek assistance).
- They’re expected to come to class ready to discuss areas of confusion (in which case the student ought to prepare specific questions for the next class session).
- The difference between PNP and NPN transistors is tangential or irrelevant, since the course is about technical writing and class discussion will concern specific elements of the reading’s style.
- They don’t need to fully understand the conceptual differences between PNP and NPN transistors as long as they can solve the problems at the end of the chapter.
In other words, the issue isn’t necessarily that students survey what they ought to engage; it’s that answering the question of what constitutes successful reading relies on students making complicated inferences about the purposes of a reading based on sparse or seemingly contradictory information. It might be possible to solve Shockley equations without understanding the conceptual differences between PNP and NPN transistors (it is), but that conceptual difference might also inform more advanced knowledge and skills (it does).
The problem is likely exacerbated when students work under unrecognized time constraints, and those inferences decide which parts of the reading to skim, which parts to read carefully, and which parts not to read at all.
Student Time
While the federal definition[6] of course credit hours assumes a minimum of “two hours of out-of-class student work per week for a semester hour,” so that a student should assume at least six hours of out-of-class work per week for each 3-credit course, typical NSSE data support the conclusion that most students spend considerably less.[7] Given a typical load of 15 credit hours, work-study or other employment, and commitments to extra- and co-curricular activities (as well as other forms of non-credit-bearing collegiate work such as theses, internships, presentations, and independent research), this is understandable even if it is unfortunate.
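To make the gap concrete, here is a back-of-the-envelope sketch of the weekly out-of-class budget those figures imply for a single 3-credit course. It is an illustration only, not part of any cited instrument; it uses the NSSE figure of about 15 hours of weekly preparation quoted earlier, and the five-course load is simply the 15-credit total divided by three credits per course.

```python
# Back-of-the-envelope weekly time budget per 3-credit course.
# Figures from the text: the federal definition assumes two out-of-class
# hours per credit hour per week; NSSE respondents report about 15 hours
# of weekly class preparation in total. The five-course load is derived
# from the 15-credit example above.

CREDITS_PER_COURSE = 3
FEDERAL_HOURS_PER_CREDIT = 2     # out-of-class hours per credit hour, per week
NSSE_WEEKLY_PREP_HOURS = 15      # reported prep time across all courses
COURSES_PER_TERM = 5             # 15 credit hours / 3 credits per course

federal_budget = CREDITS_PER_COURSE * FEDERAL_HOURS_PER_CREDIT   # 6 hours/week
reported_budget = NSSE_WEEKLY_PREP_HOURS / COURSES_PER_TERM      # 3 hours/week

print(f"Federal out-of-class budget per course: {federal_budget} hours/week")
print(f"Reported (NSSE) budget per course:      {reported_budget:.0f} hours/week")
```

Under those figures, the per-course budget is six hours a week on paper and about three hours a week in practice.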
In other words, the first step to ensuring that students read is to align the type and amount of reading you expect with the time a student can realistically commit. The following table[8] may be instructive:
| Type of Reading | 450 Words/Page | 600 Words/Page | 750 Words/Page |
| --- | --- | --- | --- |
| Survey; No New Concepts (500 wpm) | 67 pages per hour | 50 pages per hour | 40 pages per hour |
| Survey; Some New Concepts (350 wpm) | 47 pages per hour | 35 pages per hour | 28 pages per hour |
| Survey; Many New Concepts (250 wpm) | 33 pages per hour | 25 pages per hour | 20 pages per hour |
| Understand; No New Concepts (250 wpm) | 33 pages per hour | 25 pages per hour | 20 pages per hour |
| Understand; Some New Concepts (180 wpm) | 24 pages per hour | 18 pages per hour | 14 pages per hour |
| Understand; Many New Concepts (130 wpm) | 17 pages per hour | 13 pages per hour | 10 pages per hour |
| Engage; No New Concepts (130 wpm) | 17 pages per hour | 13 pages per hour | 10 pages per hour |
| Engage; Some New Concepts (90 wpm) | 12 pages per hour | 9 pages per hour | 7 pages per hour |
| Engage; Many New Concepts (65 wpm) | 9 pages per hour | 7 pages per hour | 5 pages per hour |
The distinction between types of reading is less important than the range of reading speeds; students effectively skim for concepts at 250-500 words per minute and read carefully (i.e. at the level of most instructors’ expectations) at about one-third to one-half that speed – as slow as 65 words per minute while carefully reading concept-rich material.
The sample reading assignment (about 35 400-word pages of Neil deGrasse Tyson essays) would clock in at about an hour; according to Barre and Esarey’s corresponding tabulation of student writing times, the 250-word response accompanying the reading clocks in at another 45 minutes. This is roughly in line with federal credit hour guidelines (assuming a one-hour class session), and about twice what students report on the NSSE.
Whatever factors are in play, in other words, this sample assignment requires significantly more time than the average instructor could expect the average student to commit; as a consequence, it seems reasonable that the reading assignment (or the response attached to it) ought to make clear which portions or aspects of the reading are most important. Other considerations aside, a student who fails to complete an assignment ought at least to fail gracefully.
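For anyone who wants to reproduce that estimate, here is a minimal sketch built from the per-minute rates in the table above. The function name, the dictionary layout, and the choice of the “Understand; No New Concepts” rate for the Tyson excerpt are illustrative assumptions of mine, not features of Barre and Esarey’s estimator; the 45-minute figure for the response is the one given above.

```python
# A rough reading-time estimator using the words-per-minute rates from the
# table above. The key names and the rate chosen for the sample assignment
# are illustrative assumptions, not part of Barre and Esarey's tool.

WPM = {
    ("survey", "none"): 500, ("survey", "some"): 350, ("survey", "many"): 250,
    ("understand", "none"): 250, ("understand", "some"): 180, ("understand", "many"): 130,
    ("engage", "none"): 130, ("engage", "some"): 90, ("engage", "many"): 65,
}

def reading_minutes(pages, words_per_page, purpose, new_concepts):
    """Estimated minutes to read `pages` pages at the table's rate."""
    return pages * words_per_page / WPM[(purpose, new_concepts)]

# Sample assignment: ~35 pages at ~400 words/page, read for understanding
# with few new concepts (an assumption about the Tyson excerpt).
reading = reading_minutes(35, 400, "understand", "none")   # ~56 minutes
response = 45                                              # the essay's figure for the 250-word response
print(f"Reading: ~{reading:.0f} min; reading plus response: ~{(reading + response) / 60:.1f} hours")
```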
The point isn’t so much that students read more slowly than we think (although a survey of our own First-Year Seminar syllabi suggests as much), that NSSE data are depressing, or that one standard or another ought to be used to calibrate student reading assignments; it’s that reading assignments are frequently developed and sequenced without any detailed or structured consideration of the time that they’ll require, or without reference to how much time is appropriate. In other words, this is a textbook case of coordinate ineptitude: although we all know students’ assignment time is limited – both by federal guidelines and by student practice – assignment designs rarely acknowledge this limitation explicitly.
Reading Assignment Design
All of this is obvious, and compels the equally obvious conclusion that any reading assignment ought to state clearly:
- Whether the reading is required or recommended,
- How the reading should be obtained (e.g. the bookstore, a library reserve, CMS, or an external website),
- How the reading will be used in class (e.g. to generate discussion questions, inform in-class project work, or develop research topics),
- How the reading relates to other learning goals (e.g. the differences between PNP and NPN transistors will be central to Ceramics 665), and
- Criteria for successful reading. Sample reading criteria (or goals) might look like this:
After completing the reading, you should be able to:
- Describe the differences between PNP and NPN transistors, and identify the transistor types represented in simple diagrams.
- Identify (and be prepared to discuss) the reading’s clearest descriptions of technically complex subjects.
- Describe the relationship between base-emitter voltage and collector current (the Ebers-Moll relationship) and its representation in Shockley equations.
Assignments also ought to be designed according to informed estimates of their completion time. If we were really concerned about failure modes (and we should be), each reading would also suggest what to do if a student believes they have not met the criteria for reading success: come to class with prepared questions, visit a learning center, or consult a supplement.
This all admits some obvious conclusions that are worth stating explicitly:
1. Every reading assignment in every class ought to include criteria for successful reading.
2. Every reading assignment ought not require time in excess of federal credit hour guidelines; every reading assignment probably ought not require more time than is roughly consistent with students’ informal workload expectations (as expressed through e.g. NSSE data).
3. Most faculty, given a clear demonstration of the problems caused by not doing (1) and (2), will agree with the necessity of doing them.
4. Most faculty who agree with the necessity of (1) and (2) will not do them, or will not do them consistently, because their time and attention are limited resources under constant personal and professional demand.
Instructor Time: A Note
I’ll start with a generalization about point #4 above: colleges often produce conditions for faculty that look very much like the conditions that faculty produce for students in reading assignments. In the absence of specific and explicit goals, it is difficult for students and faculty alike to decide what not to do when faced with limited resources and competing priorities. In other words, the problem of instructor time looks something like this:
- Instructors’ expected workload is not intentionally balanced against the amount of time they could reasonably be expected to work;
- We – that is, the College, program designers, and self-constituted groups of faculty – assume that instructors understand how their work will be used, and when and why it will be important;
- Instructors’ criteria for teaching success are unclear or unstated, such that reasonable inferences about what constitutes success can be wildly divergent.
Too often, best practices in writing instruction assume both that (a) instructor time is effectively unlimited and that (b) time-intensive instruction techniques are also best practices. The college-published manual given to me during my orientation to teaching first-year seminars declared that “weekends are made for grading.” To revisit an earlier statistic, studies of first-year writing courses place total time-on-course at between 231 and 312 hours.[9] Under a three-course load (and without research expectations), that translates into about sixty hours per working week. Regardless of whether those instructors are adjuncts or tenure-track, that is a phenomenal investment of time; a program or college could reasonably ask how well it’s being used.
Knowledge workers’ productivity is notoriously hard to quantify. Career game designer Evan Robinson’s whitepaper on programmer productivity acknowledges as much, and often reasons using comparisons to studies in manufacturing and construction. Nevertheless, it’s worth quoting at length:
Workers can maintain productivity more or less indefinitely at 40 hours per five-day workweek. When working longer hours, productivity begins to decline. Somewhere between four days and two months, the gains from additional hours of work are negated by the decline in hourly productivity. In extreme cases (within a day or two, as soon as workers stop getting at least 7-8 hours of sleep per night), the degradation can be abrupt.[10]
While Robinson draws on about twenty studies of worker productivity conducted over the last century, the numbers that likely matter most to the professoriate are here:
Productivity drops immediately upon starting overtime and continues to drop until, at approximately eight 60-hour weeks, the total work done is the same as what would have been done in eight 40-hour weeks.
That is – at least in every field that has so far been rigorously studied[11] – longer hours lead to lower immediate per-hour productivity (i.e. your ninth hour of the day is less productive than your fourth), and also to cumulative, long-term productivity declines (i.e. given sixty-hour weeks, your fourth hour of day five is more productive than your fourth hour of day forty). Those declines reach an inflection point where a team working fifty- or sixty-hour weeks accomplishes less per week than a team working forty-hour weeks; later, they reach a second inflection point where the team would have accomplished more by working forty-hour weeks from the outset.
While these inflection points are subject to all kinds of subtleties and considerable variation among individuals – not to mention differences among industries – a conventional sixteen-week semester comfortably passes both inflection points by several weeks.
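Robinson doesn’t supply a formula, but a toy model makes the shape of that claim concrete. Assume, purely for illustration, that hourly productivity falls off linearly with each consecutive 60-hour week, calibrated so that eight such weeks yield the same total output as eight 40-hour weeks (the figure quoted above); the two inflection points then fall out of the arithmetic.

```python
# Toy model of the two inflection points described above. Assumption: hourly
# productivity declines linearly with each consecutive 60-hour week, calibrated
# so that eight 60-hour weeks produce the same total output as eight 40-hour
# weeks (the figure quoted from Robinson). The linear form is illustrative only.

BASE_HOURS, CRUNCH_HOURS, CALIBRATION_WEEKS = 40, 60, 8

# Solve 60 * sum_{w=1..8} (1 - d*w) = 40 * 8 for the per-week decline d.
decline = (CRUNCH_HOURS - BASE_HOURS) * CALIBRATION_WEEKS / (
    CRUNCH_HOURS * CALIBRATION_WEEKS * (CALIBRATION_WEEKS + 1) / 2
)

cumulative = 0.0
first_inflection = second_inflection = None
for week in range(1, 17):                        # a sixteen-week semester
    weekly_output = CRUNCH_HOURS * max(0.0, 1 - decline * week)
    cumulative += weekly_output
    if first_inflection is None and weekly_output < BASE_HOURS:
        first_inflection = week                  # weekly output drops below a 40-hour week
    if second_inflection is None and cumulative < BASE_HOURS * week:
        second_inflection = week                 # cumulative output falls behind the 40-hour pace

print(f"Assumed productivity decline: {decline:.1%} per week of overtime")
print(f"Weekly output falls below a 40-hour week at week {first_inflection}")
print(f"Cumulative output falls behind the 40-hour schedule at week {second_inflection}")
```

Under that calibration, weekly output drops below the 40-hour baseline around week five and cumulative output falls behind just after week eight; a sixteen-week semester runs well past both points.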
If course instructors were craftsmen, soldiers, or factory workers, the management decision to allow 60-hour weeks over the course of a semester would be rightly derided as a massively expensive error at odds with a century’s worth of available data – not just because those workers get paid overtime, but because working 40-hour weeks would have accomplished more.
Any enterprising faculty member would probably note that all those occupations involve some amount of physical labor, and that they themselves can clock a lifetime of sixty-hour weeks at some peak level of intellectual performance. Everybody says this, and for the same basic reasons that a drunk insists he’s fine to get behind the wheel: the incapacitated or hindered consistently overestimate their own capability. On this issue, the Presidential Commission’s report on the Space Shuttle Challenger accident[12] is worth a read, not just because its compass includes engineers, managers, and other knowledge workers, but because other structural similarities to a college semester suggest themselves (such as 48-hour weeks during the 18-week period leading to the Challenger accident). The report concludes with a clear, simple statement:
The willingness of NASA employees in general to work excessive hours, while admirable, raises serious questions when it jeopardizes job performance, particularly when critical management decisions are at stake.
Of course, reports like this only get written because workers and managers alike assume that productivity scales linearly with working time. That assumption is at odds with more than a century of data, but it is deeply comforting to those of us who don’t like to acknowledge our own limitations, or who see some form of nobility in conspicuously ignoring them. For most of us, that assumption is also unlikely to be reality-tested by a shuttle explosion or an oil spill, and the short odds are that both the assumption and its derivative habits will persist.
I also have waking nightmares about the college governance that would surround a proposal like “faculty should expect three hours of teaching work for every credit hour of load.” But that doesn’t mean it’s impossible to set clear expectations and priorities, or for instructors to design courses and assignments with their own time explicitly in mind.
I have yet to encounter a collection of sample assignments, rubrics, or assessment materials that estimates how much time an instructor can expect to commit to them – this despite the fact that many are the products of large-scale norming exercises during which this information would have been relatively easy to gather.[13] The difference between a rubric that takes twenty-five minutes to score and one that takes fifteen is at least as important as any other difference between them. Given a fifteen-student class with weekly writing assignments, that difference adds up to one week of instructor time per course. That’s a forty-hour week, by the way. If you work sixty-hour weeks, it’ll probably save you two.
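The arithmetic behind that claim is simple enough to spell out; the sketch below assumes the conventional sixteen-week semester mentioned earlier.

```python
# Grading-time arithmetic for the rubric comparison above: a ten-minute
# difference per paper, fifteen students, weekly assignments, and (an
# assumption) a sixteen-week semester.

STUDENTS, WEEKS = 15, 16
SLOW_RUBRIC_MIN, FAST_RUBRIC_MIN = 25, 15

saved_hours = (SLOW_RUBRIC_MIN - FAST_RUBRIC_MIN) * STUDENTS * WEEKS / 60
print(f"Time saved per course: {saved_hours:.0f} hours")   # 40 hours, i.e. one 40-hour week
```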
Conclusions
While this doesn’t admit to much useful generalization or to conspicuously innovative thinking, it does admit to several guiding principles:
- It’s easy to dismiss noncompliance, usually by shifting blame. If students consistently don’t do the reading, the problem is with the reading, the class, or the assignment – not the students. If professors don’t design good reading assignments, the problem is similarly environmental or situational. But it’s easy to pretend otherwise by imagining that because students are responsible for reading (and professors are responsible for designing good assignments), the corresponding responsibilities of the College are somehow reduced.
- Nothing is obvious, especially to vulnerable populations. While the purpose of a reading assignment may be obvious to an instructor, and experienced students may be able to infer it from the focus of the course, inexperienced or mis-experienced students are less likely to make sound inferences. Likewise, new, contingent, or short-term faculty are almost certainly the ones least well-positioned to resist a culture of overwork and end up underperforming as a direct result.
- Widespread practice is the best measure of success. It’s easy to manufacture the illusion of success by pointing to the high end of the bell curve and generalizing backwards. I can always point to the 30% of students who read carefully (just as I can always point to the faculty who design great courses). In my mind, that means I’ve created conditions under which the motivated – or the competent – can succeed. But if systematic improvement is really the goal, I need to focus on the reading that doesn’t get done and the assignments that aren’t designed with attention to clear goals and student workload.
- Don’t be afraid of time tracking. For both individuals and departments, time-tracking exercises can be instructive – especially (and probably only) when they’re kept a safe distance from faculty review or evaluation. Sometimes the data itself is revealing (“I never knew I spent so much time on email!”). Other times, simple patterns suggest ways that specific individuals can better meet their goals. For instance, I discovered that I could raise my drafting pace on a project from about 750 words per hour to about 1,250 words per hour by writing for one hour first thing in the morning rather than letting my other duties dictate when that writing hour fell.
- You are only as good as your hatchet. A brute reality: if you give people more than they can do (or than they will elect to do), some things will not get done. This error may be compounded by encouraging suboptimal work habits (like long stretches of long hours). Every part of a process is better served in the long term by deciding on clear priorities and choosing what to cut from a project at the earliest reliable indication that it exceeds a team’s capacity. This allows a team to choose which parts of the project will not be completed and, often, to structure that non-completion or compensate for it. Put another way: the first step to doing anything well is deciding what not to do.
In terms of the triple bottom line, the reality seems to be that the only practical method for avoiding overwork is to budget time the way one budgets money: by setting specific, definite limits on the total that can be spent, and applying clearly-ranked priorities from there. Setting goals and working until they are met sounds dedicated (or, as the Presidential Commission on the Challenger Accident called it, “admirable”), but there is nothing either responsible or admirable about realizing avoidable failure.
FOOTNOTES
[1] Opie, L. H. (2003). Errors: Incompetence, ineptitude or failure. The Lancet, 362(9385), 731. https://doi.org/10.1016/S0140-6736(03)14235-3.
[2] See Burchfield, C. M., and Sappington, J. Compliance with required reading assignments. Teaching of Psychology, 27(1), 58-60; Gooblar, David. They Haven’t Done the Reading. Again. Vitae, September 24, 2014; Hobson, E. H. (2003, November). Encouraging students to read required course material. Workshop presented at the 28th Annual Conference of the Professional and Organizational Development (POD) Network in Higher Education, Denver, CO; Marshall, P. How much, how often? College & Research Libraries, 35(6), 453-456.
[3] See Richard Haswell’s metastudy, “Average Time-on-Course of a Writing Teacher” (2005).
[4] See Alexander McCormick “It’s about Time: What to Make of Reported Declines in How Much College Students Study.” Liberal Education, 97 (1).
[5] See Barre and Esarey’s Course Workload Estimator (2016), where the “Survey/Understand/Engage” distinctions between reading types, and the table that follows, are presented in fuller detail and with a thorough discussion of supporting research.
[6] See the US Department of Education’s Program Integrity Questions and Answers.
[7] See Alexander McCormick “It’s about Time: What to Make of Reported Declines in How Much College Students Study,” Liberal Education, 97 (1).
[8] See Barre and Esarey’s Course Workload Estimator, where this table, and the “Survey/Understand/Engage” distinctions between reading types, are presented in fuller detail and with a thorough discussion of supporting research.
[9] See Richard Haswell’s study, “Average Time-on-Course of a Writing Teacher” (2005).
[10] Robinson, Evan. “Why Crunch Modes Don’t Work: Six Lessons.” International Game Developers Association. For the earliest modern study of this type, see Münsterberg, Hugo. “Psychology and Industrial Efficiency.” Classics in the History of Psychology.
[11] See, for instance, “How Much Does Overtime Really Cost,” Mechanical Contractors Association of America, Bulletin No. 18A, January 1968; “Scheduled Overtime Effect on Construction Projects: A Construction Industry Cost Effectiveness Task Force Report,” The Business Roundtable, New York, November 1980; “Recommendations for NRC Policy on Shift Scheduling and Overtime at Nuclear Power Plants,” U.S. Nuclear Regulatory Commission Report NUREG/CR-4248, PNL-5435, July 1985; and Amagasa, T., and Nakayama, T. “Relationship between long working hours and depression in two working populations: a structural equation model approach.” J Occup Environ Med. 2012 Jul;54(7):868-74. doi: 10.1097/JOM.0b013e318250ca00.
[12] Volume 2, Appendix G, “Human Factors Analysis.”
[13] For one example of such a study (which includes a useful assignment/rubric pair), see Pagano et al., “An inter-institutional model for college writing assessment.” College Composition and Communication, 60(2).