
Habits of Success: Seeking the Invisible Thread

By Contributing Blogger — October 23, 2015

This post is by Adam Carter, chief academic officer for Summit Public Schools.

There are two adages you’ll hear regularly at Summit Public Schools:


  • Assess what you value; value what you assess.
  • Not everything that can be counted counts, and not everything that counts can be counted.

We owe these to great minds in education and science. And because these statements can, at times, offer contradictory courses of action, we must attend to the words of yet another famous thinker, who wrote that “The test of a first-rate intelligence is the ability to hold two opposing ideas in mind at the same time and still retain the ability to function.”

And here, at the junction of what we can assess and what we value, is where we often find ourselves when discussing the measurement of one of the cornerstones of our academic model: Habits of Success.

In my first blog in the Learning Deeply series, I introduced Summit’s outcomes-oriented approach to K-12 education. Instead of talking first about master schedules, course offerings, and curricula, we ground ourselves in our mission:

To prepare a diverse student population for success in a four-year university, and to be thoughtful, contributing members of society.

We then question every word: what do we mean by diverse? What is success? Why limit ourselves to four-year university admission? The answers to these questions are not obvious, nor are they static.

Once we’re clear on our operational definitions, we come to a set of interrelated questions, such as:


  • What are the primary drivers of college readiness?
  • How do we assess what we value in ways that are empowering and motivating for our students?
  • How do we ensure that we offer students the supports and materials that they need to be successful?
  • How do we most effectively and efficiently organize our limited resources to move students intentionally towards success?
  • How do we make sure we’re continually improving?

We believe that the primary drivers of college readiness are these:

  • Cognitive Skills
  • Content Knowledge
  • Expeditions
  • Habits of Success

Let’s answer the remaining questions for each:

Driver: Cognitive Skills

Driver: Content Knowledge


  • How do we assess it? In a binary fashion. We offer students a progression of 10-question, on-demand content assessments. These are organized by objectives and linked to prevailing standards frameworks (CCSS, NGSS, ACTFL, AP). Students pass a content assessment with a score of “8” or above.
  • How do we empower students to learn it? In Personalized Learning Time (PLT). Students have access to multi-modal playlist resources, curated by teachers, which allow them to develop content expertise within and beyond the school day. Students may also use peers, content experts, and mentors to support their acquisition of content knowledge via playlists.
  • How do we organize our resources? On day 1 at Summit, every student knows every bit of content knowledge required to pass every course--not just every course in her grade, but every course in every discipline, grades 6-12. Using standards as a guide, we have developed competency-based pathways to support students in learning, and in demonstrating mastery of, content knowledge.

Driver: Expeditions


  • How do we assess it? In two ways: (1) Grades. Some courses lend themselves to an A-F grading scheme, including all college-credit-eligible courses. (2) Badges, or pass/fail. Some electives are best assessed by the demonstration of core competencies. Yoga, for example, is a valuable practice that can be assessed by a student’s commitment to, and demonstrated mastery of, defined concepts and skills.
  • How do we empower students to learn it? Students have eight dedicated weeks of Expeditionary Learning at Summit. These expeditions offer perspective-changing and passion-igniting experiences for students, and do the same for core teachers, who have these eight weeks to engage in rich professional development modules. Our Expeditions team rotates from school to school to offer only the highest quality courses.
  • How do we organize our resources? By creating immersive experiences for learning the arts, future planning, STEM, wellbeing, and leadership, students are able to conduct deep-dives into topics that, time and again, result in changes of direction and clarity of focus. Grounded in research (e.g., Levine) on the need to explore passions and purpose, Expeditions offers students community internships, service learning, electives, education through travel, and myriad courses to match their interests.

Driver: Habits of Success


  • How do we assess it? We do not systematically assess Habits of Success. We have piloted a range of promising change ideas, but none have been institutionalized.
  • How do we empower students to learn it? In Community Time, PLT, and 1:1 Mentoring. All students have daily interactions focused solely on Habits of Success.
  • How do we organize our resources? In SY2013-14, we employed a “Non-Cognitive Skills Specialist,” who curated resources, developed curricula, and led professional development. Additionally, we dedicate 10 minutes of 1:1 mentoring per student each school week, run a “Student Success Class” during daily PLT aimed at promoting high-leverage learning strategies, and offer a 60-75 minute block on Fridays dedicated to whole-group development of Habits of Success during Community Time.

At this moment, any Summit faculty member can tell you where every Summit student has areas of strength and areas of weakness in content knowledge and cognitive skills. We track the expeditionary learning experiences students choose. These elements of college readiness count, and so we count them--daily--and make that data accessible to parents, teachers, mentors, administrators and, most importantly, students themselves.

Habits of Success are critically important. As educators, parents, and community members, we know that they are the invisible thread that ties together the fabric of relationships and organizations, and that they are bound intimately with motivation and achievement. They count, but we don’t yet count them. We should. Here’s why, in the words of Angela Duckworth and David Yeager:

Non-cognitive qualities are diverse and collectively facilitate goal-directed effort, healthy social relationships, and sound judgment and decision making...such qualities powerfully predict academic, economic, social, psychological, and physical well-being... The field urgently requires much greater clarity about how well, at present, it is able to count some of the things that count.

Common Language

It is very difficult to actively develop qualities that you cannot name, and impossible to do so at scale. So we need a common language. Here, though, we cannot do as we did with cognitive skills and content knowledge and draw from leading research--there is no Common Core for Habits of Success. There are no state standards. There is no consensus in the field. Some researchers would take issue with the phrase Habits of Success altogether. As practitioners, we are left to draw these lines without the backing of even a plurality of the research community.

The lines we have chosen to draw are based on what research is available:


  1. Emotional Intelligence (the CASEL framework)
  2. Self-Directed Learning Behaviors (based on the work of Yeager, Dweck, Hulleman)
  3. Self-Directed Learning Cycle (reflect - goal - plan - learn - show)
  4. Learning Strategies (e.g., note taking)
  5. Growth mindset (Dweck)
  6. School & Classroom Culture (aligned with core characteristics of respect, responsibility, courage, compassion, curiosity, integrity)

This remains a sprawling collection of processes, frameworks, and lists, but it gives us a small toehold in this slippery endeavor.

Assessment Instruments

We also suspect that a portfolio approach to assessing these Habits of Success is possible, and we have laid much of the groundwork to collect such assessment information. For instance:

Emotional Intelligence: Likert scales. Students self-assess on the dimensions of Emotional Intelligence using a five-point Likert scale (1 = Strongly Disagree; 5 = Strongly Agree). Mentors also assess students using the same scale. The two assessments are placed side by side and become the foundation of a mentoring conversation.

Note: When we piloted this approach, calibration against the Likert scale was not sufficient to yield reliable, valid assessment data; however, the ratings were effective at spurring reflective conversations between mentor and mentee, and were good opportunities for learning and developing goals around EI together. They also offer interesting data sets to analyze. For instance, when we put student self-assessments beside their grades and their mentors’ assessments, we found that students who rated themselves well above their mentors’ ratings tended to have low grades (particularly “C”s and below), while students with “A”s typically had a self-perception of EI that matched, or even lagged below, that of their mentors.
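To make the mechanics concrete, here is a minimal sketch of how self-ratings and mentor ratings might be compared and mismatches surfaced for a mentoring conversation. The data shapes, dimension names, grade cutoff, and two-point gap threshold are all illustrative assumptions, not Summit’s actual instrument.

```python
# Hypothetical sketch: compare student and mentor Likert ratings (1-5)
# of Emotional Intelligence dimensions and flag the pattern described
# above: inflated self-perception co-occurring with low grades.

LIKERT_GAP_FLAG = 2  # assumed: self-rating exceeds mentor rating by >= 2

def perception_gaps(self_ratings, mentor_ratings):
    """Return {dimension: self - mentor} for dimensions rated by both."""
    return {
        dim: self_ratings[dim] - mentor_ratings[dim]
        for dim in self_ratings
        if dim in mentor_ratings
    }

def flag_for_conversation(student):
    """Flag students whose self-perception runs well ahead of their
    mentor's perception while their grades are low."""
    gaps = perception_gaps(student["self"], student["mentor"])
    inflated = [dim for dim, gap in gaps.items() if gap >= LIKERT_GAP_FLAG]
    return {
        "name": student["name"],
        "inflated_dimensions": inflated,
        "flag": bool(inflated) and student["grade"] in ("C", "D", "F"),
    }

# Invented example records, not real student data.
students = [
    {"name": "A.", "grade": "A",
     "self": {"self-awareness": 3, "self-management": 4},
     "mentor": {"self-awareness": 4, "self-management": 4}},
    {"name": "B.", "grade": "C",
     "self": {"self-awareness": 5, "self-management": 5},
     "mentor": {"self-awareness": 2, "self-management": 3}},
]

for s in students:
    print(flag_for_conversation(s))
```

In practice, the useful output is less the flag itself than the per-dimension gaps, which give mentor and mentee something specific to talk about.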

Self-Directed Learning (SDL) Behaviors: Back-end data analysis in partnership with SRI. To this point, the most interesting assessment information we have about students’ self-directed learning behaviors comes from an in-depth analysis that we have conducted with SRI to determine the SDL profiles of students.

Note: This research is ongoing, and built on the rich data sets we have because all Summit students, as well as our Basecamp partners across the country, use a single learning platform (the Personalized Learning Platform) that promotes self-directed learning. One promising aspect of this research is the ability to identify productive (and unproductive) learning patterns algorithmically and trigger immediate interventions. Building targeted surveys into the platform will also give us evolving pictures of students’ self-reported mindsets beside their academic performance, offering the potential for ever more focused in-person and in-platform interventions.
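One way to picture the algorithmic-trigger idea is a simple rule over platform event logs. Everything here--the event names, the threshold, and the rule itself--is a hypothetical sketch, not the actual SRI analysis.

```python
# Hypothetical sketch: flag an unproductive self-directed-learning pattern
# (repeatedly retaking a failed content assessment without opening any
# study resources in between) so a human or in-platform intervention
# can be triggered.

from collections import Counter

RETRY_THRESHOLD = 3  # assumed: 3 failures with no study in between

def needs_intervention(events):
    """events: ordered list of (event_type, objective) tuples.
    Return the set of objectives showing the fail-without-studying pattern."""
    failures_since_study = Counter()
    flagged = set()
    for event_type, objective in events:
        if event_type == "opened_playlist":
            # Studying resets the counter for that objective.
            failures_since_study[objective] = 0
        elif event_type == "failed_assessment":
            failures_since_study[objective] += 1
            if failures_since_study[objective] >= RETRY_THRESHOLD:
                flagged.add(objective)
    return flagged

# Invented event log for illustration.
log = [
    ("failed_assessment", "cell-biology"),
    ("failed_assessment", "cell-biology"),
    ("failed_assessment", "cell-biology"),   # third failure, no study: flag
    ("opened_playlist", "algebra-linear"),
    ("failed_assessment", "algebra-linear"), # studied first: no flag yet
]
print(needs_intervention(log))
```

A real analysis would of course learn such patterns from the data rather than hand-code them, but the payoff is the same: a signal that arrives while there is still time to intervene.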

Self-Directed Learning Cycle: Quantities pulled from the Personalized Learning Platform:

  • Goals set
  • Goals accomplished
  • Reflections logged
  • Length of reflections
  • Plans completed
  • Work submitted on time
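As a sketch, these quantity metrics might be computed from a per-student record exported from the platform. All field names here are invented for illustration; they are not the platform’s actual schema.

```python
# Hypothetical sketch: compute the SDL-cycle quantity metrics listed
# above from an assumed per-student platform export.

def sdl_quantities(record):
    """Return the quantity metrics for one student record."""
    goals = record["goals"]
    reflections = record["reflections"]
    work = record["work"]
    return {
        "goals_set": len(goals),
        "goals_accomplished": sum(g["done"] for g in goals),
        "reflections_logged": len(reflections),
        "avg_reflection_length": (
            sum(len(r) for r in reflections) / len(reflections)
            if reflections else 0
        ),
        "plans_completed": record["plans_completed"],
        "on_time_rate": (
            sum(w["on_time"] for w in work) / len(work) if work else 0
        ),
    }

# Invented example record, not real student data.
record = {
    "goals": [{"done": True}, {"done": True}, {"done": False}],
    "reflections": ["Short note.", "A longer reflection on this week's plan."],
    "plans_completed": 4,
    "work": [{"on_time": True}, {"on_time": True}, {"on_time": False}],
}
print(sdl_quantities(record))
```

Counting is the easy half; as noted below, assessing the quality of those goals and reflections is the harder and more interesting problem.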

The more interesting--and much more difficult-to-gather--data comes from assessing the quality of students’ reflections and goals. We have not yet systematically collected quality information as a network, but are excited to move in that direction--a hefty logistical challenge.

Learning Strategies: Badged Performance Tasks. We piloted badging in our 2015 summer course “You’re in the driver’s seat.” We also developed short instructional videos around a set of research-based, high-leverage learning strategies in partnership with GreatSchools.

Badging is now built into our Personalized Learning Plan (PLP) platform, but we have not yet normed as a community with the calibration and rigor required to roll out learning strategy badges across our network. Ideally, badges will:


  • Make it clear that we value learning strategies
  • Offer students and teachers the support they need to learn and teach these strategies effectively in PLT
  • Provide logical incentives for students who demonstrate fluency and competence in high-leverage learning strategies
  • Create a springboard for authentic student-to-student collaboration and tutoring

  • Reframe mentoring conversations from “do you know how to...?” to “I know you can...so why didn’t you do it here?”

Growth Mindset: Student Agency Improvement Community Survey. Along with other members of Carnegie’s SAIC, we’re piloting this survey with multiple administrations across sites throughout the year.

School and Classroom Culture: Surveys (YouthTruth and a homegrown survey). Research has, time and again, indicated that students are adept at assessing the learning environment, particularly along the dimensions of:


  • Does this environment support my learning?
  • Do the adults in this place care about me?
  • Do my teachers/mentors organize learning effectively?

If you want to read more, look at the Measures of Effective Teaching (MET) study and easy-to-read books such as Daniel T. Willingham’s Why Don’t Students Like School?

We’re also actively engaged in multiple pilots across the network, from trying this interesting approach involving daily “mood checks,” used at Alpha Public Schools in San Jose, to using mental contrasting to support students in making realistic plans based on their aspirations.

What next?

So now we’re left with a lot of potential for common language and some reasonable hunches about how to develop a diverse portfolio of assessments to support the development of these Habits of Success. We also have a sense of how we would use these assessments to support students and improve our programs.

But because we don’t have a research consensus, we’re moving slowly. And, operationally, we’re concerned with questions such as:


  • Is our best stab at common language simply too sprawling to be implementable for our faculty, or useful for our students?
  • How do we ensure that we “do no harm” at all times? We want to best support students, but we don’t want to create perverse incentives.
  • Where should we be “tight,” and where “loose”? Clearly, nobody has this fully figured out, and we want to create room for innovation, but innovation requires a disciplined process, and a set of measures of its own.
  • How does a portfolio approach really play out? Using so many measures honors the complexity of the work, but could lead to rudderlessness or inaction.
  • By setting aside time for assessment of these Habits of Success, are we taking away time that could be better used by faculty in fostering them with students?
  • Even if we have a common language and common assessments, how do those assessments lead to values-aligned action in the best interests of students? We know that we don’t want to use these assessments to evaluate teachers, to draw definitive conclusions about students, or to promote or hold back students, but without any direct consequences (positive or negative), does a formal assessment approach make sense, anyway, particularly when the stakes are baked into our existing grades, and into life, generally?
  • Finally, without broad-based consensus by practitioners or researchers, and with plenty of doubts, how comfortable are we going to be saying to our faculty: “This is our best thinking. It’s only going to work if we act in unison. So, here’s why working together to define and measure Habits of Success is essential. Let’s understand this language, let’s agree to these assessments, and let’s go with it”?

To the last question, I’m not comfortable. But, maybe it’s time to add another adage to our ever-growing list: “Nothing in the world is worth...doing unless it means effort, pain, and difficulty.” I look forward to sharing what we learn as we do the uncomfortable work of teasing apart and counting the heretofore invisible threads of Habits of Success. See the post-script for the nitty-gritty ideas we’re hoping to test soon, and to share in future blog posts.

Post-Script: What we can do now

Emotional Intelligence:


  1. Have students assess themselves, using the Likert Scale on all dimensions of Emotional Intelligence. Include brief explanations.
  2. Have mentors do the same for mentees.
  3. Put those reports against academic progress.
  4. Visualize it for students, teachers, mentors, parents, and administrators.
  5. Discuss trends, questions, and possible actions.
  6. Take action and document movement.
  7. Repeat quarterly, and build this process into our Community Time curriculum and our data cycles.

Self-Directed Learning Behaviors:


  1. Continue the work with SRI until we have found online behavioral profiles that appear especially effective and efficient.
  2. Work with the research community to determine mindset interventions that address the underlying issues that result in the demonstrated online behaviors.
  3. Track changes in behavior after these mindset interventions are established.
  4. For the successful interventions, ask, “is this something that we can effectively do within the platform, or does it require human intervention?”
  5. If the intervention can be applied effectively in-platform, build it. If only in person, develop professional development to support its administration.

Self-Directed Learning Cycle


  1. Track and report quantity.
  2. Put these numbers against academic performance (achievement and growth).
  3. Analyze trends.
  4. Conduct tests, in schools/classrooms that are amenable, for quality checking various elements of the SDL cycle.
  5. Determine the most efficient and reliable methods for conducting quality checks.
  6. Take aligned action based on the results of these quality checks.
  7. Determine whether or not the return on investment is high enough to warrant continued work in this area.

Learning Strategies


  1. Develop 3-5 badges for the highest-leverage learning strategies, preferably in partnership with a motivated site or grade-level team.
  2. Work intensively with the site or grade-level team to assess these learning strategies and adapt Personalized Learning Time as a result.
  3. Share learnings with others, and codify learnings both in the PLP and in professional development modules.

Growth Mindset and School & Classroom Culture


  1. Administer the relevant survey(s) at each site.
  2. Discuss and evaluate the results as a faculty.
  3. Develop action plans.

The opinions expressed in Learning Deeply are strictly those of the author(s) and do not reflect the opinions or endorsement of Editorial Projects in Education, or any of its publications.