StevenXClontz’s Twitter Archive—№ 6,804

                            1. Okay, here's a 🧵 on how I've used single-question asynchronous quizzes since Fall 2020. @drspoulsen/1452748576052981761
                          1. …in reply to @StevenXClontz
                            First, the course is divided into well-defined learning outcomes (shout out to @Grading4Growth). In @CanvasLms you can represent these as "Outcomes".
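If you'd rather script this than click through the Canvas UI, a minimal sketch along these lines should work, assuming the Canvas REST API's outcome-group endpoints (`root_outcome_group` and `outcome_groups/:id/outcomes`); the base URL, token, course ID, outcome titles, and rating scale below are all placeholders, so verify the endpoint and parameter names against the Canvas API docs for your instance:

```python
import requests

# Placeholders -- substitute your Canvas instance, API token, and course ID.
BASE = "https://canvas.example.edu/api/v1"
HEADERS = {"Authorization": "Bearer YOUR_API_TOKEN"}
COURSE = 12345

# Look up the course's root outcome group so new outcomes can be attached to it.
root = requests.get(f"{BASE}/courses/{COURSE}/root_outcome_group", headers=HEADERS)
root.raise_for_status()
group_id = root.json()["id"]

# Create one Canvas Outcome per learning outcome (titles are illustrative).
for title in ["Outcome 1: Systems of linear equations", "Outcome 2: Row reduction"]:
    resp = requests.post(
        f"{BASE}/courses/{COURSE}/outcome_groups/{group_id}/outcomes",
        headers=HEADERS,
        # A list of tuples keeps the rating fields in the interleaved order
        # that array-of-hash form parameters expect.
        data=[
            ("title", title),
            ("mastery_points", "1"),
            ("ratings[][description]", "Meets Expectations"),
            ("ratings[][points]", "1"),
            ("ratings[][description]", "Not Yet"),
            ("ratings[][points]", "0"),
        ],
    )
    resp.raise_for_status()
```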
                        1. …in reply to @StevenXClontz
                          If you're using a textbook, an outcome is approximately a section (in the LA book I maintain with @siwelwerd et al., it's exactly a section), but use your judgement - each outcome will have one type of exercise used to measure it.
                      1. …in reply to @StevenXClontz
                        For each outcome I have Class Activities where we explore the topic, then Suggested Homework to practice relevant exercises. Students submit Reflections with at least one of those exercises to make sure we're on the same page; then they can take a Quiz.
                    1. …in reply to @StevenXClontz
                      Each quiz is marked based on my perception of the progress that student has made toward that learning outcome. See the image below.
                      [attached image not available in this archive]
                  1. …in reply to @StevenXClontz
                    The quiz/reflection/quiz loop continues until a student successfully "✔️ Meets Expectations".
                1. …in reply to @StevenXClontz
                  At the end of the semester, course grades largely consider how many ✔️s the student has earned. This semester, I've decided to also consider the result of a proctored in-person final exam, along with a Final Reflection where each student makes a case for their letter grade.
              1. …in reply to @StevenXClontz
                The outcome reflections become available once we've wrapped up discussion of the outcome in class. Then the quiz is due within 1-2 weeks (really, it's 2-3 weeks before access in Canvas is locked, but they can request extensions through the end of the semester).
            1. …in reply to @StevenXClontz
              Two or three days a week, I grade all the quiz submissions I have on hand. This is pretty quick work; I don't obsess over how many points a question gets, and it's pretty easy to see whether a student "gets it" or not. If they get it, they fix minor errors; otherwise they try again.
          1. …in reply to @StevenXClontz
            So grading isn't bad. But where do all these quizzes come from? Well, I wrote an app for that. checkit.clontz.org/
        1. …in reply to @StevenXClontz
          By authoring CheckIt question templates and the logic generating the random elements for each learning outcome, I can produce (1) a website with suggested homework along with answers, (2) printable PDF quizzes, and (3) exports of random questions to an LMS (Canvas/D2L/Moodle).
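This isn't CheckIt's actual authoring format (real banks are written with the platform's own template and generator conventions), but here is a minimal sketch of the underlying pattern: a seeded generator draws the random elements, and one template renders both the student-facing exercise and the answer key.

```python
import random
from dataclasses import dataclass

@dataclass
class Exercise:
    statement: str
    answer: str

def generate_linear_system(seed: int) -> Exercise:
    """Build a random 2x2 linear system with a known integer solution."""
    rng = random.Random(seed)
    # Choose the solution first so every version has a clean answer.
    x, y = rng.randint(-5, 5), rng.randint(-5, 5)
    a, b = rng.randint(1, 4), rng.randint(1, 4)
    c, d = rng.randint(1, 4), rng.randint(-4, -1)  # determinant is never zero
    statement = (
        f"Solve the system:\n"
        f"  {a}x + {b}y = {a * x + b * y}\n"
        f"  {c}x + {d}y = {c * x + d * y}"
    )
    answer = f"x = {x}, y = {y}"
    return Exercise(statement, answer)

# One seed per student (or per quiz attempt) yields a distinct but equivalent version.
for seed in range(3):
    ex = generate_linear_system(seed)
    print(ex.statement)
    print("Answer:", ex.answer)
    print()
```

Because the solution is chosen before the coefficients, every version works out cleanly, which keeps the focus on the student's reasoning rather than arithmetic noise.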
      1. …in reply to @StevenXClontz
        Here's an example: teambasedinquirylearning.github.io/checkit-tbil-la/#/bank/V3/1/ With hundreds of random questions available, each student gets a different version (probabilistically). But they all measure the same thing, and despite the random values, what I'm really looking at is the explanations students provide.
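To put a rough number on "probabilistically": if versions are drawn independently and uniformly from the bank, the chance that two particular students get identical quizzes is 1/N, while the chance that some pair in a section collides is the usual birthday-problem product. A quick sketch with made-up numbers (a 300-version bank, a 30-student section):

```python
def prob_some_pair_matches(versions: int, students: int) -> float:
    """Birthday-problem style: chance at least two students draw the same version,
    assuming each draw is independent and uniform over the bank."""
    p_all_distinct = 1.0
    for i in range(students):
        p_all_distinct *= (versions - i) / versions
    return 1.0 - p_all_distinct

# Hypothetical numbers for illustration only.
versions, students = 300, 30
print(f"Chance any two given students match: {1 / versions:.2%}")
print(f"Chance some pair in the section matches: {prob_some_pair_matches(versions, students):.0%}")
```

So a repeat somewhere in a section is fairly likely with a bank of a few hundred, but any particular pair of students almost certainly has different numbers; either way, every version measures the same outcome.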
    1. …in reply to @StevenXClontz
      So @CheckItProblems provides the "final answer", which I can compare with the student's solution; whether or not those match up, I can follow the logic of their solution to ensure that they not only can produce the right answer, but also show understanding of why it's right.
  1. …in reply to @StevenXClontz
    I could ramble more but I'll end the thread here - let me know if you have questions. @CheckItProblems can be run in a web browser (github.com/StevenClontz/checkit-platform/#dashboard) to either generate custom random exercises or author new exercises. I hope to make a new release in December that's even easier to use.