The Course Analytics Report provides a portrait of the aggregate activity of all students throughout an entire course. Activity from all the individual reviews completed to date is combined in this report. When a course is first created, the report contains no data because there's no activity on which to report; as instructors lead students through rounds of the write-review-revise cycle, the report expands.
Course Analytics provides a course-level view of student performance, focusing on averages to reveal trends; its companion, the Student Analytics Report, provides similar information but tracks an individual student's performance over the duration of the course against class averages.
Once a course has two reviews, an analytics report is available. The course-level analytics report can be accessed by clicking the “Analytics” link in the primary navigation bar under the title for any given course.
Each report contains five sub-sections with unique displays of student engagement data. Each section can be accessed via the report navigation bar.
The report sections include the following:
When using any of the reports, keep in mind the following:
The first section of the Course Analytics report is the Engagement Highlights display. It is designed to provide a quick overview of some of the most important engagement data collected about students. Its two tables offer a glimpse:
The data in the Feedback and Helpfulness table can be sorted by clicking any of the column headers. It can also be downloaded as a CSV file for more advanced searching and sorting. The Engagement by Task Type report provides similar review data but expands on it significantly.
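For instructors who prefer to work with the download outside the browser, the CSV can be explored with a short script. The sketch below uses Python's pandas library; the file name and column headers ("Student," "Comments Given," "Average Rating") are placeholders, so match them to the headers in your own export.

```python
import pandas as pd

# Placeholder file and column names; match these to your own CSV export.
df = pd.read_csv("feedback_and_helpfulness.csv")

# Sort by the volume of feedback given, highest first.
by_feedback = df.sort_values("Comments Given", ascending=False)

# Flag students whose average helpfulness rating fell below 3 stars.
needs_coaching = df[df["Average Rating"] < 3]

print(by_feedback.head())
print(needs_coaching[["Student", "Average Rating"]])
```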
These reports track individual engagement metrics and graph them over time, revealing trends in student performance. The three engagement metrics currently available are:
Each graph is accompanied by a table containing all the raw data used to plot it, as well as a toolbar offering several ways to download those materials.
Each point on a graph and each column in the data tables corresponds to a single review completed by students in the course. If an instructor has assigned seven reviews, there will be seven points on each graph and seven columns in each table. Hovering the cursor over any data point or column header will reveal the name of that review and its relevant measure; clicking any data point on a chart will open the individual report for that review.
This graph is meant to help an instructor see, in general, how much feedback is being exchanged during reviews.
Seeing comment word count plotted in this chart can demonstrate a number of things:
The table below the word count chart lists the exact number of comments given by every reviewer who participated in a review, as well as the class average for comparison.
This graph is meant to help instructors see, in general, how much feedback each reviewer has given relative to how much they've received. The given/received ratio is calculated from the number of words in the comments exchanged during reviews; for the class average, the individual averages are aggregated into a single score.
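To make the calculation concrete, here is a minimal sketch using invented numbers; the aggregation shown for the class average is one plausible approach and may not match Eli Review's exact calculation.

```python
def given_received_ratio(words_given: int, words_received: int) -> float:
    """Ratio of feedback given to feedback received, measured in words."""
    if words_received == 0:
        return float("inf")  # gave feedback but received none
    return words_given / words_received

# A student who gave 600 words of comments and received 400 words back:
print(given_received_ratio(600, 400))  # 1.5 -- gave more than they received

# One plausible class average: aggregate the individual ratios.
# (Eli Review's exact aggregation may differ.)
ratios = [1.5, 0.8, 1.0, 1.2]
print(sum(ratios) / len(ratios))  # 1.125
```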
The given/received ratio tells us the following:
Ideally, this graph would show a flat line at a ratio of 1: the closer the ratio is to 1, the more evenly feedback is distributed between writers and reviewers.
Students feel like peer feedback is fair when they get as much feedback as they give, and this data helps instructors monitor fairness. Here are some ways to think about ratios:
Writers are given the option to rate each comment they receive on a scale of 1-5 stars. While it's up to the individual instructor to coach students on rating their feedback and on how to assess it (what a one-star comment looks like compared to a five-star comment), those ratings are compiled into this trend graph.
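As a rough illustration of how those ratings could be compiled into one point per review, here is a small sketch with invented data; the actual calculation behind the trend graph may differ.

```python
# Invented data: star ratings (1-5) that writers gave to the comments
# they received, grouped by review.
ratings_by_review = {
    "Review 1": [5, 5, 4, 5, 4],
    "Review 2": [4, 3, 5, 4, 3],
    "Review 3": [3, 4, 3, 4, 3],
}

# Average the ratings for each review to get one point on the trend graph.
for review, stars in ratings_by_review.items():
    print(review, round(sum(stars) / len(stars), 2))
```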
This chart can help provide answers to questions like:
A common trend: ratings graphs often slope downward over time. In early reviews, writers are eager to rate all of the feedback they receive very highly out of a desire to be friendly and collegial. Over time, as they get better at differentiating helpful from unhelpful feedback, their ratings often decline.
The table below the helpfulness chart lists the helpfulness averages for individual students on the comments they gave as part of a review.
This report is intended to help instructors understand what the performance of an average student looks like in their Eli reviews. When assessing overall performance, knowing what the average student did in a course, as well as what kind of feedback they gave and received, can help illustrate where more coaching may be needed.
Additionally, the data in these tables is intended to help instructors answer larger questions like “how much help did each student receive?” when compiling reports on student performance and engagement.
This report provides instructors with statistics about student performance on the different types of tasks they have assigned to their students.
The table will help an instructor understand, at the class level, how students engaged with each type of task, and can inform how they design new tasks for this class or for future classes.
These features give instructors the ability to download all of their raw student data. While most of this data is accessible through a browser, these downloads let instructors run their own queries and sorts on the unprocessed data. This is particularly helpful for teacher-researchers investigating their own practice or writing researchers studying student review behaviors.
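As an example of the kind of query these downloads make possible, the sketch below totals the words of feedback exchanged per review using Python's pandas library. The file and column names are placeholders; see the Data Download FAQ for the actual fields included in each export.

```python
import pandas as pd

# Placeholder file and column names; see the Data Download FAQ for the
# actual fields included in each export.
comments = pd.read_csv("course_comments_export.csv")

# Count the words in each comment, then total them per review.
comments["word_count"] = comments["Comment"].str.split().str.len()
per_review = comments.groupby("Review")["word_count"].sum()

print(per_review.sort_values(ascending=False))
```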
For a complete breakdown of the data available in these downloads, and about formats and methods for using that data, see the Data Download FAQ. You can also view samples of the data available via download:
There are a few factors to keep in mind about what data is included in these reports: