Sometimes you come across an idea/process/presentation and it is Ah.Mazing.
Today for me, that presentation was given by Dr. Clifton Franklund from Ferris State University at the 2018 IUPUI Assessment Institute.
Amidst what sounds like a discouraging and chaotic situation on campus, Dr. Franklund has developed a beautiful, intuitive, and actually useful method of collecting, analyzing, and disseminating general education data for a campus of around 14,000 students.
The beauty of the system is its simplicity: it leverages free statistical and data tools to engage faculty (and students) meaningfully in understanding assessment data and using it to make informed choices.
The entire system was born of a shift in perspective on the assessment process – Dr. Franklund rightly suggests that too often, educators focus all their efforts on “planning” and “collecting” assessment data. Everyone is then too exhausted to spend time learning from the data or engaging in meaningful visualization and interpretation of what the data actually tell us.
Instead, Dr. Franklund turns the focus towards the discussion about what assessment data tell us – whether that leads to specific changes or simply staying the course, the entire process is geared towards faculty conversation about student learning.
If that isn’t the heart of assessment, I don’t know what is.
In the details
I could describe the process used at Ferris State based on the presentation I saw. But the BEST PART is that their new system is not only transparent on their own campus – it’s totally open source.
You (and I, and anyone else) can access the data, the reports, and all the files and documentation online. Yes, I just put open source and online access for anyone in a sentence about assessment data.
In a world where it often feels like we do assessment just so we can say we put a report in a drawer, this is a breath of fresh air.
The nuts and bolts of the system are this:
Assessment artifacts are classified into one of 14 types. Faculty evaluate student work on whatever scale they choose and then translate that score to a rubric score from 0 to 4.
Faculty enter the student IDs and rubric scores into a spreadsheet sent to them each semester, then submit the spreadsheet directly to the central database.
The Gen Ed coordinator (currently Dr. Franklund) downloads and compiles the spreadsheets by learning outcome. Courses are de-identified to the 100 and 200 level by department.
The coordinator analyzes the data using R, interprets the results, and writes an assessment report.
Discussion prompts are included in the report and a Disqus discussion board is embedded directly into the report.
Faculty have one year to review the report, play with the data if they choose, and discuss the findings.
At the end of the year, the discussion board is closed and the discussion is archived with the report, a beautiful way of closing the loop and promoting institutional memory.
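To make the compile-and-deidentify step concrete, here is a minimal sketch (in Python rather than the R that Ferris State actually uses). The 0–4 rubric scale, the grouping by learning outcome, and the collapsing of courses to 100/200 level by department come from the process described above; the column layout, course names, and outcome names are my own hypothetical placeholders.

```python
# Minimal sketch of compiling submitted scores by learning outcome and
# de-identified course level. All sample data below is hypothetical.
from statistics import mean

# Hypothetical rows as faculty might submit them:
# (student_id, course, learning_outcome, rubric_score on the 0-4 scale)
submissions = [
    ("S001", "BIOL 121", "Quantitative Literacy", 3),
    ("S002", "BIOL 121", "Quantitative Literacy", 4),
    ("S003", "ENGL 250", "Written Communication", 2),
    ("S004", "ENGL 250", "Written Communication", 3),
]

def deidentify(course):
    """Collapse a course to department + level, e.g. 'BIOL 121' -> 'BIOL 100-level'."""
    dept, num = course.split()
    return f"{dept} {num[0]}00-level"

# Group rubric scores by (learning outcome, de-identified course level).
compiled = {}
for student_id, course, outcome, score in submissions:
    compiled.setdefault((outcome, deidentify(course)), []).append(score)

# Summarize each group for the assessment report.
for (outcome, level), scores in sorted(compiled.items()):
    print(f"{outcome} / {level}: n={len(scores)}, mean={mean(scores):.2f}")
```

In the real system this summary would feed into the R-generated report, with the discussion prompts and Disqus board layered on top.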
Finally, an innovation
As has been an underlying theme behind several of today’s presentations, there’s not much that’s novel about the way we do assessment. The problems and solutions identified decades ago are still with us. So, what have we been doing?
Even when I go to a presentation about the experiences on a particular campus, it still seems like we’re doing the same things that should have been done, I don’t know, a decade ago.
I’m not sure the extent to which I can run with what was presented today, but I am certainly going to be thinking about what he did and percolating ways I can implement the Ferris State model either in the department or in other ways.