KEEN Assessment Road Map: Review and Next Steps
Blake Hylton, Ohio Northern University
1) Provide better guidance for program assessment.
2) End development of new survey instruments, focusing efforts on validation and adoption of those instruments already in process.
programmatic assessment of KEEN outcomes. This would include a series of illustrative case studies examining several of the approaches above. These case studies would identify key structural elements of the assessment architecture and review the objectives, rubrics, and instruments applied across the program. This latter section should focus less on the individual instruments and objectives used than on how they were selected and how the complete architecture was designed. A recent VentureWell blog post by Gary Lichtenstein and Sarah Zappe provides an early road map for how such a guide might be organized.
Deployment of Direct (e.g. Rubric, Artifact) Assessment
Current State: Direct assessment has been, and continues to be, a significant challenge for the network as a whole. Published direct assessment tools include the rubrics underpinning the artifact assessment mechanism leveraged by Rose-Hulman, mentioned previously. There are also rubrics, such as those developed by Lawrence Tech in 2016 for their Design Studio course, which are highly course- and assignment-specific and not explicitly mapped to the current mindset framework. Others, such as those developed by ONU in 2015 or by Villanova, also in 2015, predate the full adoption of the current 3C’s framework and need to be revisited within it. Rubrics appear to be widespread across the network: the aforementioned Working Group survey found 16 of 22 responding institutions using rubrics for assessment. Of these, however, almost none are present in the literature or able to be disseminated across the network. In short, previous efforts to develop direct assessment tools and guidance were cut short by the rapidly shifting assessment framework of the last 5-10 years, and the direct assessment tools that do exist are not widely shared or adopted. However, as the framework has appeared to settle into a mostly steady state, it may be time to revisit these efforts.
Investment Opportunities: Best practices and generalized assessment tools contextualized around the KEEN outcomes would be valuable to all network members. This may include, for example, efforts to develop rubrics built around the 3C’s in the style of the AAC&U VALUE Rubrics. An effort was undertaken at ONU in the early 2010s to map existing VALUE Rubrics onto what was then the KEEN framework, but that effort sought only to identify how existing rubrics might be applied, not to develop new or reorganized ones. Further, because that work was undertaken while the 3C’s framework was still forming, it does not fully map to the currently adopted definitions, nor does it appear to have been widely adopted. New rubric development would require validation and reliability work in addition to dissemination around the network. Similar efforts might develop guidance on rubric design or general assessment tips; this could focus on identifying common types of activities used to deploy KEEN outcomes, perhaps through a review of ICE Workshop submissions, and developing guidance for each type. A handbook on direct assessment of KEEN outcomes, similar to or perhaps included within the Program Assessment handbook mentioned previously, might be a valuable deliverable in this area.
KEEN Student Outcome Learning Progression
Current State: There has been, to my knowledge, no effort thus far to identify a learning progression attached to the KEEN Entrepreneurial Mindset. This is most likely the result of the uncertainty that, until very recently, has surrounded both the mindset itself and the assessment of mindset-related outcomes. Additionally, questions abound as to whether the mindset is something we can effectively impact within the bounds of a single year, or even within the entire scope of a college program.
The other body would function as the name “working group” implies, fulfilling the second set of goals. This group would be composed not of campus representatives but of identified champions from across the network, wherever they are to be found, with stated interests in propelling assessment forward. Meetings would be more regular and most often in person, though perhaps also more distributed as subgroups are identified to tackle specific assessment goals. Goals for the new working group and its subgroups might be organized loosely along the five dimensions outlined above.
An Organizing Deliverable: The KEEN Assessment Handbook