Accreditation and Assessment | Salem State University

Accreditation

The Council on Social Work Education (CSWE) Educational Policy and Accreditation Standards (EPAS) promote academic excellence in baccalaureate and master’s social work education. EPAS describe four features of an integrated curriculum design:

  • Program Mission and Goals
  • Explicit Curriculum
  • Implicit Curriculum
  • Assessment

Assessment

  • Assessment Methodology

    Salem State’s Bachelor of Social Work (BSW) and Master of Social Work (MSW) programs both collect assessment data to help determine whether our students are achieving benchmark scores in the various competency areas for social work practice. Our approach was modeled on Stephen Holloway’s (2013) Some Suggestions on Educational Program Assessment and Continuous Improvement for the 2008 EPAS, although our rate calculation methods differ from the approach described there. The assessment process is conducted by the School of Social Work’s Assessment Task Force (ATF), whose members are drawn from our BSW and MSW programs.

    The present report details the data collection and data consideration processes we engaged in between fall 2015 and spring 2017. Data collection took place in the spring semesters of 2016 and 2017, covering academic years 2015-2016 and 2016-2017, respectively. To fully digest these data, ongoing data consideration by all faculty members and staff began in fall 2016.

    Description of Data Collection Process

    The School of Social Work’s ATF set out to develop an assessment process embedded in the school’s regular work in order to foster a culture of continuous quality improvement. Each April, the ATF engages in a two-part data collection effort designed to yield two measures per competency. First, using an automated data collection system, the school’s Department of Field Education collects data from field instructors about their students. Field instructors are given guidance on how to apply the scoring approach when evaluating their student interns. For this portion of the assessment process, we used the 2015 EPAS because of our field education department’s commitment to use that version along with our sister schools in New England.

    Second, using an online survey, the BSW and MSW Program Coordinators facilitate data collection from all graduating students. For this portion of the assessment process, we used the 2008 EPAS because our existing data collection system from the previous year was built around that version, and we needed year-to-year comparability. An additional set of survey questions focused on learning more about students’ experiences with the explicit and implicit curriculum. In assessing the competencies, students were asked to rate themselves on the practice behaviors as of the time they completed the survey, at the end of their senior year as their field internships were finishing. Collection of the student survey data is usually achieved by designating time for survey completion in BSW and MSW students’ field seminars.

    Through this dual approach, two measures are gathered for each competency, one of which is based on the demonstration of the competency in a real practice situation. Each competency score is derived by averaging the related practice behavior scores, all of which are weighted equally.
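
    As a minimal illustration of the averaging step, the sketch below shows how a competency score can be computed as the unweighted mean of its related practice behavior scores. The competency labels, the number of practice behaviors, and the scores are hypothetical examples, not actual program data.

        # Hypothetical sketch of the competency score calculation.
        # Labels and scores are illustrative, not actual program data.
        practice_behavior_scores = {
            "Competency 1": [3.5, 4.0, 3.0],  # scores on three related practice behaviors
            "Competency 2": [2.5, 3.5],
        }

        # Each competency score is the unweighted mean of its related scores.
        competency_scores = {
            competency: sum(scores) / len(scores)
            for competency, scores in practice_behavior_scores.items()
        }
        # -> {"Competency 1": 3.5, "Competency 2": 3.0}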

    Determination of Benchmarks

    Benchmarks are determined for each competency by the faculty members of the BSW program along with members of the Department of Field Education. These stakeholders set the benchmark for each measure at 3.00, a decision based on our previous, lower benchmark and on the curricular changes that had been made to address low scores in previous years. The BSW and MSW program faculty and field education staff determined that a goal of 80% of students meeting the benchmark was ideal. To determine whether a student’s performance meets the benchmark, field instructors rate students’ performance in the field, and students rate their own competencies in practice through self-assessment. Data are analyzed by sorting in a spreadsheet to determine the percentage of students who met or exceeded the benchmark score (see Table 1, which includes an example of how the calculations are performed). We departed from Holloway’s (2013) approach because his suggested calculation method was mathematically incorrect.
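
    The percentage calculation can be sketched as follows. The student scores shown are hypothetical; the 3.00 benchmark and the 80% goal are the values described above.

        # Hypothetical sketch of the benchmark-attainment calculation.
        # Student scores are illustrative, not actual program data.
        BENCHMARK = 3.00
        GOAL = 0.80

        student_scores = [3.4, 2.9, 3.1, 3.8, 2.7]  # one competency score per student

        met = sum(1 for score in student_scores if score >= BENCHMARK)
        attainment_rate = met / len(student_scores)  # 3 of 5 students -> 0.60

        print(f"{attainment_rate:.0%} of students met the benchmark (goal: {GOAL:.0%})")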
