
How Benchmarks are Calculated

CCSSE benchmarks are groups of conceptually related survey items that focus on institutional practices
and student behaviors that promote student engagement—and that are positively related to student
learning and persistence. The five CCSSE benchmarks are active and collaborative learning, academic
challenge, student effort, student-faculty interaction, and support for learners. Two types of benchmark
scores are included in each college’s data set: raw benchmark scores and standardized benchmark
scores for each respondent.

Standardized benchmark scores are useful for comparing one college to a comparison group of colleges
(e.g., other colleges of a similar size) or the three-year cohort at any one point in time. Standardized
benchmark scores can also be used to determine how well subgroups within the college are doing
relative to other subgroups, such as developmental and non-developmental students.

Raw benchmark scores are the appropriate measures to use for colleges that wish to conduct longitudinal
trend analyses. Standardized benchmark scores are not appropriate for longitudinal analysis as they are
recalculated every year and are based on the distribution of responses for each annual three-year
cohort. Raw benchmark scores, on the other hand, are not affected by fluctuations in the distribution of
national responses from year to year.

The creation of both types of benchmark scores involves reverse coding items where necessary and
converting all responses to the same scale. After these initial steps are taken, raw benchmark scores are
computed for each respondent.

Raw benchmark scores are computed by averaging the rescaled scores of each benchmark's related survey items. Benchmark scores are then standardized around the mean of CCSSE Cohort respondents' scores so that each benchmark has a mean of 50 and a standard deviation of 25; the standardization is weighted by full-time and less than full-time enrollment status. A standard deviation of 25 is used to ensure that over 95% of benchmark scores fall between zero and 100, providing an understandable scale for member colleges. Then, using the raw benchmark scores, standardized benchmark scores are computed for each respondent.

Please note that individual colleges cannot compute standardized benchmark scores as this process can
only be completed using the full three-year cohort data set. However, standardized benchmark and raw
benchmark scores for each student record are included in each college’s data file. College, campus, and
group-level benchmarks can be calculated by computing the average of the individual benchmark scores,
either raw or standardized. The steps used to create the benchmark scores are explained in more detail
below.

Creating Benchmark Scores

1. Reverse code items where necessary.


The first step is to determine which items, if any, need to be reverse coded so that a high score on the item represents a desirable behavior. For example, Item 4e (Frequency: Come to class without completing readings or assignments) is originally coded such that 1 = never and 4 = four or more times. In this case, never should contribute more positively to the benchmark score than coming to class unprepared four or more times. The easiest way to reverse code this item is to use the following formula, which assumes (as is the case for Item 4e) that the item has four response options:

ReverseScore = 5 – OriginalScore.

For Item 4e, the reverse codes would be: CLUNPREP_Rev = 5 – CLUNPREP.

4 = 5 – 1 (4 becomes the value for never)
3 = 5 – 2 (3 becomes the value for once)
2 = 5 – 3 (2 becomes the value for two or three times)
1 = 5 – 4 (1 becomes the value for four or more times)

(NOTE: Item 4e is the only item that is reverse coded on CCSSE.)
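For illustration, here is a minimal Python sketch of the reverse-coding step. The function name and loop are illustrative, not part of the CCSSE data set or the Center's code:

# Reverse code a survey response so the desirable behavior gets the high score.
# Generalizes ReverseScore = 5 - OriginalScore to any number of response options.
def reverse_code(original_score, num_options=4):
    return (num_options + 1) - original_score

# Item 4e (CLUNPREP): 1 = never ... 4 = four or more times
for score in (1, 2, 3, 4):
    print(score, "->", reverse_code(score))   # 1 -> 4, 2 -> 3, 3 -> 2, 4 -> 1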

2. Convert all items to a common scale with a range of 0-1.


After reverse coding item 4e, the next step is to convert all items to a common 0 to 1 (zero to one)
scale. The following formula is used to accomplish this conversion:

RescaledScore = (OriginalScore - 1) / (max_response_value - 1).

Using Item 4e as an example again, where the original variable name is CLUNPREP, the formula would
be:

CLUNPREP_RevRaw = (CLUNPREP_Rev – 1) / 3

0.00 = (1 – 1) / 3
0.33 = (2 – 1) / 3
0.67 = (3 – 1) / 3
1.00 = (4 – 1) / 3

(NOTE: Remember when working with the reverse-coded items to use the reverse-coded variable in this
step.)
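A matching Python sketch of the rescaling formula (illustrative only; for Item 4e it is the reverse-coded value, CLUNPREP_Rev, that gets rescaled):

# RescaledScore = (OriginalScore - 1) / (max_response_value - 1)
def rescale(original_score, max_response_value):
    return (original_score - 1) / (max_response_value - 1)

# Item 4e after reverse coding (four response options):
for rev_score in (1, 2, 3, 4):
    print(round(rescale(rev_score, 4), 2))   # 0.0, 0.33, 0.67, 1.0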

3. Create raw benchmark scores.


Calculation of the raw benchmark scores uses the newly created rescaled (0–1) variables. The raw benchmark scores are created by calculating the average score of the items that compose the benchmark. Using the Active and Collaborative Learning (ACTCOLL) benchmark as an example and the item numbers from the survey as variable names (the variable names are enclosed in square brackets), the formula for computing the raw benchmark score is:

ACTCOLL_Raw = (4a[clquest] + 4b[clpresen] + 4f[classgrp] + 4g[occgrp] + 4h[tutor] +
4i[commproj] + 4r[oocideas]) / 7.
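A short Python sketch of this averaging for one respondent (the rescaled values below are made up; the item names follow the survey):

# Hypothetical rescaled (0-1) Active and Collaborative Learning items.
respondent = {"clquest": 1.00, "clpresen": 0.33, "classgrp": 0.67, "occgrp": 0.00,
              "tutor": 0.00, "commproj": 0.33, "oocideas": 0.67}

actcoll_items = ["clquest", "clpresen", "classgrp", "occgrp",
                 "tutor", "commproj", "oocideas"]

# Raw benchmark score = mean of the rescaled items.
actcoll_raw = sum(respondent[item] for item in actcoll_items) / len(actcoll_items)
print(round(actcoll_raw, 3))   # 0.429 for these made-up values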

4. Compute standardized benchmark scores.


Before explaining this step, it is important to reiterate that standardized benchmark scores cannot be
computed without having the entire cohort data set (all respondents included in the three-year cohort). As
such, this step is only briefly explained.

The Center uses the STANDARD procedure in SAS to create the standardized benchmark scores across the three-year cohort so that the average benchmark is 50 with a standard deviation of 25 at the student record level. To account for the inherent sampling bias, this calculation includes weights, the use of which is explained in the next step.
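The Center performs this step with the STANDARD procedure in SAS; the Python sketch below is only an illustrative equivalent of a weighted standardization to mean 50 and standard deviation 25, and may not reproduce the SAS procedure's exact options or missing-value handling:

import numpy as np

def standardize(raw_scores, weights, target_mean=50.0, target_sd=25.0):
    # Weighted z-scores rescaled so the weighted mean is 50 and the SD is 25.
    raw = np.asarray(raw_scores, dtype=float)
    w = np.asarray(weights, dtype=float)
    mean = np.average(raw, weights=w)                        # weighted cohort mean
    sd = np.sqrt(np.average((raw - mean) ** 2, weights=w))   # weighted cohort SD
    return target_mean + target_sd * (raw - mean) / sd

# Hypothetical raw benchmark scores and IWEIGHT values for a tiny "cohort":
print(standardize([0.2, 0.5, 0.8], [1.2, 0.8, 1.0]))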

5. Compute group-level benchmark scores.


The process explained above creates benchmark scores (raw and standardized) for every respondent in the primary sample. The process for creating group-level (e.g., male and female) benchmark scores is the same for both raw and standardized benchmarks. In most circumstances, the group-level benchmarks are created by calculating the weighted average of a benchmark variable for the members of the group.
Sampling for CCSSE is done at the class level and, as such, full-time students are more likely to be
included in the sample than less than full-time students because full-time students take more classes. To
account for this sampling bias, most analyses, including the computation of group-level benchmark
scores, must incorporate weights so the results are more representative of the actual distribution of

students at a given college. Weighting is not employed when groups are formed based on enrollment
status (less than full-time and full-time).
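A minimal Python sketch of the group-level calculation, a weighted average of the respondent-level scores over the members of a group (the records and weights are made up):

# Each record: (group label, benchmark score, IWEIGHT).
records = [("female", 52.1, 1.3), ("female", 47.5, 0.9),
           ("male", 55.0, 1.1), ("male", 44.2, 0.7)]

def group_benchmark(records, group):
    # Weighted average of the benchmark over respondents in the group.
    members = [(score, w) for g, score, w in records if g == group]
    return sum(score * w for score, w in members) / sum(w for _, w in members)

print(round(group_benchmark(records, "female"), 2))   # 50.22 for these values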

See “When to Use Weights” for a more detailed discussion of using weights in analyzing CCSSE data.

(NOTE: Standardized benchmark scores are not created for oversample respondents.)

Computing the Five CCSSE Benchmark Scores

This section describes in detail the standard process for calculating individual respondent-level benchmark scores, which involves:

1. Reverse coding items (where applicable)
2. Converting scores on benchmark items to a common scale with a range of 0–1 (zero to one)
3. Computing the benchmark score
4. Computing group-level benchmark scores
   a. Raw benchmark scores
   b. Standardized benchmark scores

Active and Collaborative Learning (7 items: 4a, 4b, 4f, 4g, 4h, 4i, and 4r)

The Active and Collaborative Learning benchmark does not include any items that require reverse coding,
so the first step above is not applicable.

The process for converting the original scale for each item to a 0–1 scale is the same as described
above, varying only by the number of response options for any given item. The math for converting each
item is presented below.

Q4a (4-point scale, 1-4): clquest_raw = (clquest – 1) / 3
Q4b (4-point scale, 1-4): clpresen_raw = (clpresen – 1) / 3
Q4f (4-point scale, 1-4): classgrp_raw = (classgrp – 1) / 3
Q4g (4-point scale, 1-4): occgrp_raw = (occgrp – 1) / 3
Q4h (4-point scale, 1-4): tutor_raw = (tutor – 1) / 3
Q4i (4-point scale, 1-4): commproj_raw = (commproj – 1) / 3
Q4r (4-point scale, 1-4): oocideas_raw = (oocideas – 1) / 3

The new rescaled variables can now be used to calculate the raw individual-level benchmark
scores. This is simply a matter of computing the average of the seven rescaled items:

ACTCOLL = (clquest_raw + clpresen_raw + classgrp_raw + occgrp_raw + tutor_raw +
commproj_raw + oocideas_raw) / 7

The final step is creating the raw benchmark score for a given population subgroup. This is accomplished
by computing the weighted average of the raw benchmark score (ACTCOLL) for all respondents in the
subgroup of interest.

The raw benchmark variable for the Active and Collaborative Learning benchmark is ACTCOLL, and the standardized benchmark variable is ACTCOLL_STD. Computation of a population subgroup standardized benchmark score follows the same procedure just described for the raw subgroup benchmark score, substituting ACTCOLL_STD for ACTCOLL.

Student Effort (8 items: 4c, 4d, 4e, 6b, 10a, 13d1, 13e1, and 13h1)

The Student Effort benchmark contains one item (4e) that requires reverse coding. This item is reverse
coded using the following process:

Q4e (4-point scale, 1-4): clunprep_rev = (5 – clunprep)

The process for converting the original scale for each item to a 0–1 scale is the same as described
above, varying only by the number of response options for any given item. The math for converting each
item is presented below.

Q4c (4-point scale, 1-4): rewropap_raw = (rewropap – 1) / 3
Q4d (4-point scale, 1-4): integrat_raw = (integrat – 1) / 3
Q4e (4-point scale, 1-4): clunprep_revraw = (clunprep_rev – 1) / 3
Q6b (5-point scale, 1-5): readown_raw = (readown – 1) / 4
Q10a (6-point scale, 0-5ᵃ): acadpr01_raw = (acadpr01) / 5
Q13d1 (4-point scale, 0-3ᵃ): usetutor_raw = (usetutor) / 3
Q13e1 (4-point scale, 0-3ᵃ): uselab_raw = (uselab) / 3
Q13h1 (4-point scale, 0-3ᵃ): usecomlb_raw = (usecomlb) / 3

ᵃ NOTE: The lowest value on the original scale is zero, so we do not need to subtract 1 from the original scale prior to division to create the 0-1 rescaled variable.

The new rescaled variables can now be used to calculate the raw individual-level benchmark
scores. This is simply a matter of computing the average of the eight rescaled items:

STUEFF = (rewropap_raw + integrat_raw + clunprep_revraw + readown_raw + acadpr01_raw +
usetutor_raw + uselab_raw + usecomlb_raw) / 8

The final step is creating the raw benchmark score for a given population subgroup. This is accomplished
by computing the weighted average of the raw benchmark score (STUEFF) for all respondents in the
subgroup of interest.

The raw benchmark variable for the Student Effort benchmark is STUEFF, and the standardized benchmark variable is STUEFF_STD. Computation of a subgroup standardized benchmark score follows the same procedure just described for the raw subgroup benchmark score, substituting STUEFF_STD for STUEFF.
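Because Student Effort mixes a reverse-coded item, 1-based scales, and 0-based scales, it makes a useful end-to-end Python sketch. The raw responses below are made up; the (minimum, maximum) pairs follow the item scales above:

# Hypothetical raw responses for one respondent.
resp = {"rewropap": 3, "integrat": 2, "clunprep": 1, "readown": 4,
        "acadpr01": 3, "usetutor": 0, "uselab": 2, "usecomlb": 1}

resp["clunprep_rev"] = 5 - resp["clunprep"]   # Step 1: reverse code Item 4e

# (variable, scale minimum, scale maximum) for each Student Effort item.
scales = [("rewropap", 1, 4), ("integrat", 1, 4), ("clunprep_rev", 1, 4),
          ("readown", 1, 5), ("acadpr01", 0, 5), ("usetutor", 0, 3),
          ("uselab", 0, 3), ("usecomlb", 0, 3)]

# Step 2: rescale each item to 0-1; Step 3: average the rescaled items.
rescaled = [(resp[v] - lo) / (hi - lo) for v, lo, hi in scales]
stueff = sum(rescaled) / len(rescaled)
print(round(stueff, 3))   # 0.544 for these made-up values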

Academic Challenge (10 items: 5b, 5c, 5d, 5e, 5f, 6a, 6c, 7, 9a, and 4p)

The Academic Challenge benchmark does not include any items requiring reverse coding, so the first
step above is not applicable.

The process for converting the original scale for each item to a 0–1 scale is the same as described
above, varying only by the number of response options for any given item. The math for converting each
item is presented below.

Q5b (4-point scale, 1-4): analyze_raw = (analyze – 1) / 3
Q5c (4-point scale, 1-4): synthesz_raw = (synthesz – 1) / 3
Q5d (4-point scale, 1-4): evaluate_raw = (evaluate – 1) / 3
Q5e (4-point scale, 1-4): applying_raw = (applying – 1) / 3
Q5f (4-point scale, 1-4): perform_raw = (perform – 1) / 3
Q6a (5-point scale, 1-5): readasgn_raw = (readasgn – 1) / 4
Q6c (5-point scale, 1-5): writeany_raw = (writeany – 1) / 4
Q7 (7-point scale, 1-7): exams_raw = (exams – 1) / 6
Q9a (4-point scale, 1-4): envschol_raw = (envschol – 1) / 3
Q4p (4-point scale, 1-4): workhard_raw = (workhard – 1) / 3

The new rescaled variables can now be used to calculate the raw individual-level benchmark
scores. This is simply a matter of computing the average of the ten rescaled items:

ACCHALL = (analyze_raw + synthesz_raw + evaluate_raw + applying_raw + perform_raw +
readasgn_raw + writeany_raw + exams_raw + envschol_raw + workhard_raw) / 10

The final step is creating the raw benchmark score for a given population subgroup. This is accomplished
by computing the weighted average of the raw benchmark score (ACCHALL) for all respondents in the
subgroup of interest.

The raw benchmark variable for the Academic Challenge benchmark is ACCHALL, and the standardized benchmark variable is ACCHALL_STD. Computation of a population subgroup standardized benchmark score follows the same procedure just described for the raw subgroup benchmark score, substituting ACCHALL_STD for ACCHALL.

Student-Faculty Interaction (6 items: 4k, 4l, 4m, 4n, 4o, and 4q)

The Student-Faculty Interaction benchmark does not include any items requiring reverse coding, so the
first step above is not applicable.

The process for converting the original scale for each item to a 0–1 scale is the same as described
above, varying only by the number of response options for any given item. The math for converting each
item is presented below.

Q4k (4-point scale, 1-4): email_raw = (email – 1) / 3
Q4l (4-point scale, 1-4): facgrade_raw = (facgrade – 1) / 3
Q4m (4-point scale, 1-4): facplans_raw = (facplans – 1) / 3
Q4n (4-point scale, 1-4): facideas_raw = (facideas – 1) / 3
Q4o (4-point scale, 1-4): facfeed_raw = (facfeed – 1) / 3
Q4q (4-point scale, 1-4): facoth_raw = (facoth – 1) / 3

The new rescaled variables can now be used to calculate the raw individual-level benchmark
scores. This is simply a matter of computing the average of the six rescaled items in this scale:

STUFAC = (email_raw + facgrade_raw + facplans_raw + facideas_raw + facfeed_raw + facoth_raw) / 6

The final step is creating the raw benchmark score for a given population subgroup. This is accomplished
by computing the weighted average of the raw benchmark score (STUFAC) for all respondents in the
subgroup of interest.

The raw benchmark variable for the Student-Faculty Interaction benchmark is STUFAC, and the standardized benchmark variable is STUFAC_STD. Computation of a population subgroup standardized benchmark score follows the same procedure just described for the raw subgroup benchmark score, substituting STUFAC_STD for STUFAC.

Support for Learners (7 items: 9b, 9c, 9d, 9e, 9f, 13a1, and 13b1)

The Support for Learners benchmark does not include any items that require reverse coding, so the first
step above is not applicable.

The process for converting the original scale for each item to a 0–1 scale is the same as described
above, varying only by the number of response options for any given item. The math for converting each
item is presented below.

Q9b (4-point scale, 1-4): envsuprt_raw = (envsuprt – 1) / 3
Q9c (4-point scale, 1-4): envdivrs_raw = (envdivrs – 1) / 3
Q9d (4-point scale, 1-4): envnacad_raw = (envnacad – 1) / 3
Q9e (4-point scale, 1-4): envsocal_raw = (envsocal – 1) / 3
Q9f (4-point scale, 1-4): finsupp_raw = (finsupp – 1) / 3
Q13a1 (4-point scale, 0-3ᵃ): useacad_raw = (useacad) / 3
Q13b1 (4-point scale, 0-3ᵃ): usecacou_raw = (usecacou) / 3

ᵃ NOTE: The lowest value on the original scale is zero, so we do not need to subtract 1 from the original scale prior to division to create the 0-1 rescaled variable.

The new rescaled variables can now be used to calculate the raw individual-level benchmark
scores. This is simply a matter of computing the average of the seven rescaled items:

SUPPORT = (envsuprt_raw + envdivrs_raw + envnacad_raw + envsocal_raw + finsupp_raw +
useacad_raw + usecacou_raw) / 7

The final step is creating the raw benchmark score for a given population subgroup. This is accomplished
by computing the weighted average of the raw benchmark score (SUPPORT) for all respondents in the
subgroup of interest.

The raw benchmark variable for the Support for Learners benchmark is SUPPORT, and the standardized benchmark variable is SUPPORT_STD. Computation of a population subgroup standardized benchmark score follows the same procedure just described for the raw subgroup benchmark score, substituting SUPPORT_STD for SUPPORT.
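Since all five benchmarks follow the same rescale-and-average pattern, the arithmetic can be driven by a table of (variable, scale minimum, scale maximum) entries. A condensed, illustrative Python sketch (the item lists follow this document; only Item 4e is reverse coded, which is handled by supplying clunprep_rev in the respondent record):

# (variable, scale minimum, scale maximum) for each benchmark's items.
BENCHMARK_ITEMS = {
    "ACTCOLL": [("clquest", 1, 4), ("clpresen", 1, 4), ("classgrp", 1, 4),
                ("occgrp", 1, 4), ("tutor", 1, 4), ("commproj", 1, 4),
                ("oocideas", 1, 4)],
    "STUEFF": [("rewropap", 1, 4), ("integrat", 1, 4), ("clunprep_rev", 1, 4),
               ("readown", 1, 5), ("acadpr01", 0, 5), ("usetutor", 0, 3),
               ("uselab", 0, 3), ("usecomlb", 0, 3)],
    "ACCHALL": [("analyze", 1, 4), ("synthesz", 1, 4), ("evaluate", 1, 4),
                ("applying", 1, 4), ("perform", 1, 4), ("readasgn", 1, 5),
                ("writeany", 1, 5), ("exams", 1, 7), ("envschol", 1, 4),
                ("workhard", 1, 4)],
    "STUFAC": [("email", 1, 4), ("facgrade", 1, 4), ("facplans", 1, 4),
               ("facideas", 1, 4), ("facfeed", 1, 4), ("facoth", 1, 4)],
    "SUPPORT": [("envsuprt", 1, 4), ("envdivrs", 1, 4), ("envnacad", 1, 4),
                ("envsocal", 1, 4), ("finsupp", 1, 4), ("useacad", 0, 3),
                ("usecacou", 0, 3)],
}

def raw_benchmark(resp, benchmark):
    # Mean of the rescaled (0-1) items; resp maps variable names to responses.
    items = BENCHMARK_ITEMS[benchmark]
    return sum((resp[v] - lo) / (hi - lo) for v, lo, hi in items) / len(items)

A respondent record containing all of a benchmark's variables could then be scored with, for example, raw_benchmark(resp, "STUEFF").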

When to Use Weights

In the CCSSE sampling procedure, students are sampled at the classroom level. As a result, full-time
students, who by definition are enrolled in more classes than less than full-time students, are more likely
to be sampled. To adjust for this sampling bias, CCSSE results are weighted using the most recently
available IPEDS data. College data sets include a variable called IWEIGHT that contains the appropriate
weight for each respondent in the primary sample. This variable is also used in the CCSSE online
reporting system.

Because weights are based on enrollment status, analysis of CCSSE results in which less than full-time
students are in one group and full-time students in another group should not employ weights. Further,
when comparing subgroups broken out by enrollment status (e.g., less than full-time male with less than
full-time female students), weights should not be used. And, when reporting simple demographics (e.g.,
the number of male and female students, number of respondents by race/ethnicity), weights should not
be used.

When comparing all members of one subgroup with members of another subgroup (e.g., all
developmental students with all non-developmental students in which both less than full-time and full-time
students are included in the analysis), weights should be used.
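As an illustration of these rules, a small Python sketch (all records and flags are hypothetical):

# Each record: (enrollment status, developmental flag, benchmark score, IWEIGHT).
records = [("full-time", True, 48.0, 0.8), ("full-time", False, 55.0, 0.9),
           ("part-time", True, 42.0, 1.4), ("part-time", False, 51.0, 1.3)]

def mean(pairs, weighted):
    # pairs: (score, IWEIGHT); with weighted=False every record counts equally.
    return (sum(s * (w if weighted else 1.0) for s, w in pairs)
            / sum(w if weighted else 1.0 for _, w in pairs))

# Developmental vs. non-developmental spans both enrollment statuses: weight it.
dev = mean([(s, w) for _, d, s, w in records if d], weighted=True)

# Full-time vs. part-time groups are defined by enrollment status: do not weight.
ft = mean([(s, w) for e, _, s, w in records if e == "full-time"], weighted=False)

print(round(dev, 2), round(ft, 2))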

As noted above, weights are determined using the most recent publicly available IPEDS data. Because the publicly available IPEDS data at the time the CCSSE data set is created are approximately three years old, they may not accurately reflect a college's current student population. For example, if a college has experienced a significant change in enrollment characteristics during the three years prior to administering CCSSE, the college's institutional research department may want to consider whether the weights based on IPEDS numbers are completely appropriate.

A college may also consider not using weights when the vast majority of its students share one enrollment status. For example, if 92% of students are full-time, a college may want to look at the unweighted results for full-time students to guide many campus decisions.

Published by the Center for Community College Student Engagement. © 2012. Permission granted for unlimited copying with appropriate citation.
