Effectiveness of career choice interventions: A meta-analytic replication and extension
- Document type: Journal article
- Language: English
- Publisher: Elsevier
- Publication year: 2017
Details
Related fields: Management
Journal: Journal of Vocational Behavior
Affiliation: Department of Counseling and Educational Psychology, Indiana, United States
Publisher: Elsevier
Description
1. Introduction

Brown (2015) argued that the field of vocational psychology still has a way to go in establishing the empirical efficacy of career counseling interventions. When examining the effectiveness of interventions, one often looks to meta-analyses, and a number of meta-analyses have been conducted on career interventions (e.g., Brown & Ryan Krane, 2000; Oliver & Spokane, 1988; Spokane & Oliver, 1983; Ryan, 1999; Whiston, Sexton, & Lasoff, 1998). Oliver and Spokane (1988) extended the meta-analysis conducted by Spokane and Oliver (1983) to include studies published from 1950 through 1982 and found an average effect size of 0.82 using the Glassian (delta) method. Updating this meta-analysis, Whiston et al. (1998) used more sophisticated meta-analytic techniques (i.e., weighting effect sizes by the sample size and inverse variance) and found a weighted mean effect size of 0.30 using studies published between 1983 and 1995. Both Whiston et al. and Oliver and Spokane included a broad array of career-related outcomes.

Later, Brown and Ryan Krane (2000) extended the series of meta-analyses conducted by Ryan (1999), which focused on career choice outcomes (e.g., congruence, vocational identity, career maturity, and career decision-making self-efficacy). Ryan included all relevant studies from Oliver and Spokane as well as studies published between 1983 and 1997. Although Ryan conducted six separate meta-analyses based on specific outcomes, Brown and Ryan Krane averaged the effect sizes across outcome categories and reported a weighted mean effect size of 0.34. Brown and Ryan Krane used a system similar to Whiston et al. for calculating effect sizes. Although there is consistency between the overall effect sizes found by Brown and Ryan Krane (i.e., 0.34) and Whiston et al. (i.e., 0.30), these vary from the overall effect size of 0.82 found by Oliver and Spokane (1988). Although both Brown and Ryan Krane and Whiston et al.
used more sophisticated methods, which may partially explain the discrepant findings, this variation in average effect sizes means there is still a need for further exploration of the effectiveness of career interventions. There is also a need for another meta-analysis of career interventions because the most recent meta-analysis (i.e., Brown & Ryan Krane, 2000) is more than 16 years old, and there has not been a meta-analysis of career choice interventions conducted since that time. There has been a recent meta-analysis of job search interventions (i.e., Liu, Huang, & Wang, 2014), which found that the odds of obtaining employment were 2.67 times higher for job seekers who participated in a job search intervention than for those in the control group. Whereas Whiston et al. (1998) combined job search and career choice interventions, separating these approaches to career counseling provides practitioners with more detailed information. Thus, there is a need for another meta-analysis of career choice interventions that includes more recent research.

Another reason for an additional meta-analysis of career choice interventions is that, although both Brown and Ryan Krane (2000) and Whiston et al. (1998) used more sophisticated meta-analytic procedures, both utilized fixed-effect models rather than random-effects models. Whereas fixed-effect models were more common in the past, random-effects models are increasingly popular due to the generalizations that can be made from random-effects results (Field & Gillett, 2010; Hedges, 2009). With a fixed-effect model, it is assumed that the participants from every study come from the same population. As a result, if the population parameters actually differ across studies, the probability of making a Type I error can increase beyond the accepted alpha value of 0.05 (Hunter & Schmidt, 2000).
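The inverse-variance weighting described above (the basis of fixed-effect pooling, as used by Whiston et al.) can be sketched as follows. The effect sizes and variances here are hypothetical illustrations, not data from any of the cited meta-analyses:

```python
# Hypothetical per-study standardized mean differences (d) and their
# sampling variances; values are illustrative only.
effects = [0.25, 0.40, 0.10, 0.55]
variances = [0.02, 0.05, 0.01, 0.08]

# Fixed-effect pooling: weight each study by the inverse of its
# sampling variance, so more precise studies count more.
weights = [1.0 / v for v in variances]
pooled = sum(w * d for w, d in zip(weights, effects)) / sum(weights)

# Standard error of the pooled estimate under the fixed-effect model.
se = (1.0 / sum(weights)) ** 0.5
print(round(pooled, 3), round(se, 3))
```

Because the model assumes all studies share one true effect, the only uncertainty reflected in `se` is within-study sampling error.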
Hunter and Schmidt estimated that utilizing a fixed-effect model rather than a random-effects model might increase the alpha rate from 5% to 11–28%. Therefore, when utilizing a fixed-effect model, one should only draw conclusions about the sample of studies included in the meta-analysis. With a random-effects model, the researcher recognizes that the samples from studies might come from different populations. The main difference between the two models is derived from sources of error (Field & Gillett, 2010). For a fixed-effect model, there is only one source of error – sampling error. For each study, the theory is that a representative sample of participants is chosen from one population. Results from this sample can only provide an estimate of the population parameter, and thus error is introduced. In contrast, a random-effects model takes into account that the studies’ samples might come from multiple populations by introducing a second sampling error term. Results from the populations represented in the sample of studies only provide an estimate of the ‘superpopulation’s’ parameter (Hedges, 1992). Thus, because of the statistical techniques involved in random-effects models, it is possible to generalize to other studies or situations that could have been studied (Hedges, 2009).

Hedges (2009) recommended a random-effects model when the intent of the meta-analysis is to inform public policy or to generalize to situations that have not been explicitly studied. Thus, the meta-analyses presented in this manuscript utilized a random-effects model.
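The second error term that distinguishes a random-effects model can be made concrete with the commonly used DerSimonian-Laird estimator of the between-study variance (tau²). This is one standard way of fitting a random-effects model, not necessarily the exact procedure used in the manuscript, and the study values are hypothetical:

```python
# Hypothetical per-study effect sizes and sampling variances
# (illustrative values, not data from the cited meta-analyses).
effects = [0.25, 0.40, 0.10, 0.55]
variances = [0.02, 0.05, 0.01, 0.08]
k = len(effects)

# Fixed-effect quantities, needed for the heterogeneity statistic Q.
w = [1.0 / v for v in variances]
fixed_mean = sum(wi * d for wi, d in zip(w, effects)) / sum(w)
Q = sum(wi * (d - fixed_mean) ** 2 for wi, d in zip(w, effects))

# DerSimonian-Laird estimate of tau^2, the between-study variance:
# the second source of error a random-effects model introduces.
c = sum(w) - sum(wi ** 2 for wi in w) / sum(w)
tau2 = max(0.0, (Q - (k - 1)) / c)

# Random-effects weights add tau^2 to each study's sampling variance,
# so extreme studies are down-weighted less than under the fixed model.
w_star = [1.0 / (v + tau2) for v in variances]
re_mean = sum(wi * d for wi, d in zip(w_star, effects)) / sum(w_star)
print(round(tau2, 4), round(re_mean, 3))
```

When tau² is zero the two models coincide; as heterogeneity grows, the random-effects weights flatten and the pooled estimate generalizes to the 'superpopulation' of studies rather than to the sampled studies alone.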