In this session, we are going to look into something that is an integral part of structural equation modeling: factor loadings and fit statistics. Now, what are factor loadings? The factor loadings in CFA, that is, confirmatory factor analysis, estimate the direct effects of the unobservable constructs on the indicators, represented by the arrows flowing from the unobservable construct to the indicators. For example, if job satisfaction is your unobservable construct, measured using five indicators, the factor loadings help you estimate the effect of job satisfaction on those five indicators, and they help you measure how well each item represents the underlying construct. While unstandardized estimates can be insightful, they are rarely reported in the results of a CFA. Standardized estimates are most frequently reported because they allow you to compare the weights of indicators across a CFA. Standardizing an estimate converts your factor loadings to a zero-to-one scale: your factor loadings will now range between zero and one, which allows for an easier comparison of which indicator represents the underlying construct better.
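To make this concrete, here is a minimal sketch of how such a measurement model could be specified in Python, assuming the semopy package and its lavaan-style model syntax; the construct name JobSat, the indicator names js1 to js5, and the file job_satisfaction.csv are hypothetical.

```python
# Minimal CFA sketch, assuming the semopy package is installed.
# JobSat and js1..js5 are hypothetical names for the construct and its indicators.
import pandas as pd
import semopy

# One latent construct measured by five observed indicators.
model_desc = "JobSat =~ js1 + js2 + js3 + js4 + js5"

# Assumed: a data file with one column per indicator.
df = pd.read_csv("job_satisfaction.csv")

model = semopy.Model(model_desc)
model.fit(df)

# inspect() returns the parameter estimates, including the factor loadings.
print(model.inspect())
```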
Additionally, squaring the standardized factor loading gives you the proportion of explained variance for each indicator, which tells you how much variance in the indicator is explained by the unobserved construct. For example, if the standardized factor loading is 0.80, then the unobserved variable explains 0.64, or 64 percent, of the variance in the indicator.
So how do I know if I have got an acceptable indicator? If you have a standardized factor loading greater than 0.70, or the construct explains at least half of the variance in the indicator (an R-squared of at least 0.50), then your indicator is providing value in explaining the underlying construct, and you should retain the indicator. But what if it is not explaining at least half of the variance? If your indicator is not explaining half of the variance, that indicator is contributing little to the understanding of the unobservable construct, and you might consider deleting it; but there are certain conditions that must be followed, and we will be talking about them later. Once we have determined the standardized value for each factor loading, we can also determine the measurement error of each indicator. The measurement error for each indicator is simply 1 minus R-squared, so the lower the explained variance in an indicator, the higher the measurement error, and vice versa.
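To illustrate these calculations, here is a small Python sketch; the indicator names and standardized loadings below are hypothetical values, not output from a fitted model.

```python
# Illustrative calculation of explained variance, measurement error, and the
# retention heuristic from standardized loadings. The loadings are hypothetical.
std_loadings = {"js1": 0.80, "js2": 0.72, "js3": 0.65, "js4": 0.91, "js5": 0.48}

for indicator, loading in std_loadings.items():
    r_squared = loading ** 2          # variance explained by the construct
    error = 1 - r_squared             # measurement error of the indicator
    keep = loading > 0.70 or r_squared >= 0.50
    print(f"{indicator}: loading={loading:.2f}, R^2={r_squared:.2f}, "
          f"error={error:.2f}, retain={keep}")
```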
Now, what about setting the metric? In SEM, each unobserved variable must be assigned a metric, which is a measurement scale. This is done by constraining one of the factor loadings from the unobservable variable by assigning it a value of one; this is what we refer to as a fixed parameter, and the remaining loadings are free to be estimated. The factor loading that is set to 1 acts as a reference point, or scale, for the other indicators to be estimated. This process is called setting the metric, and the indicator constrained to 1 is often referred to as the reference indicator. So which indicator should I constrain to one? Typically there is no strong reason to prefer one over another, and many researchers simply constrain the first indicator of each construct to one. If you fail to set the metric, that is, to constrain one of your indicators, the analysis of your SEM model will not run and will give you an error message that the model is unidentified. In order to get the standardized loadings, you need to constrain one of the indicators of the unobserved variable, or factor, so that it acts as a reference point for the other indicators to be estimated; your standardized loadings then fall between 0 and 1. Lastly, if you are analyzing and comparing multiple samples, make sure that the same indicator is constrained to 1 for each sample. For example, if I have two models being compared between males and females, I have to have the same indicator in each group constrained to one. This is particularly important in covariance-based SEM.
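Here is a hedged sketch of what this could look like in code, assuming semopy accepts lavaan-style fixed loadings written as 1*js1 (the group column gender and its values are hypothetical); the point is simply that the same reference indicator is fixed to 1 in every group.

```python
import pandas as pd
import semopy

# js1 is the reference indicator: its loading is fixed to 1 for every group.
# This assumes semopy supports lavaan-style numeric prefixes for fixed loadings.
model_desc = "JobSat =~ 1*js1 + js2 + js3 + js4 + js5"

df = pd.read_csv("job_satisfaction.csv")

# Fit the same specification to each subsample so the loadings share one metric.
for group in ("male", "female"):
    subsample = df[df["gender"] == group]
    model = semopy.Model(model_desc)
    model.fit(subsample)
    print(group)
    print(model.inspect())
```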
Now, let us move on to model fit and fit statistics. One of the advantages of SEM is that you can assess whether your model fits the data, or specifically the observed covariance matrix. The term model fit denotes that your specified model, estimated based on the covariance matrix, is a close representation of the observed covariance matrix of the data, whereas a bad fit, on the other hand, indicates that the data are contrary to the specified model, that is, your data are not fitting the model well. The purpose of the model fit test is to understand how the total structure of the model fits the data, so if your data are not fitting the model, then there are problems. A good model fit does not mean that every particular part of the model fits well.
Again, the test of model fit looks at the overall model compared to the data. One caution with assessing model fit is that a model with fewer indicators per factor will have a higher apparent fit than a model with more indicators per factor. So if you have more indicators, you might have a problem with your model fit; model fit coefficients reward parsimony, which is one of the hallmarks of scientific research. Thus, if you have a complex model, you will find it more difficult to achieve a good model fit compared to a simplistic model. The AMOS software will give you a plethora of model fit statistics.
There are more than 20 different model fit tests, but only the prominent ones seen in most research are discussed here, and one of the most important is the model chi-square test. The chi-square test is also called the chi-square goodness of fit, but in reality chi-square is a badness-of-fit measure. The chi-square value should not be significant if there is a good model fit; significance means your model covariance structure is significantly different from the observed covariance matrix of the data. If the chi-square p-value is less than 0.05, then your model is considered to be ill-fitting. However, there is one problem: chi-square is very sensitive to sample size. When you have a large sample, even tiny differences between the observed model and the perfect model fit may be detected as significant. So a better option with chi-square is to use the relative chi-square, which is the chi-square value divided by the degrees of freedom, thus making it less dependent on the sample size. Kline (2011) suggested a cutoff for the relative chi-square in the range of three to five; if your value falls below that cutoff, you can say your model is a good fit.
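A minimal sketch of the relative chi-square calculation in Python; the chi-square value and degrees of freedom below are hypothetical.

```python
# Relative (normed) chi-square: model chi-square divided by its degrees of freedom.
# The values below are hypothetical.
chi_square = 187.4
degrees_of_freedom = 62

relative_chi_square = chi_square / degrees_of_freedom
print(f"chi-square/df = {relative_chi_square:.2f}")

# A common rule of thumb uses a cutoff somewhere in the 3-to-5 range.
if relative_chi_square <= 3:
    print("Good fit by the stricter cutoff")
elif relative_chi_square <= 5:
    print("Acceptable fit by the more lenient cutoff")
else:
    print("Poor fit by the relative chi-square criterion")
```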
Now, many of the remaining fit statistics require a comparison against a null model. The default null model in AMOS constrains the correlations among observed variables to zero, implying that the latent variables are also uncorrelated. So what are the fit indices? The Comparative Fit Index (CFI) ranges between zero and one; a value higher than 0.90 shows a good model fit, and since it is not affected by sample size, it is a recommended fit statistic to report. The Incremental Fit Index (IFI) should also be greater than 0.90. For the Normed Fit Index (NFI), an acceptable value is over 0.90. The Tucker-Lewis Index (TLI) should be greater than 0.90 as well. Next is the root mean square error of approximation (RMSEA), a critical model fit index. This is a badness-of-fit test where values close to 0 show the best fit: a good model fit is present if RMSEA is less than 0.05, an adequate fit if it is less than 0.08, and values over 0.10 denote a poor fit. The standardized root mean square residual (SRMR), like the RMSEA, is again a badness-of-fit test in which a bigger value shows a worse fit. An SRMR of 0.05 and below is considered a good fit, a value between 0.05 and 0.09 is considered an adequate fit, and anything over that is obviously considered a bad fit.
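The following Python sketch pulls the cutoffs just discussed into one place; the fit-index values are hypothetical, and the RMSEA line uses the standard single-group formula computed from the chi-square, its degrees of freedom, and the sample size.

```python
import math

# Hypothetical single-group results.
chi_square, dof, n = 187.4, 62, 350

# Standard single-group RMSEA formula from chi-square, df, and sample size N.
rmsea = math.sqrt(max(chi_square - dof, 0.0) / (dof * (n - 1)))

# Hypothetical incremental fit indices and SRMR, as they might appear in output.
fit = {"CFI": 0.94, "IFI": 0.93, "NFI": 0.91, "TLI": 0.92, "SRMR": 0.06, "RMSEA": rmsea}

for index, value in fit.items():
    if index in ("CFI", "IFI", "NFI", "TLI"):
        verdict = "acceptable (> 0.90)" if value > 0.90 else "poor"
    elif index == "RMSEA":
        if value < 0.05:
            verdict = "good"
        elif value < 0.08:
            verdict = "adequate"
        elif value <= 0.10:
            verdict = "mediocre"
        else:
            verdict = "poor"
    else:  # SRMR
        verdict = "good" if value <= 0.05 else "adequate" if value <= 0.09 else "poor"
    print(f"{index} = {value:.3f} -> {verdict}")
```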
How do you run this in SPSS? There is a plugin available, and we will be looking at it in detail soon. So what values indicate a good fit? There is no shortage of controversy as to what is considered good fit. Bentler and Bonett (1980) is the most widely cited research encouraging researchers to pursue model fit statistics greater than 0.90. That rule of thumb became widely accepted, even though researchers such as Hu and Bentler (1999) argued that the 0.90 criterion was too liberal and that fit indices needed to be greater than 0.95. So we have references for both thresholds, and we can use either depending on our model and our study. Subsequently, Marsh and colleagues (2004) argued against the rigorous Hu and Bentler criteria in favor of using multiple indices chosen based on sample size, estimators, and distribution. Hence, there is no golden rule that universally holds as far as model fit is concerned.
The criteria outlined in this section are based on existing literature and provide guidance on what an acceptable model fit to the data is. Even if the researcher exceeds the 0.90 threshold on a fit index, one should use caution in claiming a good fit: Kline (2011) notes that even if a model is deemed to have acceptable model fit, it does not mean that it is correctly specified, so a good-fitting model can still explain the relationships in the model poorly. Still, there are certain criteria, certain rules of thumb, that may be followed. Now, what are modification indices? What if you are not getting a good fit, how do you solve that problem? One of the solutions is modification indices. Modification indices are part of the analysis output that suggest model alterations to achieve a better fit to the data. Making changes via modification indices should be done very carefully, and you should have justification. In AMOS, modification indices are concerned with adding additional covariances between the error terms of a single construct. AMOS will give you the modification indices and guide you: it will tell you which covariances should be drawn between which error terms, but there is a criterion. The modification indices output has a column labelled MI, and you use that output to draw covariances between error terms; in AMOS the default threshold is 4, and any potential modification below this value is not presented.
AMOS can give you many modification indices, but what matters is that the modification index is at least 3.84, because a change of that size will make a meaningful difference towards attaining a good fit. Now, there are certain dos and don'ts that must be followed. Drawing a covariance between error terms belonging to two different constructs is not allowed; for example, a covariance between e3 and e5 is not allowed here because e3 is from one construct and e5 is from the other. You also cannot draw a covariance between an error term and your unobservable construct. You can only draw covariances between error terms of a single construct; that is allowed, and you can do this to improve the model fit, as the sketch below illustrates.
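As an illustration of this screening logic (this is not AMOS itself, just a sketch), here is a small Python example; the modification-index values, error-term names, and construct assignments are hypothetical.

```python
# Illustrative sketch of screening modification indices (not AMOS output).
# The MI values, error-term names, and construct assignments below are hypothetical.

# Which construct each indicator's error term belongs to.
construct_of = {
    "e1": "JobSat", "e2": "JobSat", "e3": "JobSat",
    "e4": "Commitment", "e5": "Commitment",
}

# Suggested error covariances with their modification index (MI) values.
suggestions = [
    ("e1", "e2", 12.4),
    ("e2", "e3", 5.1),
    ("e3", "e5", 9.7),   # crosses constructs, should be rejected
    ("e4", "e5", 2.2),   # below threshold, should be rejected
]

THRESHOLD = 3.84  # minimum MI worth considering (AMOS presents MIs of 4 or more by default)

for a, b, mi in suggestions:
    same_construct = construct_of[a] == construct_of[b]
    if mi >= THRESHOLD and same_construct:
        print(f"Consider covarying {a} and {b} (MI = {mi})")
    else:
        print(f"Skip {a}-{b} (MI = {mi}, same construct = {same_construct})")
```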
Again, these are the references that can help you further understand these concepts. Thank you very much.