Authors
Magdalena Wischnewski
Publication date
2024/4/16
Publisher
OSF
Description
Development and validation of a new scale to measure trust in AI-powered systems

Second Assessment Wave: Data Collection and Analysis

Three vignettes will be designed, based in part on the pre-trial vignettes but more elaborate. We aim for a sample of N = 600 participants per vignette. We determined this target sample size using rules of thumb for the most complex model we plan to fit to the data. Our goal is to avoid missing values via strict no-skipping rules in the survey implementation; we plan to carry out a complete-case analysis.

The first step is to investigate whether we can assume measurement invariance (MI). If we can, we will fit one model to item scores pooled across the three vignettes; if we cannot, we will fit three separate models, one per vignette. We will assess MI sequentially, starting with configural invariance and then moving from weak to strong to strict invariance. We will not investigate partial MI. We will use the single items as indicators.

We plan to conduct the following analyses in R with the packages psych, lavaan, and semTools. We will use the WLSMV estimator for ordered items in lavaan and, correspondingly, robust versions of all test statistics and fit measures.

Item-wise description (per vignette):
• Item-wise distributions with bar plots
• Item-specific means and standard deviations
• Compare with the description from the first-wave assessment

Correlation table (per vignette):
• Compute correlations between all items
• Compare with the description from the first-wave assessment

Per vignette, for the trust …
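
As a minimal sketch of the per-vignette descriptives and correlation table, assuming a long-format data frame `dat` with a grouping variable `vignette` and numerically coded ordinal items `trust1`–`trust10` (hypothetical names, not the final item set), the item-wise description could look as follows:

```r
# Sketch only: `dat`, `vignette`, and the trust1–trust10 item names are assumptions.
library(psych)

items <- paste0("trust", 1:10)

for (v in unique(dat$vignette)) {
  d <- dat[dat$vignette == v, items]

  # Item-specific means and standard deviations
  print(describe(d))

  # Item-wise distributions as bar plots
  for (item in items) {
    barplot(table(d[[item]]), main = paste(item, "-", v))
  }

  # Correlations between all items (Spearman, given the ordered response scales)
  print(round(cor(d, method = "spearman"), 2))
}
```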
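The sequential MI assessment across the three vignettes could be set up in lavaan roughly as below; this is a sketch under the same assumed `dat` and `items`, with a single-factor structure as a placeholder rather than the final measurement model:

```r
# Sketch only: the one-factor model and variable names are placeholders.
library(lavaan)

model <- paste("trust =~", paste(items, collapse = " + "))

# Configural model: same structure in all vignettes, no equality constraints
fit_configural <- cfa(model, data = dat, group = "vignette",
                      ordered = items, estimator = "WLSMV")

# Weak (metric) invariance: equal loadings
fit_weak <- cfa(model, data = dat, group = "vignette",
                ordered = items, estimator = "WLSMV",
                group.equal = "loadings")

# Strong (scalar) invariance: equal loadings and thresholds
fit_strong <- cfa(model, data = dat, group = "vignette",
                  ordered = items, estimator = "WLSMV",
                  group.equal = c("loadings", "thresholds"))

# Strict invariance additionally constrains residual variances; with ordered
# items this requires the theta parameterization, and the identification
# constraints can be generated with semTools::measEq.syntax().

# Robust (scaled) fit measures and scaled chi-square difference tests
fitMeasures(fit_strong, c("chisq.scaled", "df.scaled",
                          "cfi.scaled", "rmsea.scaled", "srmr"))
lavTestLRT(fit_configural, fit_weak, fit_strong)
```

With ordered indicators, the exact constraint sequence and identification choices differ somewhat between conventions (e.g., the Wu & Estabrook, 2016, approach implemented in semTools::measEq.syntax), so the calls above illustrate the general workflow rather than the final specification.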