Comparison of face‐based and voice‐based first impressions in a Chinese sample

Abstract

People often form first impressions of others based on face and/or voice cues. This study compared the first impressions formed from these two types of cue. First, we compared free descriptions based on face and voice cues and found differences in the content and frequency of the personality words used. We then compiled three wordlists for evaluating face-based and voice-based first impressions separately or simultaneously. Second, using these wordlists, we compared face-based and voice-based first impression ratings and found that both showed significant intra-rater and inter-rater reliability. However, using the mean of the actors' self-ratings and their acquaintances' ratings as the validity criterion, only the ratings of the 'ingenuous' and 'mature' traits in the face-based evaluation correlated significantly with the criterion. Factor analysis revealed that face-based first impressions comprised the dimensions of capability and approachability, whereas voice-based first impressions comprised capability, approachability, and reliability. These findings indicate that stable first impressions can be formed from either face or voice cues, although the specific composition of the impression varies between cues. The results also provide a foundation for studying first impressions formed through the integrated perception of voice and face cues.

Can People With Higher Versus Lower Scores on Impression Management or Self-Monitoring Be Identified Through Different Traces Under Faking?

Educational and Psychological Measurement, Ahead of Print.
According to faking models, personality variables and faking are related. Most prominently, people's tendency to make an appropriate impression (impression management; IM) and their tendency to adjust the impression they make (self-monitoring; SM) have been suggested to be associated with faking. Nevertheless, empirical findings connecting these personality variables to faking have been contradictory, partly because different studies have had individuals fake different tests and in different directions (faking low vs. high scores). Importantly, whereas past research has examined faking through test scores, recent advances suggest that the faking process can be better understood by analyzing individuals' responses at the item level (response patterns). Using machine learning (elastic net and random forest regression), we reanalyzed a data set (N = 260) to investigate whether individuals' faked response patterns on extraversion items (features; i.e., input variables) could reveal their IM and SM scores. We found that individuals showed similar response patterns when they faked, irrespective of their IM scores (except for the faking of high scores when random forest regression was used). Elastic net and random forest regression converged in showing that individuals higher on SM differed from individuals lower on SM in how they faked. Thus, response patterns revealed individuals' SM, but not their IM. Feature importance analyses showed that some items were faked differently by individuals with higher versus lower SM scores, whereas others were faked similarly. Our results imply that analyses of response patterns offer valuable new insights into the faking process.
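The analysis described above can be sketched as follows. This is a hypothetical illustration, not the study's actual pipeline: the data are simulated, the item count and effect structure are assumptions, and only N = 260 is taken from the abstract. It shows the general approach of predicting a continuous personality score (here, SM) from item-level response patterns with elastic net and random forest regression, followed by a feature importance check.

```python
# Hypothetical sketch: can faked item-level response patterns reveal a
# personality trait (self-monitoring; SM)? Synthetic data; scikit-learn
# estimators stand in for the elastic net / random forest analyses.
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.linear_model import ElasticNet
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
n_people, n_items = 260, 12  # N from the abstract; item count is assumed

# Simulate faking that depends on SM: each person's SM score shifts
# their item responses, with a different (assumed) shift per item.
sm = rng.normal(size=n_people)              # self-monitoring scores
item_shift = rng.uniform(0.0, 1.0, n_items)  # per-item faking effect of SM
responses = rng.normal(size=(n_people, n_items)) + np.outer(sm, item_shift)

# Cross-validated R^2: how well do response patterns predict SM?
enet = ElasticNet(alpha=0.1)
rf = RandomForestRegressor(n_estimators=200, random_state=0)
r2_enet = cross_val_score(enet, responses, sm, cv=5, scoring="r2").mean()
r2_rf = cross_val_score(rf, responses, sm, cv=5, scoring="r2").mean()
print(f"elastic net R^2: {r2_enet:.2f}, random forest R^2: {r2_rf:.2f}")

# Feature importance analysis: which items carry the SM signal?
rf.fit(responses, sm)
top_items = np.argsort(rf.feature_importances_)[::-1][:3]
print("most informative items:", top_items)
```

In this simulated setting both models recover SM from the response patterns, mirroring the abstract's finding for SM; running the same pipeline on a trait that does not modulate the responses (as IM did not, in the study) would yield cross-validated R² near zero.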