I've been exploring word2vec - a model that represents each word as a vector of 50 or so numbers (learned dimensions rather than hand-labelled categories), all generated from machine-learning analysis of massive corpora. I found a fun website where you can conduct your own analyses of these word embeddings: https://lamyiowce.github.io/word2viz/ It gives some default choices to investigate (such as jobs by gender), but also allows you to set up your own analyses.

To investigate sexism in language use, I first chose 'Empty' for 'What do you want to see?', then set the X-axis as 'good'-'bad' and the Y-axis as 'he'-'she'. I then put in pairs of words representing stereotypically male and female jobs (actor - actress, waiter - waitress, priest - priestess, monk - nun, manager - receptionist, pimp - prostitute). In all of these cases, as we would expect, the stereotypically female job appeared higher in the graph (more associated with 'she'). What I found really worrying is that all of the stereotypically female jobs were also further to the right (more associated with 'bad'). The only pair I could find where the female job came out 'better' than the male one is doctor - nurse.
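If you want to reproduce (or extend) this outside the website, here is a minimal sketch in Python using gensim's pretrained embeddings. A caveat on assumptions: word2viz may use different vectors and a different projection, so the numbers won't match the site exactly; the idea is just to project each word onto a 'good'-'bad' axis and a 'he'-'she' axis, each built from the difference between the two endpoint vectors.

```python
import gensim.downloader as api
import numpy as np

# Assumption: the exact vectors behind word2viz may differ; the 50-dimensional
# GloVe embeddings bundled with gensim are used here as a convenient stand-in.
vectors = api.load("glove-wiki-gigaword-50")

def axis(pos, neg):
    """Unit vector pointing from `neg` towards `pos` in embedding space."""
    diff = vectors[pos] - vectors[neg]
    return diff / np.linalg.norm(diff)

# The two axes from the analysis above: x runs good -> bad, y runs he -> she.
x_axis = axis("bad", "good")
y_axis = axis("she", "he")

pairs = [("actor", "actress"), ("waiter", "waitress"),
         ("priest", "priestess"), ("monk", "nun"),
         ("manager", "receptionist"), ("doctor", "nurse")]

for male_job, female_job in pairs:
    for word in (male_job, female_job):
        if word not in vectors:  # rarer words may be missing from the vocab
            continue
        v = vectors[word]
        # Projection onto each axis: a higher 'bad' score means closer to
        # 'bad' than 'good'; a higher 'she' score means closer to 'she'.
        print(f"{word:>13}  bad={v @ x_axis:+.3f}  she={v @ y_axis:+.3f}")
```

Trying other pairs is just a matter of editing the `pairs` list, which makes it easy to check whether the pattern holds more broadly.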
From a critical perspective, the argument that positions of power are more likely to be associated with males than females (e.g. manager - receptionist) has been made numerous times, but I'm not aware of any research suggesting that female jobs are viewed more pejoratively than male jobs. If you put in other pairs, does this pattern hold? Is it worth doing research on this?
Any ideas for other fun ways of exploiting this tool?