Discrimination and fairness are major concerns in algorithmic models. This is particularly true in insurance, where protected policyholder attributes are not allowed to be used for insurance pricing. Simply disregarding protected policyholder attributes is not an appropriate solution, as this still allows for the possibility of inferring protected attributes from non-protected covariates, leading to the phenomenon of proxy discrimination. Although proxy discrimination is qualitatively different from the group fairness concepts discussed in the machine learning and actuarial literature, group fairness criteria have been proposed to control the impact of protected attributes on the calculation of insurance prices. The purpose of this paper is to discuss the relationship between direct and proxy discrimination in insurance and the most popular group fairness axioms. We provide a technical definition of proxy discrimination and derive incompatibility results, showing that avoiding proxy discrimination does not imply satisfying group fairness, and vice versa. This shows that the two concepts are materially different. Furthermore, we discuss input data pre-processing and model post-processing methods that achieve group fairness in the sense of demographic parity. As these methods induce transformations that explicitly depend on policyholders' protected attributes, it becomes ambiguous whether direct and proxy discrimination is, in fact, avoided.