
Meta's FACET: Paving the Way for Fair AI in Computer Vision

Meta has released a new AI benchmark, FACET, aimed at probing computer vision models for biases. The dataset consists of 32,000 images featuring 50,000 people labeled by human annotators.

Today, Meta introduced a new AI benchmark named FACET, short for "FAirness in Computer Vision EvaluaTion." The dataset consists of 32,000 images, labeled by human annotators, containing 50,000 people engaged in various occupations and activities.

The objective of FACET is to offer "deep" evaluations of biases in computer vision models. It covers not only demographic attributes like perceived age and gender but also classes related to occupations and activities. Meta claims that FACET covers these biases more comprehensively than previous benchmarks.
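Because each annotated person carries both demographic attributes and an occupation or activity class, a benchmark like this can be used to compare a model's performance group by group. The sketch below is a minimal, hypothetical illustration of that kind of disparity check; the file name facet_annotations.csv, its column names, and the model_predict stub are assumptions for illustration, not part of Meta's released tooling.

```python
# Hypothetical sketch: measuring per-group performance gaps on a
# FACET-style annotation file. Column names and file layout are
# assumed; consult the released dataset for the actual schema.
from collections import defaultdict
import csv


def model_predict(image_path):
    # Placeholder for the classifier being audited; replace with a real model.
    return "laborer"


def per_group_accuracy(rows, predict, group_key="perceived_gender"):
    """Compute classification accuracy separately for each annotated group."""
    correct = defaultdict(int)
    total = defaultdict(int)
    for row in rows:
        group = row[group_key]
        pred = predict(row["image_path"])
        total[group] += 1
        correct[group] += int(pred == row["class_label"])
    return {group: correct[group] / total[group] for group in total}


def disparity(acc_by_group):
    """Gap between the best- and worst-served groups."""
    return max(acc_by_group.values()) - min(acc_by_group.values())


if __name__ == "__main__":
    with open("facet_annotations.csv") as f:  # assumed file name
        rows = list(csv.DictReader(f))
    accs = per_group_accuracy(rows, predict=model_predict)
    print(accs, "gap:", disparity(accs))
```

A large gap between groups on the same class is the kind of disparity FACET is designed to surface.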

Meta's track record in responsible AI has been spotty at best. Last year, the company had to pull an AI demo that generated racist and inaccurate scientific literature. Critics say that Meta's AI ethics team has been largely ineffective and that its anti-AI-bias tools are "completely insufficient."

The annotators for FACET were sourced from diverse geographic regions and were paid an hourly wage set per country. However, the fairness and ethics behind the sourcing and payment of these annotators remain unclear, especially given that many annotation firms have been accused of exploiting workers.

In its first test case, FACET was applied to Meta's own DINOv2 algorithm and uncovered several biases, including gender biases. Meta acknowledges these shortcomings and plans to address them in future iterations of its AI models.
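For context, DINOv2 backbones are published through torch.hub, so a person crop from a benchmark image can be embedded and then scored by whatever downstream classifier is being audited. The snippet below is only a sketch of that embedding step, not Meta's evaluation protocol; the image path and the choice of the ViT-S/14 variant are illustrative.

```python
# Sketch: embedding a person crop with DINOv2 before a downstream
# evaluation such as the per-group accuracy check above.
import torch
from PIL import Image
from torchvision import transforms

# Load a small DINOv2 backbone from the facebookresearch/dinov2 hub entry.
model = torch.hub.load("facebookresearch/dinov2", "dinov2_vits14")
model.eval()

preprocess = transforms.Compose([
    transforms.Resize(256),
    transforms.CenterCrop(224),  # 224 is a multiple of the 14-pixel patch size
    transforms.ToTensor(),
    transforms.Normalize(mean=[0.485, 0.456, 0.406],
                         std=[0.229, 0.224, 0.225]),
])

image = Image.open("person_crop.jpg").convert("RGB")  # placeholder path
with torch.no_grad():
    embedding = model(preprocess(image).unsqueeze(0))  # shape: (1, 384) for ViT-S/14
```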

While Meta claims that FACET can be used to probe various models across different demographic attributes, it admits that the dataset is not perfect. The dataset's license also restricts how it can be used: users must agree to apply FACET only for evaluation, testing, and benchmarking, not for training new models.

While FACET appears to be a step in the right direction for evaluating fairness in AI, the effectiveness and impact of this new tool will ultimately depend on how it is used by researchers and practitioners. Meta encourages the use of FACET to understand and mitigate disparities in AI models, but the company's past performance suggests that there's a lot of work to be done.