Meta, in an announcement meant to shed light on its content recommendation algorithms, said it is preparing for behavior analysis systems "orders of magnitude" bigger than large language models such as ChatGPT and GPT-4. Is that scale really necessary?
Occasionally, Meta reiterates its commitment to transparency by explaining how its algorithms operate. The explanations can be enlightening, perplexing, or a mix of both.
The social and advertising giant shared an overview of the AI models it uses. For example, distinguishing a video of roller hockey from one of roller derby, despite their visual similarities, helps the system recommend the right content.
Meta has been a leading player in multimodal AI research, which combines data from multiple modalities, such as visual and auditory, to better understand content. Although few of these models are released publicly, the company often discloses their internal use in improving "relevance," a synonym for targeting.
In its plan to increase computational resources, Meta revealed the aspiration to create recommendation models that can house tens of trillions of parameters, significantly larger than the largest existing language models. While these models remain theoretical at present, Meta expressed a clear intention to ensure these large models can be trained and deployed effectively at scale.
The company has neither confirmed nor denied that it is actively building models of this size, but the ambition is clear. These models are meant to analyze user behavior, which raises the question of whether even billions of users call for models this large.
The reality is that the problem space is immense: billions of content items and their metadata, plus the complex web of correlations between user behaviors and preferences, all feed into a model's potential size. Still, "orders of magnitude larger" than today's biggest models is a daunting figure.
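To see how a recommendation model could plausibly reach tens of trillions of parameters, here is a back-of-envelope sketch. Every ID cardinality and the embedding width below are invented for illustration, not Meta's actual figures; the point is only that sparse embedding tables, one row per user or content ID, dominate the parameter count in industrial recommenders:

```python
# Hypothetical back-of-envelope estimate: parameters in sparse embedding
# tables, which hold one learned vector per ID.
EMBED_DIM = 256            # assumed embedding width (illustrative)

ids_per_feature = {        # assumed ID cardinalities (illustrative)
    "user_id": 3_000_000_000,
    "content_id": 10_000_000_000,
    "ad_id": 1_000_000_000,
    "page_id": 500_000_000,
}

# Each feature contributes (number of IDs) x (embedding width) parameters.
total_params = sum(n * EMBED_DIM for n in ids_per_feature.values())
print(f"{total_params:,}")  # 3,712,000,000,000 — trillions from embeddings alone
```

Wider embeddings, more feature fields, or feature crosses multiply this further, which is how "tens of trillions" becomes arithmetically unremarkable even before the dense layers are counted.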
Meta's proposed AI model would ingest every action a user takes on its platforms and output a prediction of the user's future actions or preferences. While TikTok has pioneered such algorithmic tracking and recommendation, Meta's aspiration to build the biggest model can be perceived as intimidating or invasive.
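The core idea of predicting a user's next action from their past behavior can be illustrated with a toy frequency model. This is a deliberate simplification assuming a flat event log; real systems use large learned sequence models, and nothing here reflects Meta's or TikTok's actual implementation:

```python
from collections import Counter, defaultdict

# Toy behavior model (illustrative only): count which action tends to
# follow each action in a user's event log, then predict the most
# frequent successor.
def train(events):
    following = defaultdict(Counter)
    for prev, nxt in zip(events, events[1:]):
        following[prev][nxt] += 1
    return following

def predict_next(model, last_action):
    candidates = model.get(last_action)
    return candidates.most_common(1)[0][0] if candidates else None

# Hypothetical event log for one user.
log = ["view_video", "like", "view_video", "like", "view_video", "share"]
model = train(log)
print(predict_next(model, "view_video"))  # → "like" (follows in 2 of 3 cases)
```

Scaling this intuition from one user's last action to billions of users' full histories, with metadata about every item involved, is exactly where the enormous parameter counts come from.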
Meta's technical jargon is aimed at impressing advertisers and convincing them that the company's AI excels at "understanding" people's interests and preferences. Despite user dissatisfaction and increasingly invasive advertising, companies like Meta and Google continue to sell ads on the promise of ever more detailed and precise targeting.
This development reinforces the need to scrutinize how these companies target ads, and whether the high-tech approach to understanding user preferences is genuinely superior or just a smokescreen to justify heavy investment in advanced AI.