
AI Evolution: Meta Platforms Announces In-House Chip Development Project

Meta Platforms reveals its exciting journey into the world of custom chip design. Developing a chip family in-house, Meta aims to revolutionize AI tasks in content recommendation models.

Hold onto your hats, tech enthusiasts! Meta Platforms is making some serious moves in the world of artificial intelligence (AI). In a reveal on Thursday, the social media giant shared that it is deep into developing a custom "family" of chips designed in-house.

Meta, the parent company of Facebook and Instagram, elaborated on its data center plans in a series of blog posts. As it turns out, back in 2020, the company made its first foray into chip design. The result was a first-generation chip developed under the Meta Training and Inference Accelerator (MTIA) program. The goal? To fine-tune the recommendation models Meta uses to distribute ads and select content for news feeds.

In these blog posts, Meta frames its first MTIA chip as an opportunity to learn, despite earlier reports suggesting that the company wasn't planning a wide-scale deployment of its maiden AI chip.

The inaugural MTIA chip was geared towards AI inference: the process in which AI algorithms, trained on massive data sets, decide what content to show next in a user's feed. Could it be a groovy dance video or an adorable cat meme? That's for the AI to decide!
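To make the idea of inference concrete, here is a minimal toy sketch of what a recommendation model's inference step looks like: a trained model scores each candidate item for a user, and the highest-scoring items are surfaced in the feed. Everything here, from the item names to the dot-product "model", is purely illustrative and is not how Meta's actual system works.

```python
def score(user_embedding, item_embedding):
    """Score one candidate item as the dot product of learned embeddings."""
    return sum(u * i for u, i in zip(user_embedding, item_embedding))

def rank_feed(user_embedding, candidates, top_k=2):
    """Return the top_k candidate item IDs, best score first."""
    scored = sorted(
        candidates.items(),
        key=lambda kv: score(user_embedding, kv[1]),
        reverse=True,
    )
    return [item_id for item_id, _ in scored[:top_k]]

# Hypothetical embeddings produced by an earlier training phase.
user = [0.9, 0.1, 0.4]
candidates = {
    "dance_video": [0.8, 0.2, 0.3],
    "cat_meme":    [0.1, 0.9, 0.5],
    "news_story":  [0.3, 0.3, 0.3],
}

print(rank_feed(user, candidates))  # → ['dance_video', 'news_story']
```

At production scale this scoring step runs billions of times a day, which is why the efficiency of the hardware doing inference matters so much.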

Joel Coburn, a software engineer at Meta, shone a light on why the company decided to design its own chip. Initially, Meta relied on graphics processing units (GPUs) for inference tasks. However, those GPUs fell short when it came to inference efficiency.

Coburn highlighted, "Despite significant software optimizations, their efficiency is low for real models, making them challenging and expensive to deploy in practice. This is why we need MTIA."

While Meta is tight-lipped about the deployment timelines for the new chip or potential plans for chips that can train models, it's clear that they're making strides to upgrade their AI infrastructure.

The company is no stranger to bold changes: it shifted gears last year, nixing plans for a large-scale rollout of an in-house inference chip and instead embarking on an even more ambitious project, a chip that could handle both training and inference.