Llama 3.2

Llama 3.2 is Meta's September 2024 release, the first to add multimodal vision capabilities to the Llama family. It includes 11B and 90B vision-language variants alongside lightweight 1B and 3B text-only models designed for on-device deployment on phones and edge devices. The multimodal variants accept text and image inputs, supporting use cases such as image captioning, visual question answering, chart and diagram understanding, and document analysis. The 1B and 3B text models bring competitive small-model capability under the Llama community license, supporting on-device assistants, embedded copilot features, and edge inference scenarios. Real-world deployments include Qualcomm and MediaTek demonstrations of Llama 3.2 running on Snapdragon and Dimensity SoCs, as well as phone-based AI features built on the small variants. The models are available on Hugging Face and through partners. AI governance, AI compliance, and AI risk-management programs use the Llama 3.2 small variants for privacy-sensitive on-device AI, supporting responsible AI through local processing in enterprise deployments.
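As a concrete illustration of the visual question answering use case, the sketch below shows one way to call a Llama 3.2 vision variant through Hugging Face transformers. The model ID, the `chart.png` file, and the question text are assumptions for illustration; the weights are gated on Hugging Face, so the download-and-generate step is kept inside a `main()` function that is defined but not invoked here.

```python
def build_vqa_messages(question: str) -> list:
    """Build the chat-template message list for one image plus one text question."""
    return [
        {
            "role": "user",
            "content": [
                {"type": "image"},                   # placeholder slot; the image
                {"type": "text", "text": question},  # itself goes to the processor
            ],
        }
    ]


def main() -> None:
    # Requires: pip install transformers torch pillow, plus approved access
    # to the gated meta-llama repository. Downloads multi-GB weights, so
    # call main() explicitly once access is configured.
    from transformers import AutoProcessor, MllamaForConditionalGeneration
    from PIL import Image

    model_id = "meta-llama/Llama-3.2-11B-Vision-Instruct"  # assumed Hub ID
    model = MllamaForConditionalGeneration.from_pretrained(model_id, device_map="auto")
    processor = AutoProcessor.from_pretrained(model_id)

    image = Image.open("chart.png")  # hypothetical local image
    messages = build_vqa_messages("What trend does this chart show?")
    prompt = processor.apply_chat_template(messages, add_generation_prompt=True)
    inputs = processor(image, prompt, return_tensors="pt").to(model.device)
    output = model.generate(**inputs, max_new_tokens=128)
    print(processor.decode(output[0], skip_special_tokens=True))
```

The same message structure works for captioning or document analysis by changing the question text; for the 1B/3B text-only models, the image part is dropped and a plain text-generation pipeline is used instead.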

Centralpoint Routes to Llama 3.2 for Vision and Edge: Oxcyon's Centralpoint AI Governance Platform routes vision queries to the Llama 3.2 multimodal variants and lightweight workloads to the 1B/3B models, alongside OpenAI, Gemini, and other models. Centralpoint meters consumption, keeps prompts and skills on-prem, and embeds chatbots into your portals via a single line of JavaScript.


Related Keywords:
Llama 3.2