How Meta’s Open Source AI Threatens the Future of Trust in AI
The recent announcement by Meta that their large language model, Llama 2, would be released as an open-source resource has stirred significant debate within the AI community. The move marks an unprecedented shift from the traditional closed and controlled approach adopted by leading AI players such as OpenAI, Google, Inflection AI and Anthropic.
Meta’s decision to offer Llama 2 for free to individuals and organisations worldwide has sparked both excitement and concern about the potential consequences for AI safety and ethics. In contrast, Anthropic, a research organisation that has raised over $1.5 billion to build a safe and trustworthy AI platform, has taken a different path. Anthropic has focused on the development of safe and beneficial AI systems through its Constitutional AI approach and its emphasis on “mechanistic interpretability”.
Meta’s Open Source Gamble
Meta’s decision to make Llama 2 available as open source has far-reaching implications. Firstly, it grants unrestricted access to advanced AI capabilities, enabling individuals and organisations to freely build upon and utilise the technology. This level of accessibility was unimaginable until recently, and it represents a significant step towards the democratisation of AI. (That’s a good thing…