
Open Weights or Open Source AI?

Bigger is Better?

During his keynote at Meta Connect 2024 — the relevant parts of which I encourage you to watch in 10 minutes, or 5 minutes at 2x (I’ve transcribed it to subtitles so you can) — Mark Zuckerberg announced some really exciting developments in AI, both in products (glasses, VR headsets, voice and video capabilities, etc.) and in the release of Llama 3.2: Llama 3.2: Revolutionizing edge AI and vision with open, customizable models. Absent from this announcement were the usual larger models with more parameters learned from more training tokens, suggesting we may have hit a limit in the trade-off between size and performance/efficiency.

The introduction of multi-modal capabilities with small and medium-sized vision LLMs (11B and 90B) is going to spark the development of some really interesting applications, but it’s the lightweight text-only models (1B and 3B) that will bring AI to devices that have so far been out of its reach. And not necessarily devices with dedicated hardware either; phones are obvious candidates, but it won’t be long before we’re discussing dinner options with a Thermomix.
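To give a sense of how low the barrier to entry has become, here’s a minimal sketch of chatting with the 1B model via Hugging Face’s transformers library. The checkpoint is gated, so this assumes you’ve accepted Meta’s license on Hugging Face and authenticated; the prompt is just an example:

```python
from transformers import pipeline

# Assumes access to the gated meta-llama/Llama-3.2-1B-Instruct repository
# (you must accept Meta's license on Hugging Face and log in first).
chat = pipeline(
    "text-generation",
    model="meta-llama/Llama-3.2-1B-Instruct",
    device_map="auto",  # CPU, CUDA GPU, or Apple Silicon (MPS)
)

messages = [{"role": "user", "content": "What should we make for dinner?"}]
result = chat(messages, max_new_tokens=128)

# The pipeline returns the whole conversation; the last message is the reply.
print(result[0]["generated_text"][-1]["content"])
```

A few gigabytes of RAM and a recent laptop CPU is enough for the 1B model, which is exactly why appliance-grade hardware is now in play.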

Not afraid to ruffle feathers though, Mark doubled down on the contentious “Open Source AI” branding used to describe Llama models in the past. Just last month Meta was accused of “bullying” the open-source community, the Open Source Initiative (OSI) having already released a statement in July that Meta’s LLaMa 2 license is not Open Source, based on applying the decades-old Open Source Definition to its license.

This branding is no accident, and in the absence of an agreed “Open Source AI” definition, there’s little the Free and Open Source Software (FOSS) community can do about it. I have no doubt that it resonates well in the business communities Meta are targeting, and it may even have a moderate impact on their main consumer market, but it does make life difficult for actual Open Source projects like our own Personal Artificial Intelligence Operating System (pAI-OS) — for which the “Linux of AI” moniker they have also claimed for themselves would be far more fitting (indeed, we aim to become the Linux of Personal AI).

Apple’s Intelligence

You’d have to be living under a rock to have missed the launch of Apple Intelligence, even if only via its dependency on the new iPhone 16 (and, happily, my iPhone 15 Pro and M-series iPads and MacBook Pros). That’s because they’ve managed to pack a lot of punch into the ~3 billion parameter on-device models, which require the latest generation of hardware to run: Introducing Apple’s On-Device and Server Foundation Models.

If your task is too big for the device, it gets shipped off to their new Private Cloud Compute service (yes, I had to check we hadn’t travelled back to 2007 for the early cloud computing discussions!): Private Cloud Compute: A new frontier for AI privacy in the cloud. Too big even for that, and they’ve done a deal with OpenAI to access some of the biggest and best models available today.

While it didn’t ship with iOS 18.0, it’s coming with 18.1 and I’ve already had a few weeks testing it via Apple’s Developer Beta. We’re sworn to secrecy, but suffice it to say I’ve been impressed with the non-invasive nature of it doing things like summarising notification groups and suggesting responses. My favourite feature is a new focus setting that uses AI to reduce interruptions.

None of this is open though, nor does it need or claim to be: to use it you’ll need recent Apple hardware, which is where Llama differs in that it will run almost anywhere given the requisite resources. It comes with an enviable amount of vendor lock-in that other OEMs have relinquished to the likes of Google and Meta, but for many Apple users, including this one, it’s more love-in than lock-in (as a Google colleague used to say).

Open is Closed

So far, the more “Open” appears in an artificial intelligence product’s branding, the less open it actually tends to be. OpenAI is anything but open, with its models staying server-side like the secret recipe for Coca-Cola. And that’s fine, because nobody says that everybody has to be open; indeed, most software products and services aren’t! But if you want to be, you should say what you do in terms of openness, and do what you say. I understand they’re changing the logo, but it’s a shame it’s likely too late to revisit the name.

The only significant exception to the closed nature of OpenAI’s models is the directly accessible deployments on Microsoft Azure, which have been a boon for both companies. Were it not for the board’s firing of Sam Altman from his own company on 17 November 2023, which sent shockwaves through the industry, I’d have wondered whether this arrangement wasn’t seen internally as a strategic error from the early days that they would want to correct. Thanks to that event though, customers can feel safe in the knowledge that it’s now burnt into Microsoft’s DNA and isn’t going anywhere; Satya Nadella would have made sure of it in the aftermath. Sam was re-hired not long after and will likely soon receive well-earned equity in the transition from non-profit to for-profit (though I see several other executives have departed in the past weeks, three just today, including their CTO).

OpenAI’s OpenAPI spec for the OpenAI API — say that three times quickly! — is also becoming a de facto standard, for better or worse. It’s licensed under a permissive Open Source license, which is a solid start, but the governance process is closed.
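The upside of a de facto standard is that anyone can implement it. Here’s a sketch of what that looks like in practice: the official openai Python client pointed at a local OpenAI-compatible server instead of api.openai.com (the port and model name here assume Ollama serving a local Llama 3.2 model; substitute whatever your own server exposes):

```python
from openai import OpenAI

# Any OpenAI-compatible server works; here we assume Ollama's
# endpoint on localhost serving a local Llama 3.2 model.
client = OpenAI(
    base_url="http://localhost:11434/v1",
    api_key="unused",  # required by the client, ignored by local servers
)

response = client.chat.completions.create(
    model="llama3.2",  # whatever model your local server exposes
    messages=[{"role": "user", "content": "Are you open?"}],
)
print(response.choices[0].message.content)
```

The same few lines of configuration work against llama.cpp’s server, vLLM, and a growing list of other backends, which is precisely what makes the spec a de facto standard, governance or not.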

Open Weights

We’ve now seen three of the main types of AI in the wild today: those delivered as cloud services over the internet from huge server farms (OpenAI), those black boxes that run on your device but with which you can only interact through well-defined APIs (Apple), and those you can integrate with your own products and services (Meta’s Llama). Having set the scene, let’s take a closer look at their openness.

In the first instance (OpenAI), all bets are off because you can’t even access the model weights; they’re sitting on servers like famous recipes in safes. In the second (Apple), to the extent there isn’t obfuscation or even encryption preventing you from repurposing the models, you’d soon run into legal and copyright issues if you tried to do so, in a commercial context at least; better to stick with the APIs they make available to you. And in the third (Meta’s Llama), you can access the weights in the sense that you can download and run the models in whatever context you like — subject to “safety” limitations burnt into the models as well as the licenses they are made available to you under — but that’s about it.
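“Access the weights” means exactly that: you can pull down the checkpoint files themselves, not just call an API. A sketch using huggingface_hub, again assuming you’ve accepted the Community License for the gated repository:

```python
from huggingface_hub import snapshot_download

# Fetches the actual weight and config files to the local cache;
# the repository is gated behind acceptance of Meta's license.
local_dir = snapshot_download(
    repo_id="meta-llama/Llama-3.2-1B-Instruct",
    allow_patterns=["*.safetensors", "*.json"],  # weights and configs only
)
print(f"The weights are now sitting in {local_dir}")
```

What you can’t download is everything that produced those weights: the training data, the training code, and the recipes. That’s where the openness ends.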

If they would just brand their licensing as the self-describing “Open Weights” then there wouldn’t be another word said about it, but…

Mr Zuckerberg’s eagerness to shape what is meant by open-source AI is understandable. Llama sets itself apart from proprietary LLMs produced by the likes of OpenAI and Google on the openness of its architecture, rather as Apple, the iPhone-maker, uses privacy as a selling point. — The Economist

Four Freedoms

Of the four freedoms approved Open Source licenses set out to protect — see my article on The Open Source(ish) AI Definition (OSAID) for more — the only ones effectively extended to you by Meta with Llama are the limited freedom to Use their software, subject to their Acceptable Use Policy, and to Share it, again subject to restrictions. Want to build that disruptive new robo-advisor startup? Nope. Exercise your rights under the Second Amendment? That’s out too. Need to educate your employees on the security risks of spear phishing? Go directly to jail. Want to share it? Best make sure you’re following every law everywhere.

That’s assuming you’re willing to use a black box with unknown and unverifiable contents in the first place, any more than you’d be willing to consume food without knowing its ingredients. We do know they’re training on public social media posts, at least down under, with or without explicit consent (they’re all doing it, by the way), but we don’t know much more than that. That’s why Meta won’t release its multimodal Llama AI model in the EU any time soon. Training on the modern-day town square that is social media is arguably a good thing, but distributing that training data is fraught with copyright concerns despite it being “public”.

Llama is also subject to the terms of the Llama 3.2 Community License Agreement, which range from manageable-but-problematic, like having to display “Built with Llama”, to catastrophic for certain fields of endeavour: the license self-destructs once you exceed 700 million monthly active users, presumably to prevent their competitors from using it. It’s no wonder then that the Open Source Initiative (OSI) stood their ground in relation to their well-defined turf: the Open Source Definition.

The thing is, Meta are well within their rights to impose these conditions, and you’re going to find similarly onerous terms in the End User License Agreement (EULA) of your favourite software. And you’re still going to use it because it’s like advanced alien technology you can bring in-house that is unmatched in the market today.

Nobody is telling Meta they have to be Open Source, but if they want to then they should follow the well-established rules. But what are the rules, beyond the OSD which only applies to the license itself?

Open Source AI

The tech industry can’t agree on what open-source AI means. That’s a problem. Open Source aficionados, including myself, are yet to agree on what the rules should be to protect the four freedoms for Artificial Intelligence (the other two being the rights to Study and Modify the software) in the same way that the Open Source Definition (OSD) does for software. Attempts to find consensus have thus far failed, resulting in a contentious draft version of The Open Source(ish) AI Definition (OSAID) that we fear could be rammed through as a release candidate as soon as today at Nerdearla.

Despite breathless claims that we finally have a definition for open-source AI, we do not. Just today I discovered that votes held as part of the “co-design” process to determine which requirements make it into the definition — conspicuous in its absence being the training data itself, without which models cannot be studied and modified without limitation — don’t even reflect the views of members of the working groups, let alone the wider community.

Indeed, one such working group — examining Meta’s Llama no less — was even granted the superpower to nullify other working groups’ votes! An ability that was unsurprisingly used across the board in the data category by a Meta lawyer, one of two Meta employees invited to participate on the basis that they know their system better than those tasked with regulating it!

No surprise then that on eliminating these undocumented negative votes — a departure from democratic norms if there ever was one (assuming you even consider democracy a valid tool for defining technical standards) — the same methodology used to exclude training data sets now demands they be provided, thus giving us a path forward (which we can relax in future revisions if required):

We need to get the Open Source AI Definition (OSAID) done, but more than that, we need to get it right. To quote another open source old guard’s public appeal to the OSI board: “we can always make the definition more permissive if we discover that we have been too ambitious with a definition, but that it is functionally impossible for us to be more ambitious later.”

On the other hand, if we were to get it wrong…

Imprecision could lead to “open-washing”, says Mark Surman, head of the Mozilla Foundation. In contrast, a watertight definition would give developers confidence that they can use, copy and modify open-source models like Llama without being “at the whim” of Mr Zuckerberg’s goodwill. Which raises the tantalising question: will Zuck ever have the pluck to bare it all? — The Economist