
Open Letter to Georgia Tech on Open Source AI in Education & Research

I was disappointed to discover today that Georgia Tech's brand-new Open Source Program Office (OSPO) has unilaterally endorsed the Open Source Initiative's Open Source AI Definition (OSAID), suggesting that the school itself supports it (which fortunately does not appear to be the case). They're clearly in over their skis and haven't thought this through, given the chilling effects that opaque, closed-source models have on education and research.

The claim that “this effort allows the open source community to better debate and define approaches to create ethical and fair AI systems” is absurd, because you need access to the training data to assess and address bias. Furthermore, the suggestion that it “will promote transparency and collaboration in AI development”, while not insisting that the source (i.e., the data) be shared, is without merit.

That’s why I posted this letter encouraging College of Computing leadership to assess whether this endorsement is appropriate for the GT OSPO, let alone the college or the school, and additionally calling out the head-spinning circular references between Alfred P. Sloan Foundation-sponsored entities (GT OSPO, OSI, CMU ENGIN/OFAI, etc.):

I’m an MS CS (ML) student at the Georgia Institute of Technology (for/until now), and it pains me to see my school betray its students by supporting the Open Source Initiative’s (OSI) Open Source AI Definition (OSAID).

As a Debian developer, I’ve had to drop this trimester’s courses to fight this dangerous fork of the Open Source Definition (itself launched, along with the OSI, by Debian developers in 1998), a fork that does not protect the four essential freedoms of Free Software and thus undermines Open Source.

There’s a lot more to the OSAID than meets the eye. A broken process has produced a broken product that has already been rejected by the Open Source community and must be repealed. A former OSI board member agreed that “a process that is not open cannot be trusted to produce a product that can be considered open”.

I ask that Jeffrey Young, Fang (Cherry) Liu, PhD, Ron Rahaman, Alessandro Orso, and of course David Joyner conduct a review of the decision to support this travesty of justice, which deprives the next generation of the ability to build on the last, as has been the case for Open Source for the past quarter century, and deprives the exploited minority co-designers of the ability to assess and address ethics issues, including fairness and bias (Mahender Mandala).

I also note the involvement of the Alfred P. Sloan Foundation via its OSPO program (Josh Greenberg, Ruth Brenner) and have already taken the matter up with them over their support of this futile endeavour. It has been likened by trusted Open Source industry analyst RedMonk to the ill-fated Tacoma Narrows Bridge, which collapsed shortly after opening due to flawed engineering. This too will collapse; the only question is how much damage it does to existing Open Source projects and to new AI projects like Kwaai’s Personal Artificial Intelligence Operating System (pAI-OS) in the meantime.

Here’s the post I was responding to:

The GT Open Source Program Office’s Endorsement for OSI’s Effort to Define Open-Source AI

The Open Source Initiative (OSI) (https://opensource.org/) is a longstanding organization that aims to define and promote open source standards related to transparency, innovation, and equitable access around open source software and data. OSI has recently released a community-driven draft definition of open source AI (https://lnkd.in/eSH8VdnC) with a goal of establishing clear standards for what constitutes truly open AI systems.

Georgia Tech’s Open Source Program Office (OSPO@GT) has been working with campus stakeholders on establishing awareness of OSI’s efforts as well as on gathering feedback to help improve future versions of this definition. OSPO@GT supports the OSI’s initiative to create a comprehensive, community-defined definition of open source AI. This discussion benefits local researchers and educators as they develop, deploy, and utilize the latest AI models and frameworks in new environments, such as Georgia Tech’s new AI Makerspace. Additionally, this effort allows the open source community to better debate and define approaches to create ethical and fair AI systems, just as previous efforts to define open source licenses have helped to define how open source software can be best used and adopted to benefit broader communities outside the originating organization. We believe this effort will promote transparency and collaboration in AI development, and we look forward to further discussions and efforts to help define and support open source AI.