I'm open sourcing a "ChatGPT For Teams" alternative

Isaac Mayolas
February 29, 2024

[Image: Joia GitHub page]

I'm a heavy user of Notion. I love its simplicity, flexibility, and how easy it is to collaborate with colleagues at work. Like many, I'm also fascinated by the power of Large Language Models, and particularly by how the ChatGPT interface has made it so easy for the world to interact with them.

That's why a few months ago I started working on my own version of ChatGPT: one that unifies the collaborative features and ease of use of Notion with the power of an AI Chat.

After lots of testing and refining the product with my network of friends and some brave early-adopting companies, I am happy to open-source the project.

What is Joia?

Joia is an alternative to ChatGPT, primarily designed for organizations that want to give all of their members access to AI chats.

It is built around a shared workspace, similar to Notion, where admins can effortlessly add members. Each member, in addition to gaining access to the shared area, also has a private space to run their own queries.

The main difference with ChatGPT is that Joia is crafted for a work environment.

On one side, it filters out much of the noise associated with consumer-oriented GPTs (AI Fitness Advisor, I am looking at you), focusing on collaboration.

On the other, being an open product, it can be self-hosted for maximum privacy, and it is compatible with any Large Language Model, whether open or closed source.

Multi Large Language Model support

[Image: Large Language Model selection in Joia]

Paradoxically, while the open-source LLM ecosystem flourishes, major players like OpenAI, Google, and Microsoft are betting on chat interfaces that lock users into proprietary products and closed-source models.

Well, I'd like to prevent that from happening. I think users should decide which model to use for the task at hand.

That's why I've made it easy in Joia to switch between models.

At the time of writing, I'm providing initial access to the most popular ones, including Llama 2 70B (and CodeLlama 2 70B), Mixtral 8X7B, Perplexity 70B, Gemini Pro, and more.

I will keep adding new models, but if there's one you'd like to see implemented in particular, shoot me a message on the Discord #feedback channel that I created for this purpose.

Currently, all models run on external cloud providers; the platform has connectors for OpenAI, Amazon Bedrock, Hugging Face, and OpenRouter, with plans to add more soon.

It is also possible to extend Joia with your own provider and models. It's not yet documented how, but it should be quite straightforward to do so by copying and pasting the examples of the existing providers. Beware that the implementation is still in its early stages and might change in upcoming releases.
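Since the provider interface isn't documented yet, here is a rough guess at what a custom provider might look like. Everything below is illustrative: the names `LlmProvider`, `executeChat`, and the message shape are my assumptions, not Joia's real API, so copy an existing provider in the repo for the actual shape.

```typescript
// Hypothetical sketch only — Joia's real provider interface is undocumented
// and may look quite different; see the existing providers for the real shape.
interface ChatMessage {
  role: "system" | "user" | "assistant";
  content: string;
}

interface LlmProvider {
  slug: string; // unique identifier, e.g. "openrouter" (assumed field)
  models: string[]; // model ids this provider can serve (assumed field)
  executeChat(model: string, messages: ChatMessage[]): Promise<string>;
}

// A toy provider that echoes the last message back — handy for local testing.
const echoProvider: LlmProvider = {
  slug: "echo",
  models: ["echo-1"],
  async executeChat(model, messages) {
    const last = messages[messages.length - 1];
    return `[${model}] ${last.content}`;
  },
};
```

A real provider would call the vendor's API inside `executeChat`; the idea is simply that one object per provider, wired in alongside the built-in ones, should be all that's needed.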

Collaboration through Chatbots

[Image: Chatbot setup]

OpenAI refers to them as "GPTs", but since Sam Altman admits they're not that great at naming, I'll settle on Chatbots.

Chatbots are one of the biggest productivity boosters available to businesses. They are, in essence, a predefined prompt that instructs the model how to respond to user input. And since a great prompt is key to getting a great response, it makes sense to have a way to create, iterate on, and share them.

I find them particularly useful in software development when writing tests. Pasting a piece of code into ChatGPT and asking it to write tests doesn't quite work: the tests won't match your codebase's style, won't use the same libraries, or won't test what you'd like them to. With a predefined prompt containing precise instructions and examples, you can ensure the tests are written the way you want.
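In code terms, a chatbot of this kind is little more than a stored system prompt prepended to each request. A minimal sketch (the shapes and names below are illustrative, not Joia's internals):

```typescript
// Illustrative only — these types are not Joia's actual data model.
interface Message {
  role: "system" | "user";
  content: string;
}

// A "chatbot" is essentially a named, reusable system prompt.
const testWriterBot = {
  name: "Test writer",
  instructions: [
    "Write unit tests for the code the user pastes.",
    "Match the project's existing test style and libraries.",
    "Cover the edge cases the user mentions explicitly.",
  ].join("\n"),
};

// Every user message gets the bot's instructions prepended before being
// sent to whichever model the user selected.
function buildMessages(bot: { instructions: string }, userInput: string): Message[] {
  return [
    { role: "system", content: bot.instructions },
    { role: "user", content: userInput },
  ];
}
```

Because the prompt lives in one shared place, iterating on it improves every teammate's results at once.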

I see in chatbots the area with the most potential for growth. I plan to add support for extra functionality like chatting with your data, linking with Langchain, and a more granular permissions system.

I also see a lot of potential in creating a more sophisticated input/output interface, similar to what Gradio or Streamlit provide. That would allow admins to create more flexible chatbots that better accommodate their internal business needs.

A cost-efficient alternative

Cost savings may not be the primary goal of the project, yet Joia helps companies achieve just that.

Today, implementing OpenAI's ChatGPT in a company costs $30 per seat per month. And while the cost might be justified for some roles, it quickly escalates when you want to roll it out to the entire staff.

In contrast, my estimates put the average spend per user at about $7/month in API credits. This is based on data from Joia's early-adopting companies, where some users initiated 0 chats per month while others initiated more than 2,000.

This represents a staggering 75% cost reduction, a figure that should grow further as users adopt open-source models and AI providers' per-token prices continue to fall.

Here's a handy table that exemplifies how much a company can save based on its size.

| Number of employees | ChatGPT annual cost* | Joia annual cost** | Annual savings |
| --- | --- | --- | --- |

\* "ChatGPT for Teams" cost of $30/user/month | ** Assumes an average consumption of $7/user/month in API credits.

Cloud version

If you're interested in trying Joia, or believe life is too short to self-host, I've made a cloud version available.

Simply sign up with your Google account and you'll be up and running in a matter of seconds.

One of the benefits of the cloud version, compared to the self-hosted solution, is the ability to purchase credits directly on the platform. This means you don't have to create and manage any API keys from the AI providers.

Join the community

If you enjoy the project and would like to see it expand, I'd appreciate it if you could star it on GitHub, as this helps me spread the word.

I'll be sharing updates and insights from @joiaHQ on Twitter. If you want to stay informed about the project, simply give it a follow.

Lastly, to contribute, share feedback, or tell me what you would like me to build next, join the #feedback channel on Discord.