So this is a pretty big deal to me (it looks recent, just put up last October). One of my big frustrations with Matrix was that they didn't offer helm charts for a Kubernetes deployment, which made it difficult for entities like nonprofits and community clubs to use it for their own purposes. Those entities need more hardware than an individual self-hoster and may want features like high availability, and Kubernetes makes horizontal scaling and high availability easy.
Now, according to the site, many of these features seem to be "enterprise only", but it's very strangely worded. I can't find anything that explicitly states these features aren't in the fully FOSS self-hosted version of matrix-stack; they just seem to be advertised only as features of the enterprise version.
My understanding of Kubernetes architecture is that it's actually difficult not to get high availability, since running multiple replicas across nodes is the default way of operating, which is why this wording makes me wonder.
Looking through the docs for the enterprise version, it doesn't look like anything really stops me from doing this with the community edition.
They do claim to have rewritten Synapse in Rust, though:
Being built in Rust allows server workers to use multiple CPU cores for superior performance. It is fully Kubernetes-compatible, enabling scaling and resource allocation. By implementing shared data caches, Synapse Pro also significantly reduces RAM footprint and server costs. Compared to the community version of Synapse, it’s at least 5x smaller for huge deployments.
And this part does not seem to be open source (unless it's a rebranded Conduit, but Conduit doesn't seem to support the newer Matrix Authentication Service).
So it looks like Matrix/Element has recently become simultaneously much more open source, but also more opaque.
Is there a foolproof installer for running a Matrix server at an offline LAN party?
Even without ESS or this helm chart, lots of people can still run Matrix in k8s. But it's good to see them release more open source code. Thanks for the news.
This helm chart is not just Matrix/Synapse, but also Element (the web UI) and the "Matrix Authentication Service", which adds SSO/OIDC support to a normal Synapse instance. That's pretty neat. I haven't seen any helm charts that cover the full Matrix stack before, only separate Synapse or Element charts. And Helm definitely makes deploying services to Kubernetes easier than other ways of deploying applications.
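For the curious, deploying it ends up being a values file plus a `helm install`. The keys below are illustrative guesses rather than the chart's actual schema, so check the element-hq/ess-helm repo for the real values and install command:

```yaml
# values.yaml -- hypothetical keys for illustration only;
# the real matrix-stack chart's values schema may differ.
serverName: matrix.example.org     # the Matrix domain the server claims

synapse:
  enabled: true                    # the homeserver itself

elementWeb:
  enabled: true                    # the Element web UI

matrixAuthenticationService:
  enabled: true                    # MAS, providing SSO/OIDC login
```

Applied with something like `helm install ess <chart-ref> -f values.yaml`, where the chart reference is whatever source the project publishes it to.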
The other reason I like an official helm chart is that I have seen unofficial ones get abandoned by the community member(s) maintaining them. An official one will (probably) be maintained indefinitely.
People actually moved away from Helm because lots of small helm charts got abandoned. And lots of people's Matrix setups are already integrated with OIDC. I prefer the desktop and mobile apps; I haven't looked into the web UI. The real problem for Matrix is the bridges: support is minimal, there's lots of trial and error, and the code is in beta, which makes me worried.
Right, but you could have just made one yourself. They're not hard to maintain; it's just a pile of yaml.
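To illustrate, a chart is mostly templated manifests like this (a made-up minimal example, not taken from any real Matrix chart):

```yaml
# templates/deployment.yaml -- made-up minimal example
apiVersion: apps/v1
kind: Deployment
metadata:
  name: {{ .Release.Name }}-synapse
spec:
  replicas: {{ .Values.replicas }}
  selector:
    matchLabels:
      app: synapse
  template:
    metadata:
      labels:
        app: synapse
    spec:
      containers:
        - name: synapse
          # matrixdotorg/synapse is the official Synapse image;
          # maintenance is mostly bumping .Values.synapse.tag
          image: "matrixdotorg/synapse:{{ .Values.synapse.tag }}"
          ports:
            - containerPort: 8008   # Synapse's default HTTP listener
```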
Right, but you could have just made one yourself
And then there would be a bus factor of one. It's not just about making a helm chart for myself; it's about having something that can be shared with the community and that doesn't depend on any single person to be maintained and updated.
It’s about having an organization that provides “packages” for Kubernetes, for people/orgs that don’t have the time, expertise, and energy to maintain them.
I greatly respect Ananace, who is in the comments of this post and mentioned their Helm charts. The work is excellent. But looking through the commits, it's just one person, doing work that primarily consists of bumping version numbers. Contrast that with the Matrix ESS helm chart, whose commit history shows many more contributors and includes feature additions to the chart itself.
Is Kubernetes so pervasive that a helm chart unlocks that many more entities?
Yes and no. There are many things that are much easier with Kubernetes, once you figure Kubernetes out.
High availability is the most notable example: yes, it's doable in Docker via something like Swarm, but it's more difficult. In comparison, the ideas of clustering and working with more than one server are central to the architecture of Kubernetes.
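For instance, scaling a stateless workload out is just setting `replicas: 3` on a Deployment, and a PodDisruptionBudget tells the cluster to keep a minimum number of pods alive through node drains and upgrades. A generic sketch, not specific to the ESS chart:

```yaml
# Generic example: never let voluntary disruptions (node drains,
# cluster upgrades) take the matching pod count below 2.
apiVersion: policy/v1
kind: PodDisruptionBudget
metadata:
  name: synapse-pdb
spec:
  minAvailable: 2
  selector:
    matchLabels:
      app: synapse
```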
Another thing is that long-term deployments on Kubernetes can be more maintainable, since everything is just yaml files and a version is just a number. If you store your config in code, it's easier to replicate it to another server, whether internally or when you share it for other people to use (Helm is essentially this idea, packaged up).
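Concretely, an upgrade or a migration to new hardware is just editing a file that lives in git and re-applying it (keys are hypothetical, for illustration):

```yaml
# values.yaml, checked into git -- the deployment's entire state.
replicas: 2
synapse:
  image:
    tag: "v1.98.0"   # illustrative tag; upgrading means bumping this and running `helm upgrade`
```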