
Ellen Arnold
on 21 February 2016

Charm Partner Programme


If you’re an ISV focused on the cloud or big data, you’ll know how difficult it can be for your customers to realise the full value of your software. Juju, the award-winning application modelling tool from Canonical, automates and accelerates the deployment, scaling and integration of distributed applications in virtually any public or private cloud, and on bare metal too. By creating a Juju Charm for your software, you make it easy for administrators and DevOps teams to integrate it with hundreds of other solutions. The Charm Partner Programme (CPP) is the best way to accomplish this.

For more details, have a look at the Charm Partner Programme Datasheet.
