A programming language is much more than its compiler and standard library. It’s a community. Tools. Documentation. An ecosystem. All of these elements affect how a language feels, how productive it is, and how applicable it is.
Rust is a very young language – barely a year past 1.0 – and building out and maturing the full complement of ecosystem and tooling is crucial to its success. That building is happening, but sometimes at an explosive rate that makes it hard to track what’s going on, to find the right library for a task, or to choose between several options on crates.io. It can be hard to coordinate versions of libraries that all work well together. And we lack tools to push toward maturity in a community-wide way, or to incentivize work toward a common quality standard.
On the other hand, the core parts of Rust get a tremendous amount of focus. But we have tended to be pretty conservative in what is considered “core”: today, essentially it’s `std` and a couple of other crates. The standard library also takes a deliberately minimalistic approach, to avoid the well-known pitfalls of large standard libraries that are versioned with the compiler and quickly stagnate, while the real action happens in the broader ecosystem (“`std` is where code goes to die”).
In short, there are batteries out there, but we’re failing to include them (or even tell you where to shop for them).
Can we provide a “batteries included” experience for Rust that doesn’t lead to stagnation, one that instead works directly with and through the ecosystem?
I think we can, and I want to lay out a plan that’s emerged after discussion with many on the core and subteams.
Please leave feedback in the discuss post.
What is “The Rust Platform”?
The core ideas here draw significant inspiration from the Haskell Platform , which is working toward similar goals for Haskell.
The basic idea of the Rust Platform is simple:
Distribute a wide range of artifacts in a single “Rust Platform Package”, including:

- The compiler, Cargo, and rust-lang crates (e.g. `std`)
- Best-of-breed libraries drawn from the wider ecosystem (going beyond rust-lang crates)
- Best-of-breed tools drawn from the wider ecosystem (e.g. clippy, NDKs, editor plugins, lints)
- Cross-compilation targets
- “Language bridges”: integration with other languages like JS, Ruby, Python, etc.
Periodically curate the ecosystem, determining consensus choices for what artifacts, and at what versions, to distribute.
In general, rustup is intended to be the primary mechanism for distribution; it’s expected that it will soon replace the guts of our official installers, becoming the primary way to acquire the Rust Platform and all that comes with it.
As you’d expect, the real meat here is in the details. For example, it’s probably unclear what it even means to “distribute” a library, given Cargo’s approach to dependency management. Read on!
The most novel part of the proposal is the idea of curating and distributing crates.
The goal is to provide an experience that feels much like `std`, but provides much greater agility, avoiding the typical pitfalls of large standard libraries.
The key to making sense of library “packaging” for Rust is the idea of a metapackage for Cargo, which aggregates together a number of library dependencies as a single name and version. Concretely, this might look like:
```toml
[dependencies]
rust-platform = "2.7"
```
which is effectively then shorthand for something like:
```toml
[dependencies]
mio = "1.2"
regex = "2.0"
log = "1.1"
serde = "3.0"
```
Metapackages give technical meaning to curation: we can provide a collection of crates with assurance that they’ll all play well together (at the versions stated within the Rust Platform metapackage).
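To make the mechanics concrete, here is one plausible sketch of what the metapackage itself could look like as an ordinary crate on crates.io. The layout and version numbers are illustrative, not an actual published package:

```toml
# Hypothetical manifest for the rust-platform metapackage itself.
# It contains no code of its own; its only job is to pin a curated,
# integration-tested set of crates behind a single name and version.
[package]
name = "rust-platform"
version = "2.7.0"

[dependencies]
# Depending on rust-platform = "2.7" pulls in exactly these,
# so the whole set is known to play well together.
mio = "1.2"
regex = "2.0"
log = "1.1"
serde = "3.0"
```

Because Cargo already resolves transitive dependencies, a metapackage needs no new machinery to express the "one name, many crates" idea; the curation work lies in choosing which versions go into each platform release.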
With the platform metapackage, we can talk coherently about things like the “Rust Platform 2.0 Series” as a chapter in Rust’s evolution. After all, core libraries play a major role in shaping the idioms of a language. Evolution in these core libraries has an effect rivaling changes to the language itself.
With those basics out of the way, let’s look at the ways that the library part of the platform is, and is not, like a bigger `std`.
Stability without stagnation
The fact that `std` is coupled with the compiler means that upgrading the compiler entails upgrading the standard library, like it or not. So the two need to provide the same backwards-compatibility guarantees, making it infeasible to do a new, major version of `std` with breaking changes (unless we produced a new major version of Rust itself).

`std` is forcibly tied to the Rust release schedule, meaning that new versions arrive every six weeks, period.

Given these constraints, we’ve chosen to take a minimalist route with `std`, to avoid accumulating a mass of deprecated APIs over time.
With the platform metapackage, things are quite different. On the one hand, we can provide an experience that feels a lot like `std` (see below for more on that). But it doesn’t suffer from the deficits of `std`. Why? It all comes down to versioning:
Stability: Doing a `rustup` to the latest platform will never break your existing code, for one simple reason: existing `Cargo.toml` files will be pinned to a prior version of the platform metapackage, which is fundamentally just a collection of normal dependencies. So you can upgrade the compiler and toolchain, but be using an old version of the platform metapackage in perpetuity. In short, the metapackage version is orthogonal to the toolchain version.
Without stagnation: Because of the versioning orthogonality, we can be more free to make breaking changes to the platform libraries. That could come in the form of upgrading to a new major version of one of the platform crates, or even dropping a crate altogether. These changes are never forced on users.
But we can do even better. In practice, while code will continue working with an old metapackage version, people are going to want to upgrade. We can smooth that process by allowing metapackage dependencies to be overridden if they appear explicitly in the `Cargo.toml` file. So, for example, if you say:
```toml
[dependencies]
rust-platform = "2.7"
regex = "3.0"
```
you’re getting the versions stipulated by platform 2.7 in general, but specifying a different version of `regex`.
There are lots of uses for this kind of override. It can allow you to track progress of a given platform library more aggressively (not just every six weeks), or to try out a new, experimental major version. Or you can use it to downgrade a dependency while you otherwise transition to a new version of the platform.
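The downgrade case might look like the following sketch (crate names and versions are illustrative): you move up to a new platform series but temporarily hold one library back.

```toml
[dependencies]
# Upgrade to the new platform series...
rust-platform = "3.0"
# ...but keep the previous major version of serde while migrating,
# even if platform 3.0 stipulates a newer one.
serde = "2.0"
```

The explicit entry wins over the metapackage's stipulated version, so migration can happen one crate at a time.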
There are several steps we can take, above and beyond the idea of a metapackage, to make the experience of using the Rust Platform libraries approximate using `std`.

`cargo new`. A simple step: have `cargo new` automatically insert a dependency on the version of the platform associated with the current toolchain.
Global coherence. When we assemble a version of the platform, we can do integration testing against the whole thing, making sure that the libraries not only compile together, but work together. Moreover, libraries in the platform can assume the inclusion of other libraries in the platform, meaning that example code and documentation can cross-reference between libraries, with the precise APIs that will be shipped.
Precompilation. If we implement metapackages naively, then the first time you compile something that depends on the platform, you’re going to be compiling some large number of crates that you’re not yet using. There are a few ways we could solve this, but certainly one option would be to provide binary distribution of the libraries through rustup, much like we already do for `std`. Likely this would work via a general mechanism in Cargo and crates.io.
`extern crate`. Getting a bit more aggressive, we might drop the need for `extern crate` when using platform crates, giving a truly `std`-like experience.
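To illustrate the `cargo new` idea above, here is what a freshly generated manifest might contain under this proposal. The project name, behavior, and platform version are all hypothetical:

```toml
# Sketch of what `cargo new my-app` could emit if the installed
# toolchain shipped as part of Rust Platform 2.7 (proposed behavior,
# not something cargo does today).
[package]
name = "my-app"
version = "0.1.0"

[dependencies]
# Inserted automatically, matching the current toolchain's platform.
rust-platform = "2.7"
```

A new project would then have the whole curated library set in scope from the first build, while remaining free to delete or override the line like any other dependency.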
Versioning and release cadence
I’ve already alluded to “major versions” of the platform in a few senses. Here’s what I’m thinking in more detail:
The platform itself is separately versioned from the language and compiler: the Rust Platform 5.0 might ship with Rust 1.89. In other words, a new major version of the platform does not imply breaking changes to the language or standard library. As discussed above, the metapackage approach makes it possible to release new major versions without forcibly breaking any existing code; people can upgrade their platform dependency orthogonally from the compiler, at their own pace, in a fine-grained way.
With that out of the way, here’s a plausible versioning scheme and cadence:
A new minor version of the platform is released every six weeks, essentially subsuming the existing release process. New minor releases should only include minor version upgrades of libraries and tools (or expansions to include new libs/tools).
A new major version of the platform is released roughly every 18-24 months. This is the opportunity to move to new major versions of platform libraries or to drop existing libraries. It also gives us a way to recognize major shifts in the way you write Rust code, for example by moving to a new set of libraries that depend on a major new language feature (say, specialization or HKT).
More broadly, I see major version releases as a way to lay out a narrative arc for Rust, recognizing major new chapters in its development.
That’s helpful internally, because it provides medium-term focus toward shipping The Next Iteration of Rust, which we as a community can rally around.
It’s also helpful externally, because people less immediately involved in Rust’s development will have a much easier way to understand the accumulation of major changes that make up each major release.
These ideas are closely tied to the recent Roadmap proposal, providing a clear “north star” toward which quarterly plans can head.
So far I’ve focused on artifacts that officially ship as part of the platform. Curating at that level is going to be a lot of work, and we’ll want to be quite selective about what’s included. (For reference, the Haskell Platform packages about 35 libraries.)
But there are some additional opportunities for curation. What I’d love to see is a kind of two-level scheme. Imagine that, somewhere on the Rust home page, we have a listing of major areas of libraries and tools. Think: “Parsing”, “Networking”, “Serialization”, “Debugging”. Under each of these categories, we have a very small number of immediate links to libraries that are part of the official platform. But we also have a “see more” link that provides a more comprehensive list.
That leads to two tiers of curation:
Tier one: shown on front page; shipped with the platform; highly curated and reviewed; driven by consensus process; integration tested and cross-referenced with the rest of the platform.
Tier two: shown in “see more”; lightly curated, according to a clearly stated set of objective criteria. Things like: platform compatibility; CI; documentation; API conventions; versioned at 1.0 or above.
By providing two tiers, we release some of the pressure around being in the platform proper, and we provide valuable base-level quality curation and standardization across the ecosystem. The second tier gives us a way to motivate the ecosystem toward common quality and consistency goals: anyone is welcome to get their crate on a “see more” page, but they have to meet a minimum bar first.
One small note: our previous attempt at a kind of “extended `std`” was the rust-lang crates concept. These crates are “owned” by the Rust community, and governed by the RFC process, much like `std`. They’re also held to similar quality standards.
Ultimately, it’s proved pretty heavyweight to require full RFCs and transfer of control over the crates, and so the set of rust-lang crates has grown slowly. The platform model is more of a “federated” approach, providing decentralized ownership and evolution, while periodically trying to pull together a coherent global story.
However, I expect the rust-lang crates to stick around, and for the set to slowly grow over time; there is definitely scope for some very important crates to be completely “owned by the community”. These crates will likely become part of the platform as well, though there may be some crates that the community maintains that aren’t high-profile enough to appear there.
The biggest open question about what I described above is: how does curation work? Obviously, it can’t run entirely through the libs team; that doesn’t scale, and the team doesn’t have the needed domain expertise anyway.
What I envision is something that fits into the Roadmap planning proposal. In a given quarter, we set out as an initiative to curate crates in a few areas – let’s say, networking and parsing. During that quarter, the libs team works closely with the portion of the community actively working in that space, acting as API consultants and reviewers, and helping shepherd consensus toward a reasonable selection. Working in an incremental way – a sort of quarterly round-robin between areas – seems like a good balance between focus and coverage. But there are a lot of details to sort out.
It’s also not entirely clear what will need to go into each minor release. Hopefully it can be kept relatively minimal (e.g., with library/tool maintainers largely driving the version choice for a given minor release).
More broadly, this post has focused on just one part of the platform: libraries. There are many other areas to explore, including the mechanics around shipping tools, what kind of cross-language integration we want to ship, how we do testing and integration, and so on. I’m hoping that this post gets the discussion rolling, and that we can develop these plans incrementally over time as we begin building up the different parts of the platform.
Although the mechanics are not all that earth-shattering, I think that introducing the Rust Platform could have a massive impact on how the Rust community works, and on what life as a Rust user feels like. It tells a clear story about Rust’s evolution, and lets us rally around that story as we hammer out the work needed to bring it to life. I’m eager to hear what you think!