01 May 2024

Rama: a Storm is brewing

unaffiliated technical thoughts building on Rama by Red Planet Labs

AJ LaMarc @ajlamarc

If you’re interested in or currently building something with Rama, I’m interested in learning more. Best place to reach me is on Twitter.

I spent the last 5 months owning the production Rama cluster and backend infrastructure for a startup. This was a great privilege and learning experience - as the first team to deploy Rama in production outside of Red Planet Labs, we worked closely with Nathan and his team. I am in a position to provide unaffiliated thoughts on Rama and justify the hype, but also to point out current gaps and scenarios in which I wouldn’t use it.

This post starts by reviewing Rama’s claims directly; then I share more about our experience; finally, I cover when you should and shouldn’t use Rama and how that will change as it matures. If Rama is new to you, consider reading RPL’s official posts (linked in the footnotes).

1. Rama quotes, addressed

Rama is a new programming platform implementing a distributed-first paradigm that will radically improve your ability to build applications. By learning Rama you’ll not only add a powerful tool to your development toolkit, you’ll learn a new way of thinking about programming. In the same way that learning about declarative programming helps you become a better programmer if you’ve only been exposed to imperative models, learning Rama’s model will give you a new perspective on problem solving that you’ll be able to apply to any programming challenge. [1]

100% agree. Rama’s dataflow programming model and architecture emphasize the “right way” of building applications (event sourcing plus materialized views) [2]. Rama provides low-level control of partitioning, batch processing, etc. without being cumbersome, so it is perhaps the best place to learn distributed system architecture [3].
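To make the pattern concrete for readers who haven’t seen it: this is a minimal plain-Java sketch of event sourcing plus a materialized view, not Rama’s actual API. The names (`FollowEvent`, `followerCounts`) are mine, purely illustrative.

```java
import java.util.HashMap;
import java.util.List;
import java.util.Map;

// Minimal sketch of event sourcing + materialized views (not Rama's API):
// an append-only event log is the source of truth, and indexes are views
// folded from it.
public class EventSourcingSketch {
    record FollowEvent(String follower, String followee) {}

    // Fold the event log into a materialized view. In Rama, a PState plays
    // this role and is kept incrementally up to date by an ETL topology.
    static Map<String, Integer> followerCounts(List<FollowEvent> log) {
        Map<String, Integer> view = new HashMap<>();
        for (FollowEvent e : log) {
            view.merge(e.followee(), 1, Integer::sum);
        }
        return view;
    }

    public static void main(String[] args) {
        // The append-only log; in Rama, a depot plays this role.
        List<FollowEvent> log = List.of(
            new FollowEvent("alice", "bob"),
            new FollowEvent("carol", "bob"));
        // Queries read the view; they never replay the log.
        System.out.println(followerCounts(log).get("bob")); // prints 2
    }
}
```

The key idea is the separation: events are immutable facts, and any number of differently-shaped views can be derived (and re-derived) from them.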

Rama doesn’t just improve programming for applications that operate at huge scale. Its integrated approach vastly simplifies application development in general. It lets you focus on the logic of your application instead of being encumbered by one low-level hurdle after another. [1]

Mixed feelings. Let me define “improvement” as speed, or the rate at which stable features are released. There were places Rama shined; other places were slower for us to work through than alternatives would have been, given our lack of past experience with Rama. Some developer experience problems were a result of Rama’s “internal” immaturity, like unhelpful error messages, slow-running unit tests, or sparse usage documentation. Others were caused by Rama’s “external” immaturity, i.e. a lack of language and external library support, including integrated examples.

The rama-demo-gallery, rama-aws-deploy, and twitter-scale-mastodon were greatly helpful and enabled us to get Rama to production within acceptable time constraints. However, there’s much more work to be done to make Rama accessible to a wider audience [4].

And if your application becomes popular? Well, it already scales. [1]

Unlike internal enterprise products, pre-PMF startups cannot forecast scale. Still, we like to worry about scaling instead of the main problem: building something people want. Scaling is a bounded problem, so tackling it feels productive and satisfying when it’s really procrastination. Migrating to more scalable infrastructure is a fixed cost, and a sound investment when you are flush with cash - not before.

IMO this quote plays into that tendency, attracting you to the “optimal” solution even when you may not need it. Rama needs to become best-in-class for everything from day-1 projects up to massive scale to land as a unicorn (I expand on this later).

Rama unifies computation and storage into a coherent model capable of building end-to-end backends at any scale in 100x less code than otherwise. [1]

Mostly agree. Rama’s architecture uniquely combines computation and storage, meaning that it can provide much higher throughput, lower latency, and lower cost at scale than other systems. It does this while being much easier to maintain (say 10x if you have an expert Rama developer), so you can invest in increased development speed or reducing developer cost.

Nathan also notes that “lines of code” isn’t a perfect representation of complexity, but it is a decent proxy - a point I agree with.

RPL’s Twitter-scale Mastodon example was implemented in around 100x less code than the legacy alternatives. This is a best case which depends on the type of application you are building [5]. Also, if your application is small scale, as our startup’s was, the reduced variable cost doesn’t come into play, while the fixed costs are a clear downside.

Rama integrates and generalizes data ingestion, processing, indexing, and querying. Rama is a generic platform for building application backends ...

True. It’s clear we had only scratched the surface of Rama’s capabilities, and it becomes more and more advantageous as complexity expands. A good proxy for complexity: how much of the necessary capability falls outside the basic CRUD model? As a day-1 startup you may only have time for (or need) a CRUD backend, in which case Rama is less advantageous.

For example, I didn’t mention “fine-grained reactivity”, a new capability provided by Rama that’s never existed before. It allows for true incremental reactivity from the backend up through the frontend. Among other things it will enable UI frameworks to be fully incremental instead of doing expensive diffs to find out what changed. [6]

This is another major unlock for the programming industry which doesn’t exist yet. Someone needs to build it… [7]

The complexity and cost of deployment is an artifact of the development model which currently dominates software engineering, what I call the “a la carte model”. On the surface, the a la carte model is attractive: pick the most optimal tool for each part of your architecture and make them work together. ... The software industry has been stuck near a local maximum for a long time with the a la carte model. The current state of databases is a consequence of backend development not being approached in a holistic manner. Rama is a paradigm shift that breaks free of this local maximum. The benefits of breaking out of that local maximum are very consequential, with a dramatically lower cost of development and maintenance. The 100x cost reduction we demonstrated with our Mastodon example translates to any other large-scale application. Small to medium scale applications won’t have as extreme a cost reduction, but the reduction in complexity is significant for smaller scale applications as well. [8]

Agree. It is a new paradigm, very attractive for large-scale applications, and we can see that Nathan also notes it is less attractive at smaller scale. It excites me to think about Rama clearing the hurdles at small scale as well.

Being able to use Rama everywhere introduces compounding to the ecosystem: FAANG developers may enjoy it outside of work and introduce it to their companies, or, vice versa, use it in the enterprise and later leave to start a company atop Rama.

I also didn’t mention Rama’s integration API. Because of my description of Rama as being able to build an entire backend on its own, you may have the impression that Rama is an “all-or-nothing” tool. However, just because Rama can do so much doesn’t mean it has to be used to do everything. We’ve designed Rama to be able to seamlessly integrate with any other tool (e.g. databases, queues, monitoring systems, etc.). This allows Rama to be introduced gradually into any architecture. [6]

Rama does support external integrations (see rama-kafka), which is important for migrating legacy systems to it. But since Rama benefits greatly from tight integration, I don’t think it would be attractive as a minor part of an existing a la carte system, the way you might adopt, e.g., Kafka or Redis.

Rama’s dataflow API is a composable abstraction for distributed computation, enabling you to seamlessly combine regular logic with partitioners, yields, and other asynchronous tasks.

Rama’s dataflow API and path-based transforms and queries really are that good. A small learning curve, but very worthwhile.

2. Our experience

In the 3 months I was managing our running cluster, the Rama deployment was rock-solid and always responsive. Similarly, updates to the cluster were quick and painless; when we did run into issues, Nathan was always responsive and helpful with 1:1 support. Updates were tested on a staging cluster first to check for any complications.

Many of our data representations were CRUDish and did not benefit greatly from Rama. One place we got to stretch its legs was adding a chat timeline and messaging support, implemented simply using the KeyToLinkedEntitySetPStateGroup helper. This was a clear example of the simplicity unlocked by accessing arbitrary data structures in your application.
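KeyToLinkedEntitySetPStateGroup comes from RPL’s rama-helpers library, and I won’t reproduce its API here. But the data shape it maintains is roughly a map from a key to an insertion-ordered set of entities, which in plain Java (purely as an analogue, with hypothetical names) looks like:

```java
import java.util.LinkedHashMap;
import java.util.LinkedHashSet;
import java.util.Map;
import java.util.Set;

// Plain-Java analogue of the data shape behind a chat timeline:
// each chat id maps to an insertion-ordered set of message ids.
// (Illustrative only -- Rama partitions and subindexes this for you,
// so the structure stays queryable at any size.)
public class ChatTimelineSketch {
    static final Map<String, Set<String>> timeline = new LinkedHashMap<>();

    static void appendMessage(String chatId, String messageId) {
        timeline.computeIfAbsent(chatId, k -> new LinkedHashSet<>()).add(messageId);
    }

    public static void main(String[] args) {
        appendMessage("chat-1", "msg-a");
        appendMessage("chat-1", "msg-b");
        System.out.println(timeline.get("chat-1")); // prints [msg-a, msg-b]
    }
}
```

The simplicity unlock is that this nested, ordered structure is the index itself - there is no translation to tables and back.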

Currently Rama requires working with Java or Clojure at the API level; otherwise you are dealing with impedance mismatches and the greenfield development needed to wire up a new language. I believe most developers outside of Java enterprise, myself included, are less familiar with the JVM ecosystem.

We chose Clojure since it is much more lightweight and flexible than Java. Between the two I still like this choice: personally I also found Rama’s dataflow / ETL logic easier to follow in Clojure than the Java alternative reminiscent of the builder pattern.

Since Rama launched as Java-only, the Clojure documentation and examples were less mature, and we had to figure out little things along the way, which added up to significant time loss. Most of these were not caused by Rama itself but by the necessary surrounding ecosystem.

The best matches we found to work with Rama were Luminus, an API framework, and clj-thrift, which handles serializing and deserializing Apache Thrift datatypes in Clojure. These are both old and unmaintained, and needed fiddling to get working. I wasn’t fully satisfied with our setup but at the time we had to keep moving. I plan to open-source some of these pieces soon to make similar projects easier for other Rama developers going forward.

The cost of running the Rama cluster at “future-proof” scale, plus Rama licensing cost, would have been far out of my personal budget (at least for a side project), but the company owners were willing to pay for it.

3. Conclusion

Rama does live up to the hype, though as an early beta product there are some rough edges. From what I know, RPL is currently focused on finalizing key features such as backups and database migrations before turning to developer experience. In the meantime I would keep a close eye on it (or try to get a job at RPL).

The elegance of the holistic approach, and the almost “algorithmic” process of building applications from an original, natural-language description of a feature, make Rama a prime target for AI integration. I believe this is a natural fallback or “pivot” for RPL, as well as a much larger potential market, if developers end up too stuck in the past to accept a new paradigm at scale [10].

Is Rama worth learning?

Yes, it makes you a much better programmer and system architect (see the first quote on this post).

Rama is both novel and highly structured, which lets you practice a new pattern of design transferable to any project. There’s much more worth calling out, but this “new” pattern of design was my biggest takeaway. The pattern is reminiscent of inversion, a mental model, applied to the programming context.

Outside of Rama, a beginner programmer would start by accepting data via an API, then create tables to store that data, and finally write queries to join tables and output data in roughly the needed form. They would not discover until late in their development cycle whether the data representation ergonomically supports the necessary queries - they built the API and tables first.

A more experienced developer outside of Rama would design their tables first, then the query and API logic concurrently. They are forced to do this by the limited data representations allowed by other (non-Martian) systems. They will iterate on the data representation within technical and time constraints, eventually going to production with something “good enough.” With an incremental / relational approach, it is accepted that complex migrations and “one-time” code must be run frequently to evolve data models and to address bugs that corrupted parts of the data.

With Rama, by contrast, you can partition, nest, and subindex common data structures without worrying about a database-specific implementation. Here’s an example of defining one of these data representations (a PState, short for “partitioned state”):

  {String                    ; provider-id
   {:payout-id      String
    :payout-methods {String  ; id, same format as payout-id
                     {:platform   String
                      :type       String
                      :account-id String
                      :payout-key String
                      :discount   (fixed-keys-schema
                                   {:base-payout-key String
                                    :expiry          Long})}}}}

Having full control over the data structures means you can approach application design differently - in reverse. You first decide what queries must be supported, then design PStates that support those queries, and finally add events and processing logic (an ETL topology) to populate the PStates. “Events” would likely take the form of Apache Thrift records which go onto a queue (depot):

// An example Apache Thrift record added to the event stream for processing.
struct SetSelectedLanguage {
  1: optional string userId;
  2: optional string selectedLanguage;
}
It is elegant in a way I haven’t seen before.
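To make the inverted workflow concrete, here is a hedged plain-Java sketch (not Rama code; the class and method names are mine): decide the query first (“what language did user X select?”), shape the view to answer it, then write the event handler that populates it.

```java
import java.util.HashMap;
import java.util.Map;

// Query-first design in miniature (plain Java, not Rama's API):
// the view exists to answer one query; the event handler exists to feed the view.
public class QueryFirstSketch {
    // The event record, mirroring the Thrift struct above.
    record SetSelectedLanguage(String userId, String selectedLanguage) {}

    // Step 2: a view shaped for the query, {userId -> selectedLanguage}.
    static final Map<String, String> selectedLanguageByUser = new HashMap<>();

    // Step 3: processing logic that folds events into the view
    // (in Rama this would be an ETL topology consuming a depot).
    static void handle(SetSelectedLanguage e) {
        selectedLanguageByUser.put(e.userId(), e.selectedLanguage());
    }

    // Step 1 (decided first): the query the backend must support.
    static String selectedLanguage(String userId) {
        return selectedLanguageByUser.get(userId);
    }

    public static void main(String[] args) {
        handle(new SetSelectedLanguage("user-1", "fr"));
        System.out.println(selectedLanguage("user-1")); // prints fr
    }
}
```

Note the dependency direction: the view’s shape is dictated by the query, and the event handler is dictated by the view - the reverse of the tables-first workflow described above.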

When should I use Rama?

If you are in industry or already running at scale, I would strongly consider migrating to Rama as soon as the necessary stability features are added (backups), depending on application needs, projected cost savings, and the willingness of the development team. Which means now is the best time to start training with and learning it.

If you are an IC in industry, I would learn Rama now for job security, more opportunities, and higher pay [9].

If you’re a newly funded startup, I’d consider it depending on your team’s past experience. Rama will be more attractive for less seasoned developers since it handles multiple pieces for you. If you have to learn a paradigm regardless, you might as well pick the more powerful one, which is improving over time.

If your team is made up of veterans with an existing technology stack, or if you are unfunded / an “indie hacker,” I wouldn’t switch to Rama until it matures a bit more or the license structure changes [11].

How will Rama become a unicorn?

  1. Rama’s dataflow API being writable only in Java / Clojure isn’t terrible, but it will scare many people away [12]. It should support common languages.
  2. Easier, and more important, than (1): my API shouldn’t need to be on the JVM to perform CRUD operations against the Rama cluster - examples should be provided for many languages and API frameworks. MongoDB directly supports 14 languages.
  3. Besides reducing backend complexity at scale, there needs to be a more attractive reason to switch. MongoDB started the NoSQL craze and Vercel popularized Server-Side Rendering. They both provide one-click deployment and free tiers.
    1. Rama needs a strong “AI assistant” integration (harder) or to appeal to overall full-stack complexity and performance (easier). They have a silver bullet here with fine-grained reactivity, if they are willing to put a bow on it.
    2. Rama needs one-click deployment, integrated GitHub Actions for deploying updates, and more. Terraform and SSH are scary. (Vercel would never make you do that.)
    3. Rama needs more favorable licensing or free usage, even if it requires operating at a VC-funded deficit, or a critical mass of people won’t start using it publicly to drive widespread adoption.
    4. A Rama cluster needs to be able to run at lower scale than 1 Conductor, 2 Supervisors, and 1-3 Zookeepers on dedicated machines, including a path to transition from such a “virtualized” deployment to the current “dedicated” one.
  4. More documentation and examples across the increased scope of (1-3).

Notably these are all implementation details / “grug work” compared to the development cost and complexity of Rama itself, so I would expect RPL to be able to fully capitalize on their opportunity and moat.

[1] https://redplanetlabs.com/docs/~/index.html#gsc.tab=0

[2] To learn about the event-sourcing plus materialized views architecture in long-form I would recommend Nathan’s book Big Data. It’s not a coincidence that the person who most clearly explained these concepts has now created a vastly improved way of building systems with them.

[3] Sadly, architecture and deployment were completely absent from my undergraduate CS curriculum. Rama would have been the best way to teach them.

[4] Vercel or MongoDB are comparable examples of unicorns. They provide a much wider surface area of examples, manage infrastructure for you, and let you jump in immediately with a free tier.

[5] An application that primarily stores and retrieves compressed byte data, like the one I currently work on, would not benefit drastically from Rama, since the domain model is relatively simple and maintained for backwards compatibility.

[6] https://blog.redplanetlabs.com/2023/08/15/how-we-reduced-the-cost-of-building-twitter-at-twitter-scale-by-100x/

[7] RPL may already have fine-grained reactivity enabled in their Cluster UI (which is written in ClojureScript). Hopefully you agree that ClojureScript is not a solution for adoption. We used React Query with our Clojure API to re-call endpoints automatically when data became stale, keeping the frontend code simple. This was already starting to become slow and unscalable, and would have needed special attention for each endpoint.

Rather than each developer building a custom web-socket system it would be very attractive if this was bundled into a Rama frontend (JS / TS) state library. Something like Electric Clojure but more friendly.

[8] https://blog.redplanetlabs.com/2024/01/09/everything-wrong-with-databases-and-why-their-complexity-is-now-unnecessary/

[9] As Clojure developers are paid more than developers of other languages, Rama + Clojure developers should become paid more than Clojure developers alone. Since Rama may represent recurring cost savings of millions of dollars for some companies, you can imagine private Rama consultants billing at $200-300+ per hour. I think Red Planet Labs themselves would plan to capture this market, however.

[10] In some sense I’d already be very concerned that today’s developers won’t accept this new paradigm until they are replaced. Startups would accept it, hence I think targeting them with lower fixed costs is extremely important. Fixed costs meaning the financial cost of a license and of running a cluster, but also the time cost of getting up and running without lots of examples.

[11] I’m referring to “license cost” vaguely in this post since it is not set fully, and Red Planet Labs hasn’t announced anything about it publicly.

[12] SQL obviously is its own language. Regardless of the tradeoffs, people weren’t satisfied with that alone and demanded ORMs in their language of choice. It will be the same here.