Story of the Authoritative DAO

Burak Benligiray
13 min read · Jun 30, 2021

--

I’ll be publishing opinion pieces about API3 here and posting them on my Twitter to create a sort of RSS feed.

I’m already writing a series for the API3 Medium publication that describes the authoritative DAO in great detail. I want that to be the definitive blueprint to the API3 DAO. That’s why I didn’t mention the development process there, as that can’t be recounted objectively.

Each time I lose a great deal of sleep over something, I later end up finding out that I’ve learned a lot from it. That’s because the reason I can’t sleep is not stress, but my mind running to make sense of something that doesn’t fit its working model of the world. The development of the DAO was one such episode, and this is its story.

Initial plan and its justification

A lot of people find it difficult to differentiate API3 from a typical blockchain project, and frequently categorize it as a DeFi project. Stemming from the same misunderstanding, people ask when the product will be released, or get hung up on release dates. The fact of the matter is that API3 aims to build an ecosystem of first-party oracles, and to do this in an actually decentralized way. Building an ecosystem and not only a product comes with an immense scope, especially on the technical side. In terms of scale and potential, this is by no means comparable to a DeFi product or price data feeds for DeFi.

We knew what we were getting into and that we had no time to spare, so Andre started implementing Airnode as soon as it was fully conceptualized (the first commit was on June 8, 2020), and we got it to a feature-complete state as early as a few weeks after the token distribution had ended, around the New Year. Keep in mind that we weren’t allowed to keep our heads down and focus on it; this coincided with the writing of the whitepaper, fundraising, the preliminary DAO work that I’ll talk about below and all the other invisible chores around the launch of a project. Of course, this was followed by breakneck-speed development of the other technical components that will be required for the API3 ecosystem to be built on.

Although we’re good at execution, we have to be extremely selective about our development focus due to the sheer amount of work to be done even for a minimum viable ecosystem. Considering that we hadn’t developed a customized DAO before (except for Curve Labs, who were entirely focused on Prime DAO at the time), it made more sense to outsource the authoritative DAO development and focus on the core oracle solutions, which we would be more effective at. The answer, then, was obvious: find a founding partner with the required competency, time and motivation to build the DAO.

Careful steps

The easiest way to botch a whitepaper is to propose something that is impossible to implement due to a fundamental theoretical reason — and I’m not necessarily talking about math-y stuff; for example, the proposed solution may have to be supported by an ecosystem that cannot realistically be bootstrapped. A worse scenario is to successfully raise funds with such a whitepaper and then have to hand-wave away for years on end why you’re not executing it.

I take defining concrete, realistic deliverables and then executing them very seriously (and this is one of the main criteria that DAO proposals should be judged on, the other being the budget). For this reason, almost everything in the API3 whitepaper is either based on past experience with Honeycomb or was prototyped before or while writing the whitepaper. The staking pool is no exception: I had built a prototype as early as September 2020 (around the whitepaper release) because I needed to make sure that the reward and collateralization mechanics described were technically feasible (and all of them are implemented exactly as described in the final version of the authoritative DAO).

API3 DAOv1

The issue we had at this point was that although we had a prototype, we needed a production-ready DAO right away to receive the VC funds and do the public token distribution in a completely decentralized way. We decided to set up a DAOv1 where the voting power is hardcoded as the founding member token allocations (which are time-locked anyway), which is quite easy to do on any DAO framework. However, I was aiming for two additional, slightly more exotic requirements (again, these are both satisfied by the final version of the authoritative DAO):

  1. The DAO should support proposals for any arbitrary transaction (such as this, which time-locks tokens), and voters should be able to tell what this arbitrary transaction does before voting on the proposal (see the sketch after this list).
  2. We should be able to monitor and interact with the DAO directly, without going through a potential point of failure such as a subgraph node.
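
To illustrate the first requirement, here is a minimal sketch of how an arbitrary proposal transaction can be made legible to voters by decoding its calldata with ethers.js. The target ABI, recipient and amount below are illustrative assumptions, and this is not the verification tool that ended up in the authoritative DAO dashboard, only the general idea behind it.

```typescript
import { ethers } from "ethers";

// Hypothetical target interface; in practice the voter would supply the
// verified ABI of the contract that the proposal is calling.
const targetAbi = ["function transfer(address to, uint256 amount)"];
const iface = new ethers.utils.Interface(targetAbi);

// For this demo we encode a sample call ourselves; in practice the calldata
// would be extracted from the proposal's on-chain script.
const proposalCalldata = iface.encodeFunctionData("transfer", [
  "0x1111111111111111111111111111111111111111", // illustrative recipient
  ethers.utils.parseEther("100"), // illustrative amount
]);

// Decode the calldata back into a human-readable call so that the voter can
// compare it against the proposal's stated intent before voting.
const decoded = iface.parseTransaction({ data: proposalCalldata });
console.log(decoded.name, decoded.args.to, decoded.args.amount.toString());
// -> transfer 0x1111111111111111111111111111111111111111 100000000000000000000
```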

We were briefly in contact with DAOstack for a founding member-level partnership, mostly because they are well-networked with a large number of existing and potential partners. However, DAOstack was very focused on Alchemy 2.0 development at the time and wouldn’t be able to provide the kind of support we would need. The problem was that we found this out gradually, and for about a month, I worked on getting a DAO up on Alchemy that could make arbitrary transactions (which I kinda succeeded at), and then tried to find a way to interact with the DAO without depending on the DAOstack subgraph node (now that was difficult). As a separate note, although Compound Governance is one of the more obvious solutions today, it wasn’t nearly as tested then as it is now, and gave a strong “use at your own risk” vibe (and looking at it today, I don’t think it would meet our needs without significant modification).

At this point, Heikki strongly suggested that we look for an alternative, which was really helpful because my persistence — don’t change horses midstream — was getting unproductive. After a day of hacking on Aragon, I found that it did what was needed (though I had to implement a tool to verify arbitrary proposal specifications, which is now integrated into the authoritative DAO dashboard). What’s nice about Aragon is that it’s quite battle-tested, which helped API3 DAOv1 execute the public token distribution and secure one of the largest DAO treasuries to date with no issues.

Handing off the development

Now that the crisis was averted, we went back to the original plan of handing off the development and focusing on the core solutions. In the same week that we received the VC funds, and even before we executed the successful public token distribution, we handed off the DAO development to a contractor team. Note that the proposal was passed on November 19 and was the first proposal that DAOv1 passed, which we did deliberately to show our dedication to migrating to the authoritative DAO as soon as possible.

I was aware of the problems that could arise from outsourcing, which is why I wanted to structure the process as laid out in this article. There, it is envisioned that two core DAO employees act as the patron (read: product manager) and the overseer (read: program manager) and guide the external team so that what they deliver is actually what the DAO needs. The external team needs to have their own project manager, essentially to crack the whip as necessary.

Right off the bat, a grave mistake was made: dates were given in the proposal for the entire monolith. The whole point of breaking monoliths down into undertakings (read: sprints) is to be able to update estimations in an agile way. Instead, this ended up being an extremely premature DAO launch announcement. The date was initially February 15th, and after I questioned its feasibility, it was postponed to March 1st. I consider this my slip-up; I should have had the dates removed completely instead, but I was overwhelmed by other work and this flew past me.

Development hell

The ambitious deadline indicates the problem: the undertaking team assumed that since the staking pool contracts were given, integrating them into a generic Aragon DAO and implementing a frontend for it would be trivial. The team, the budget and the timeline were all planned around this gross underestimation. The specifications given at the end of the first undertaking were overconfident, and little to no contract work was done until January 15th, two months after the project had begun.

To give context, we scheduled the audits in January, and both Solidified and Quantstamp agreed to do the audits in March (though Quantstamp then postponed its audit to April on February 25). I was doing weekly calls with the undertaking team as the project patron during this time and coming to the realization that things were looking bleak on both the contract and the frontend side. I first started reviewing code and scolding people for the lack of progress, then I found myself writing code that we were paying to be written. Time was passing, and despite me dropping whole features from the initial specs, the team wasn’t even close to producing the main deliverable of the second undertaking, the audit-ready codebase.

First audit

The Solidified audit was starting on March 1st, and despite being assured by the team that the contracts would be ready by then, I requested that it be postponed to March 8th (because by then I knew that the Quantstamp audit had been postponed and this wasn’t going to lose us time). On March 5th, I got surprising news from the team: part of the specs given as the deliverable of the first undertaking, specifically the proposal spam protection that required a custom Aragon app, was “practically impossible” to implement. Note that this was four months after accepting the project, with three days left to the audit.

First, I did a call with Olivier from Curve Labs to confirm that the custom Aragon app was in fact quite doable (which turned out to be true). Then, I worked for three days straight to refactor, document and test everything (omitting the publishing flow of the Aragon app) and successfully shipped the contracts for the audit on March 8th. Needless to say, this meant a change in plans was needed.

Respite

It’s mid-March, we have an audit with Quantstamp coming up in April, and the frontend will probably need to be scrapped. Providing great relief, Curve Labs offers to undertake the project, everything included. This appeared to be the ideal solution, as this is essentially their field of expertise. Without much delay, we pass a proposal for one full-time frontend developer and two part-time backend (i.e., smart contract) developers. Here, it was assumed that the majority of the backend work was going to be Aragon integration (though the frontend integration ended up being at least as burdensome). The frontend work was obviously going to be formidable though, as it had to be started from scratch.

I let Konstantin and Arseny focus on the Aragon work and did the Solidified audit revisions myself. Meanwhile, Heikki, Tamara and I greatly streamline what exactly the DAO dashboard is and how it works, for smoother frontend development. By the start of April, the contracts are all ready for the audit, this time including the Aragon components. A few days after shipping the contracts, I hear back that the code has changed too much since the initial scoping and that Quantstamp will be able to do the audit in May (?). This is generally misunderstood as a cause for delay, when in fact we had a frontend to develop anyway. We now had extra time on the contract side of the work and wanted to use it as efficiently as possible. At this point, Curve Labs suggest an audit from Team Omega, a team of senior developers from DAOstack who do audits. Utilizing April this way ended up being extremely worthwhile, and the DAO is a much better product because of it.

Fielding the A-team

The new plan is to go through the Quantstamp audit in May (expecting everything, including the revision, to be done by the end of the month), have the frontend ready by then, and launch early in June. The frontend work picks up as late as the end of April, which is a bit of a problem because we expected it to be the bottleneck in the first place, yet this wasn’t even the main issue. Retrospectively speaking, expecting the DAO dashboard to be built to any satisfactory degree in the given time by one developer, no matter how good they are, was wishful thinking. After a week of observing the rate of progress and deliberating with teammates, I decide we have to take over the development and commit to doing it fully in-house. I’m now working full-time on audit revisions and on how the contract–frontend integration will work, Emanuel takes the lead on the dashboard development, Andre stops the all-important ChainAPI work to work on the frontend as well, and we also pull Michal from ChainAPI to do the styling. All the while, Tamara is on top of things, constantly iterating on the wireframes for good UX, and Leandro constantly feeds us updated graphical designs based on them.

It’s a magical thing to perform a difficult task with high performers, where you trust each other to pull off your respective parts. One star performance is not enough when you need to achieve something great; you need multiple of them, simultaneously, consistently and directed towards the same goal. Although this is already our bread and butter, this was the largest team I felt we achieved it with. No dead weight, just a skeleton crew that executes.

Closure

We receive the final reports from both Quantstamp and Team Omega on June 15 and start our closed test right after with a completed dashboard, followed by the public test. Why do I say that this was a great feat? Because contrary to popular belief, the development was anything but slow. Considering that we took over the development at the start of May, it was actually blazing fast, especially since Airnode and ChainAPI development was still underway during all this. The actual problem was that the project didn’t really start in November and suffered in the wrong hands, all the while being kept on life support by me.

Lesson #1: The ability to execute is not as common as we believe

We tend to project our own qualities onto others. That also applies at the project level, and it had been our undoing until the takeover. Specifically, we thought that if someone steps up for a task, they will do whatever it takes to see the end of it. The problem is that this is not only a matter of mentality but also of means, and sometimes it’s irrational to expect from others what we can deliver ourselves.

At a practical level, this means that we must favor the core team and the teams closely knit with it, such as ChainAPI, over all else while allocating resources. This is because we not only execute well, but are also able to absorb, build up and utilize new high-performing members (as an example, Emanuel, who was the MVP of the takeover, had only started working with us a few months earlier). In comparison, resources allocated to external development teams ended up being used inefficiently, and this will likely be a pattern.

Lesson #2: Don’t outsource what you’re not ready to do yourself

MakerDAO seems to have an unspoken rule: they do everything in-house. I don’t believe this is ideal for us, as what we want to have built is too large to be built by a single entity. However, outsourcing has to be seen merely as a tool that enables more efficient use of resources, and not as something that enables you to do what you were not able to do before. Counterintuitively, if a task is not within your core competencies, you should not attempt to outsource it.

Lesson #3: Be much less hesitant about taking over outsourced projects

This lesson is a product of #1 and #2. If you have outsourced a project and things start to look bad, it’s likely because the undertaking team is underqualified for the task. If you have outsourced it, you should already be able to do it yourself, so unless the project can be forsaken, don’t hesitate to swoop in and take over.

The mistake with the authoritative DAO development was that we had two (arguably three) rounds of outsourcing, followed by in-house execution. Instead, outsourcing a project should only be attempted once, because (1) the first outsourcing attempt failing indicates that the task is actually difficult, and it can now be deemed likely that the following attempts will also fail, and (2) in the event that the following outsourcing falls through, you’re now looking at a much longer delay that you will have to compensate for yourself.

Lesson #4: Don’t go into the trenches before you have to

This is a personal one, but my knee-jerk reaction to a project faltering is to go in and clean up the mess. As noble as that sounds, it’s rarely the most productive thing to do, and this project in particular was too large in scale for that to be a good solution (for example, I managed this on the contract side, but I’m generally helpless at frontend development). I should have resorted to changes in strategy to solve the problem first, and followed up with doing the low-level work myself if necessary (which is to be expected and not a problem).

Conclusion

Similar to how the audit delay didn’t actually delay the launch of the DAO, the delay in the launch of the DAO hasn’t delayed the progress of API3, because at no time do we work on only one specific thing, and not having the authoritative DAO wasn’t blocking any of our other work. This is not visible to its full extent even internally, and I’m sure that it isn’t to an outside observer. Therefore, I would say that the authoritative DAO development has been a success, especially considering how good the end product is. Even in its MVP form, it does everything that the API3 DAO will need to be able to do (supporting both the features from the whitepaper and the subDAO and insurance mechanics that have been conceived since), and the user flow is extremely convenient with the auto-compounding rewards and delegations. It’s (optionally) stake-and-delegate-and-forget, similar to how Airnode is set-and-forget.
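
As a rough illustration of how hands-off that flow is, here is a minimal sketch of staking and delegating from a script. The method names (depositAndStake, delegateVotingPower) and addresses are illustrative assumptions rather than the actual pool interface; the point is that after these two transactions, rewards compound into the stake on their own and there is nothing left for the user to do.

```typescript
import { ethers } from "ethers";

// Hypothetical interfaces; the method names are illustrative, not the
// actual API3 pool contract.
const poolAbi = [
  "function depositAndStake(uint256 amount)",
  "function delegateVotingPower(address delegate)",
];
const tokenAbi = ["function approve(address spender, uint256 amount)"];

async function stakeAndDelegateAndForget(
  signer: ethers.Signer,
  tokenAddress: string,
  poolAddress: string,
  delegate: string,
  amount: ethers.BigNumber
): Promise<void> {
  const token = new ethers.Contract(tokenAddress, tokenAbi, signer);
  const pool = new ethers.Contract(poolAddress, poolAbi, signer);

  // Approve the pool to pull the tokens, then deposit and stake them.
  await (await token.approve(poolAddress, amount)).wait();
  await (await pool.depositAndStake(amount)).wait();

  // Delegate voting power once; from here on, rewards auto-compound into
  // the stake and the delegation stays in place until explicitly changed.
  await (await pool.delegateVotingPower(delegate)).wait();
}
```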

The main reason I wrote this post is that just as there were lessons for us to learn here, there are lessons for the community. The DAO can’t be expected to govern without knowing the context. Looking from the outside, one could think, “It took 8 months for the API3 developers to build the DAO. They are a bit slow, maybe we should get more outside help,” while paradoxically, using outside help was the cause of the delay, and someone who has full knowledge of the events would come to the exact opposite conclusion. I aimed to declassify the now-successful development process with this retrospective, and I hope that it will be found illuminating.
