API3 Core Technical Team Report, November 2022–February 2023
By providing their services in the form of APIs, businesses extend their customer base beyond humans to include computer programs. API3 enhances these APIs so that they can cater to smart contracts as well. This effort has both technical and business development aspects, and considering the size of the API market, scaling it successfully is critical for API3’s goals. We approach this problem from two directions:
- Protocolize the integration and the business model, and build tools that utilize these protocols to enable all parties to self-serve to minimize bottlenecks
- Use efficient methods to scale up the operations that cannot be protocolized
An example of (1) is Airnode, which API providers can use to provide first-party oracle services on their own (instead of having to partner with a third-party oracle solution, which is a much more complex arrangement). The API–Airnode integration is defined by OIS, which API providers can create using ChainAPI with very little blockchain know-how. The next ChainAPI proposal to be delivered addresses the protocolization of the business model. On the end-user integration side, byog has showcased a model that utilizes Airnode to set up data feeds at scale, without being predicated on existing demand. This approach leapfrogs the traditional flow of “dApp asks for a data feed, oracle project researches feasibility, agreement gets signed, data feed is set up” and reduces the integration process to minutes. The dAPI team will use this model as a foundation to build a similarly frictionless user flow at API3 Market for data feeds that aggregate from multiple sources.
Development of the protocols and tooling mentioned in (1) is itself an operation that API3 has to undertake internally, and thus falls under (2). As you may have already noticed from the above, we adopt a team-of-teams approach to scale these operations up efficiently: the teams submit separate proposals, manage separate budgets, and are thus largely autonomous. Since November 2022, the three teams below have spun off from the core technical team and made their first proposals:
- The dAPI team: Responsible for data feed technical operations (except integrations) and the development of the API3 Market
- byog: Develops tooling for self-funded feeds and operates their own self-funded feeds using their own API, manages the data feed integrations for the dAPI team
- Vacuumlabs: Supports all technical teams (excluding byog) in various ways
In addition to these, the ChainAPI team still develops their platform independently, and the core technical team remains responsible for core technology such as Airnode and its protocol. Given all this, it shouldn’t be surprising that these five technical teams cooperate extensively, and the lines between the teams often come down to technicalities in practice.
OEV
We announced our work on oracle extractable value (OEV) in November, accompanied by a litepaper that the core technical team contributed to. In doing so, we answered some of the open questions from the original whitepaper, such as “how do you price a data feed service in a programmatic way” (the winning bid in the OEV auction is an exact measure of the value created by an update) and “how do you convince projects to pay for a service that they’re used to receiving for free” (the projects will practically be paid to use our data feeds, which beats free).
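The pricing argument above can be made concrete with a small sketch. The following is not API3’s actual OEV implementation; it is an illustrative first-price auction where searchers bid for the exclusive right to trigger a data feed update, and the winning bid, which accrues to the dApp, is the programmatic price of that update. All names are hypothetical.

```typescript
// Illustrative sketch of OEV auction pricing: the winning bid is an exact,
// programmatic measure of the value an oracle update creates.
// Names and structure are assumptions, not API3's implementation.

interface Bid {
  searcher: string; // hypothetical searcher identifier
  amount: number;   // amount offered for the update rights
}

// First-price auction: the highest bidder wins the right to trigger the
// update and pays their bid, which accrues to the dApp using the feed.
function settleAuction(bids: Bid[]): { winner: Bid; proceedsToDapp: number } | null {
  if (bids.length === 0) return null;
  const winner = bids.reduce((best, b) => (b.amount > best.amount ? b : best));
  return { winner, proceedsToDapp: winner.amount };
}

const result = settleAuction([
  { searcher: "0xAlice", amount: 3 },
  { searcher: "0xBob", amount: 7 },
]);
console.log(result?.winner.searcher, result?.proceedsToDapp); // 0xBob 7
```

Because the proceeds flow to the dApp, the dApp is effectively paid to use the feed, which is the sense in which this beats a free service.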
An important objective of the litepaper was to dispel any doubts, both among API3 stakeholders and potential users, about whether this can be delivered in a timely manner. Coinciding with the litepaper release, the dAPI team placed the OEV launch in Phase 3 of their roadmap, after aggregated feeds and before security coverage support. Accordingly, the core technical team immediately started working on the OEV implementation; once the development is complete, it will be handed off to the dAPI team to be operated. The Vacuumlabs team will make sure that this hand-off happens smoothly.
Airnode
We released Airnode v0.10 this cycle, which was a big one. The most fundamental change had to do with how deployments are stored by the cloud provider, with the deployer CLI revamped to make use of this. As a result, the user can now manage their deployments as easily as Docker containers running locally.
We implemented authorizers so that protocolized monetization schemes can be built on top of them. An authorizer has to be on the same chain as the requester contract, which means a lot of integration overhead in a future with many side-chains and roll-ups. v0.10 adds cross-chain authorizers, which essentially allow an Airnode to serve many chains while requiring the payments for these services to be made on a single chain, such as the Ethereum mainnet. This new feature will be used by the upcoming ChainAPI implementation.
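The cross-chain idea can be sketched as follows. This is a minimal, hypothetical model, not the actual Airnode interfaces: subscription payments are recorded on a single payment chain, and the Airnode consults that record before fulfilling requests made on any chain it serves, avoiding per-chain authorizer deployments.

```typescript
// Sketch of the cross-chain authorizer concept: one payment chain
// authorizes service on every chain the Airnode serves.
// All names here are illustrative assumptions.

interface SubscriptionRecord {
  requester: string; // requester contract address
  paidUntil: number; // unix timestamp up to which the subscription is paid
}

// Stand-in for authorizer state read from the single payment chain
// (e.g. the Ethereum mainnet).
const paymentChainState: SubscriptionRecord[] = [
  { requester: "0xRequesterA", paidUntil: 2_000_000_000 },
];

// A single lookup against the payment chain decides authorization for
// requests arriving from any served chain.
function isAuthorized(requester: string, now: number): boolean {
  return paymentChainState.some((s) => s.requester === requester && s.paidUntil > now);
}

console.log(isAuthorized("0xRequesterA", 1_700_000_000)); // true
console.log(isAuthorized("0xRequesterB", 1_700_000_000)); // false
```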
We have been providing QRNG as a free, public utility on many chains. QRNG is not delivered through a purpose-built service; instead, it uses our generic request–response protocol (RRP). Providing it therefore allowed us to battle-test RRP across many chains and all kinds of use cases, which validated some of our design decisions and uncovered some user needs that we hadn’t prioritized. We have addressed some of these in v0.10 and will continue addressing them in upcoming releases. This sets the stage nicely for the RRP-based services that we’re planning.
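The request–response pattern that QRNG rides on can be sketched in a few lines. This is an in-memory toy, with hypothetical names, standing in for the on-chain protocol: a requester logs a request, and the Airnode later picks up pending requests, calls the underlying API, and fulfills them. The point is that the machinery is endpoint-agnostic, which is why the same protocol serves QRNG and arbitrary other APIs.

```typescript
// Toy in-memory model of the request–response protocol (RRP) pattern.
// The real protocol lives on-chain; names here are illustrative.

interface RrpRequest {
  id: number;
  endpoint: string;     // which API endpoint the requester wants
  fulfilled?: string;   // response, once the Airnode fulfills
}

const requests: RrpRequest[] = [];
let nextId = 0;

// Requester side: log a request and get back its id.
function makeRequest(endpoint: string): number {
  const id = nextId++;
  requests.push({ id, endpoint });
  return id;
}

// Airnode side: fulfill every pending request by calling the API,
// regardless of what the endpoint is.
function fulfillPending(api: (endpoint: string) => string): void {
  for (const req of requests) {
    if (req.fulfilled === undefined) req.fulfilled = api(req.endpoint);
  }
}

const id = makeRequest("qrng/random-number");
fulfillPending((e) => (e === "qrng/random-number" ? "42" : "?"));
console.log(requests.find((r) => r.id === id)?.fulfilled); // 42
```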
Self-funded feeds
The publish–subscribe protocol (PSP) was originally designed to allow API3 to signal API provider Airnodes to start updating a particular data feed. byog’s work proved that one doesn’t need the full RRP/PSP, but only the sponsorship scheme of the Airnode protocol. By using the sponsorship scheme and data signed by the Airnode, one can efficiently roll out a very large number of data feeds, and allow users to turn them on and off remotely by controlling the funding of the sponsor wallet that updates the respective data feed. This scheme remains trust-minimized, as it makes it impossible to censor data feed updates without collusion from the API provider.
Accordingly, we decided to postpone the PSP implementation (which is still valuable, as it is a generic protocol that can be used to build a wide variety of services) in favor of building the functionality byog uses for the above into Airnode. Following this, byog will have API providers set up their own self-funded feeds to act as trust-minimized fallback mechanisms. Note that this has implications beyond the added security: it allows arbitrary aggregated data feeds to be curated permissionlessly, which enables users to self-govern the data feeds they use.
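The on/off mechanism described above reduces to a simple invariant, sketched below with hypothetical names: each feed has a dedicated sponsor wallet that pays for its own update transactions, so funding the wallet turns the feed on and letting it drain turns the feed off. No party other than the API provider (who signs the data) can block updates while the wallet is funded.

```typescript
// Sketch of the self-funded feed model: updates proceed only while the
// feed's sponsor wallet can cover the transaction cost.
// Names and the flat gas cost are illustrative assumptions.

interface Feed {
  name: string;
  sponsorBalance: number; // funds in the sponsor wallet, in gas-cost units
  value?: number;         // latest signed value written on-chain
}

const UPDATE_GAS_COST = 1;

// An update is applied only if the sponsor wallet covers the gas; users
// remotely toggle the feed by funding or defunding this wallet.
function tryUpdate(feed: Feed, signedValue: number): boolean {
  if (feed.sponsorBalance < UPDATE_GAS_COST) return false;
  feed.sponsorBalance -= UPDATE_GAS_COST;
  feed.value = signedValue;
  return true;
}

const ethUsd: Feed = { name: "ETH/USD", sponsorBalance: 2 };
console.log(tryUpdate(ethUsd, 1800)); // true
console.log(tryUpdate(ethUsd, 1810)); // true
console.log(tryUpdate(ethUsd, 1820)); // false: wallet drained, feed is "off"
```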
Service coverage, Omnimarket
Two services that the core technical team has been working on had to take a back seat in favor of OEV. The first of these is service coverage, which was introduced in the whitepaper as a comprehensive security mechanism for data feeds. Considering that this will be a service offered to data feed users, we decided that it makes more sense to deliver it after the traction that OEV will provide. Accordingly, the dAPI team moved it to Phase 4 (after OEV), and the core technical team stopped working on it. That being said, we currently have no unknowns about the implementation, and delivery won’t be an issue once the opportunity arises.
Omnimarket is an unannounced project that addresses some of the issues I’ve raised in this article. It also complements the upcoming ChainAPI implementation. However, considering the large overlap between the people working on Omnimarket and the people who could have been working on OEV, we froze the development of Omnimarket. This is convenient because, to reach its full potential, Omnimarket depends on the delivery of a ChainAPI proposal that is yet to be made. While we’re working on OEV, we’ll guide ChainAPI toward designing the deliverables of their next proposal to address the needs of Omnimarket.
Conclusion
The core technical team is now very lean and depends on the other technical teams for execution. Accordingly, our job is now mostly coordination and facilitation, and these reports will increasingly touch on what the other teams are doing and try to frame it meaningfully. This shouldn’t be read as a claim of credit for the other teams’ work; each team should be credited for their contributions and supported accordingly through the governance functions.