We have been working on Airnode v0.3 through November, and you can expect it to be released in the coming days. We had two main objectives for this release:
- Allow the requester to specify an arbitrary single-level object to be returned (instead of a single point of data)
- Support Google Cloud Platform (GCP) for the serverless configuration
We managed to hit these objectives and a few more, which we’ll discuss in this post.
Say you want an oracle to call https://www.crypto-api.com/markets?from=ETH&to=USD and return the ETH-USD price along with the market's total volume. There are commonly two options:
1. Make an oracle request for the price and the volume separately
2. Implement a specialized, hard-coded adapter that returns the price and the volume fused together

(1) is bad because it's unwieldy and expensive. (2) is bad because the adapter has to be developed on a case-by-case basis. Our goal is to develop an oracle protocol that supports seamless and flexible API integrations with smart contracts, which includes protocolizing how the API response is processed. Allowing requesters to specify that a request return an encoded object instead of a single data point is the first step of these efforts.
Starting with v0.3, requesters can specify that they want Airnode to make a specific API call and use the returned JSON to build an arbitrary single-level object, which Airnode then returns to the chain to be decoded there. This is more flexible (the requester is not limited to existing adapters) and more scalable (no one needs to pre-build and deploy purpose-specific adapters) than existing methods, and should by itself serve the majority of potential oracle use cases. That said, we will continue extending the Airnode protocol for even greater flexibility in upcoming versions.
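To make the idea concrete, here is a minimal sketch of how two values from one API response could be packed into a single ABI-style payload that a contract decodes in one fulfillment. The helper names and the (int256 price, uint256 volume) layout are our illustrative assumptions, not Airnode's actual implementation:

```python
def encode_word(value: int) -> bytes:
    """Encode an integer as one 32-byte big-endian word
    (two's complement for negative values, as in Solidity ABI)."""
    return value.to_bytes(32, "big", signed=value < 0)

def encode_response(price: int, volume: int) -> bytes:
    # Static types (int256, uint256) are packed head-to-tail,
    # one 32-byte word each -- no dynamic offsets needed, so the
    # contract side could decode with abi.decode(data, (int256, uint256)).
    return encode_word(price) + encode_word(volume)

# One request, one payload, two decoded values on-chain.
payload = encode_response(4_200 * 10**18, 1_000_000)
assert len(payload) == 64
```

This is the shape of the win over option (1): the price and volume travel in one fulfillment transaction instead of two.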
We mentioned that we switched to a pure-Terraform configuration in v0.2, which would let us port Airnode to various cloud providers more easily, and in a more maintainable and secure way. We followed through on this by extending the serverless configuration's cloud provider options from the existing AWS to GCP.
Note that this is not necessarily a matter of either/or. The Airnode protocol is uniquely designed so that a single Airnode can be served by multiple independent deployments for optimal uptime. This means an Airnode operator can run serverless deployments on AWS and GCP simultaneously, and in fact, this is the recommended setup. In this way, supporting GCP (or rather, a second cloud provider in addition to AWS) is a critical step toward deploying unbreakable Airnodes. We plan to extend support to Azure and potentially other cloud providers in the future.
Stress tests and adjustments
In a first-party oracle solution, an API is served by a single oracle operated by the API provider. This means there is no node-level redundancy, resulting in maximal cost-efficiency. However, it also means we can't depend on node-level redundancy for availability and have to build a truly highly available node (even one that is not actively monitored and maintained by dedicated personnel). Recall that the Airnode protocol lets each requester specify a different wallet to fulfill their requests, which enables oracles with practically unlimited individual bandwidth. However, the node implementation must also be designed and built to support this, which is one of the reasons the recommended configuration is serverless.
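The per-requester wallet idea can be sketched as follows. The hash-based derivation below is a deliberate simplification for illustration (a real Airnode derives its wallets from a mnemonic via hierarchical-deterministic paths); the point it demonstrates is that each requester maps to its own deterministic wallet, so one requester's pending transactions never queue behind another's:

```python
import hashlib

def derive_fulfillment_wallet(master_seed: bytes, requester_id: str) -> str:
    """Illustrative stand-in for HD key derivation: deterministically
    map a requester to its own fulfillment wallet. Because each wallet
    has an independent transaction nonce sequence, per-requester
    bandwidth is independent too."""
    digest = hashlib.sha256(master_seed + requester_id.encode()).hexdigest()
    return "0x" + digest[:40]  # mock address, not a real key derivation

seed = b"airnode-master-seed"
wallet_a = derive_fulfillment_wallet(seed, "requester-A")
wallet_b = derive_fulfillment_wallet(seed, "requester-B")
assert wallet_a != wallet_b                                   # independent wallets
assert wallet_a == derive_fulfillment_wallet(seed, "requester-A")  # deterministic
```

Determinism matters here: the node never needs to store a wallet registry, since any requester's wallet can be re-derived on demand from the master secret.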
To fulfill our vision of scaling up to meet any and all API connectivity needs, Airnode must be accessible in a permissionless, programmatic way. This is a unique goal in the oracle space, and it poses a specific challenge: we can't depend on access restriction as a security measure. In other words, anyone will be able to spam an Airnode on-chain after programmatically buying access, so Airnode must be built such that this is inconsequential. The current serverless configuration gives us the tools to achieve this, yet it remains a lofty goal that needs specific attention.
While developing this version, we conducted stress tests in various environments to assess the limits of the current implementation. Based on our findings, we made adjustments to our serverless configuration that greatly improved the resiliency of the node to the degree that it’s already better than the available alternatives. However, we have a few further goals around this matter that we want to fulfill in the upcoming versions:
- Improve the node architecture so that it can scale limitlessly (in practice, up to the per-account limits imposed by the cloud provider)
- Allow the user to quantify the capacity they want to allocate to specific chains, and have the node guarantee that capacity
- Come up with an optimal configuration that we can recommend to serve use-cases that will secure an arbitrarily large amount of value
In addition to the above, we implemented an image that wraps the airnode-admin CLI, mainly to let API providers generate a mnemonic for their Airnodes. The features described above are demonstrated with example projects in the airnode-examples package, and our documentation has been extended to cover them. We recently planned the upcoming v0.4 according to the strategic needs of the DAO, and will start working on it soon.
In case you haven’t noticed, we updated our whitepaper to v1.0.2. This was a long time coming, as the described staking reward mechanism was outdated, and although v1.0.1 linked to a post that walked through the planned updates, some readers overlooked it and were confused. This update works the contents of that post into the whitepaper while removing outdated content such as the scheduled staking reward graph. What was particularly rewarding was realizing that we had already delivered many of the things the whitepaper said we would, and the respective statements needed to be rephrased to reflect that. Changing “we will do”s to “we did”s is the best kind of update a whitepaper can get, and we’re looking forward to doing more of this in the following months.