I've heard feedback from many that market making has gotten more competitive on Hyperliquid recently. This is great for end users, as it means liquidity is deeper and more robust.

Fairness is a core principle of Hyperliquid in every dimension. For market makers, this means the blockchain is designed with equal access in mind. However, "equal" does not mean "easy." As the protocol continues to scale to meet increasing demand, it becomes important to understand the intricacies of Hyperliquid's custom infrastructure, which powers the largest permissionless trading venue.

For a long time, the primary alpha was simply integrating Hyperliquid. The API is designed to abstract away most blockchain complexities for new users, allowing automated traders to port over strategies with ease. While the ease of onboarding has not changed, the importance of latency has increased. The most responsive strategies require the best client-side infrastructure.

The overarching principle guiding latency optimization is that Hyperliquid is a blockchain, not a CEX:

1. The fastest data comes from running a node.
- Action item: run against a reliable, static peer. For example, the Hyper Foundation offers a peer for non-validators to connect to.

2. The most complete data comes from running a node. Every transaction is streamed in real time, and the node offers various formats for ingestion.
- Note: run the node with output buffering disabled, and consume the streamed output as soon as it is flushed (see the stream-following sketch after this list).

3. Nodes execute the entire blockchain. This is no small feat of engineering, and it stands in stark contrast to CEXs, where client-side code only processes the user's state. Hyperliquid nodes execute and verify the entire blockchain state, including HyperCore and HyperEVM, on a single machine. While execution can keep up on moderate hardware specs, latency improvements scale with the number of cores because a significant portion of execution is parallelizable.
- Action item: the data exposed by the node is rich and gives much more insight into the full blockchain state. In addition to local API servers, a full L4 book can be built. See the example implementation (and the sketch after this list).
- Action item: users can see significant gains up to 32 cores, with diminishing returns beyond that.

4. Cancellations are designed to have a high success rate. The first-order effect here is built into HyperBFT's mempool prioritization of cancels. However, some users find further optimizations helpful. For example, an in-flight GTC order can be "canceled" while still in the mempool by invalidating the nonce used in the order (see the nonce-invalidation sketch after this list).
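To make point 2 concrete, here is a minimal sketch of following a node's streamed output. The path and file layout below are assumptions for illustration (the node writes line-delimited JSON somewhere under its data directory); point it at wherever your node actually writes, and keep upstream buffering disabled so lines arrive as soon as they are produced.

```python
"""Minimal sketch: follow a node output stream of newline-delimited JSON.

Assumption (not from the post): the node appends one JSON object per line to a
file in real time. Adjust STREAM_PATH to your node's actual output location.
"""
import json
import os
import time
from pathlib import Path

# Hypothetical path; check your node's flags/docs for the real file layout.
STREAM_PATH = Path.home() / "hl" / "data" / "node_order_statuses" / "current"


def follow(path: Path):
    """Yield parsed JSON objects as complete lines are appended to the file."""
    buf = ""
    with path.open("r") as f:
        f.seek(0, os.SEEK_END)  # start at the end; only consume new data
        while True:
            chunk = f.readline()
            if not chunk:
                time.sleep(0.001)  # cheap poll; the writer is unbuffered
                continue
            buf += chunk
            if not buf.endswith("\n"):
                continue  # partial line; wait for the rest
            yield json.loads(buf)
            buf = ""


if __name__ == "__main__":
    for event in follow(STREAM_PATH):
        # React to each event as soon as the node flushes it.
        print(event)
```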
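For point 3, a full L4 book means tracking every resting order individually rather than aggregated price levels. The sketch below shows the core data structure; the event schema ("open", "fill", "canceled" with oid/side/px/sz fields) is an assumption for illustration and should be mapped to the actual order-status format your node emits.

```python
"""Minimal L4 book sketch built from per-order events streamed by a node.

The event field names below are assumed for illustration, not taken from the
node's actual schema.
"""
from collections import OrderedDict
from dataclasses import dataclass


@dataclass
class Order:
    oid: int
    side: str   # "B" for bid, "A" for ask
    px: float
    sz: float


class L4Book:
    """Tracks every resting order individually, preserving price-time priority."""

    def __init__(self) -> None:
        self.orders: dict[int, Order] = {}
        # side -> price -> OrderedDict[oid, Order]; insertion order = time priority
        self.levels: dict[str, dict[float, OrderedDict]] = {"B": {}, "A": {}}

    def on_event(self, ev: dict) -> None:
        status = ev["status"]
        if status == "open":
            o = Order(ev["oid"], ev["side"], float(ev["px"]), float(ev["sz"]))
            self.orders[o.oid] = o
            self.levels[o.side].setdefault(o.px, OrderedDict())[o.oid] = o
        elif status == "fill":
            o = self.orders.get(ev["oid"])
            if o is not None:
                o.sz -= float(ev["sz"])
                if o.sz <= 0:
                    self._remove(o)
        elif status in ("canceled", "rejected", "expired"):
            o = self.orders.get(ev["oid"])
            if o is not None:
                self._remove(o)

    def _remove(self, o: Order) -> None:
        self.orders.pop(o.oid, None)
        level = self.levels[o.side].get(o.px)
        if level is not None:
            level.pop(o.oid, None)
            if not level:
                del self.levels[o.side][o.px]

    def best_bid(self) -> float | None:
        return max(self.levels["B"]) if self.levels["B"] else None

    def best_ask(self) -> float | None:
        return min(self.levels["A"]) if self.levels["A"] else None
```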
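For point 4, the nonce-invalidation trick works because a nonce cannot be reused: if a second transaction carrying the same nonce is included first, the original order is rejected when it arrives. The sketch below is conceptual; `sign_and_send` is a hypothetical stand-in for however you sign and submit actions, and the "noop" payload is a placeholder for any cheap action — the only essential detail is reusing the same nonce.

```python
"""Sketch of the nonce-invalidation trick for an in-flight GTC order.

`sign_and_send` and the "noop" payload are hypothetical placeholders; the key
idea is that both submissions reuse the SAME nonce, so whichever reaches
consensus first wins and the other is dropped as a duplicate.
"""
import time


def sign_and_send(action: dict, nonce: int) -> None:
    """Hypothetical stand-in: sign `action` with `nonce` and submit it."""
    print(f"would sign and submit {action} with nonce {nonce}")


def place_then_maybe_invalidate(order_action: dict, should_abort) -> None:
    nonce = int(time.time() * 1000)  # millisecond nonce, chosen once and reused

    # 1. Send the GTC order with an explicit nonce we remember.
    sign_and_send(order_action, nonce)

    # 2. If we change our mind while the order may still be in the mempool,
    #    burn the same nonce with a harmless action. If that action is included
    #    first, the original order arrives with a duplicate nonce and is
    #    rejected, i.e. effectively canceled before it ever rests on the book.
    if should_abort():
        noop_action = {"type": "noop"}  # placeholder for any cheap action
        sign_and_send(noop_action, nonce)
```

Note that this is a race between the two submissions; the built-in mempool prioritization of ordinary cancels remains the first-order tool, with nonce invalidation as an extra lever for orders still in flight.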
See for further details