0xbranded - Fee Precision Loss Disrupts Liquidations and Causes Loss of Funds #66
Comments
Escalate

This is a system design choice. It just charges lower fees; there is no loss of funds happening to the protocol. Also, this can be modified by updating parameters.
You've created a valid escalation! To remove the escalation from consideration: Delete your comment. You may delete or edit your escalation comment anytime before the 48-hour escalation window closes. After that, the escalation becomes final.
It's not a design choice; this is clearly a precision loss issue. There is real loss of funds, and delayed liquidations consistently occur given real-world parameters such as blocktime. These are clearly documented in the PoC, with specific errors consistently exceeding 10% and sometimes even total loss of fee payment. DENOM is a constant that cannot be updated; it must be corrected by increasing the decimals of precision. Given the high likelihood and high impact of loss of funds, not only from fee miscalculation but from delayed/avoided liquidations, this is a high severity issue.
@msheikhattari - The logic only modifies the fee ratio by rounding it down. If the ratio doesn't meet what the protocol team is looking for, they can simply increase the numerator by updating parameters.
No parameters can be updated to fix this issue; DENOM itself must be changed, and it is a constant. It currently supports annual fees only in increments of about 1.5%, which is far too much precision loss, with errors consistently exceeding 10% in all fee calculations. The numerator is blocks * rate; the point is that the rate component cannot support fee precision below 1.5% because the DENOM parameter is too small. I included a PoC which clearly demonstrates this currently unavoidable precision loss.
I read your PoC. The root cause in this report is the `utilization` function below, which is mentioned in this report; the borrowing_paid calculation doesn't have an effect:

```vyper
def utilization(reserves: uint256, interest: uint256) -> uint256:
    return 0 if (reserves == 0 or interest == 0) else (interest / (reserves / 100))
```
Though this may be considered a design decision, the calculation of fees still has precision loss, which would pay out lower fees due to rounding down. Hence, this should have been included in the known issues question in the protocol's README, but it wasn't. Therefore, this should remain a valid issue. Planning to reject the escalation.
Given that significant disruptions to liquidations and loss of funds result, this issue should be considered for high severity.
Agree with the above. As I understand, there are no extensive limitations, and the loss can be significant. Planning to reject the escalation but make the issue high severity.
The root cause in this issue and #72 is the same.
@WangSecurity - Even though …
@KupiaSecAdmin I agree with you; this issue talks about two parts. The first one has precision loss, but the second one doesn't.
@rickkk137 - Regarding the first one, isn't this the same issue as #87?
@KupiaSecAdmin No, I don't think so. IMO #87 is not possible; I wrote my comment there.
Please refer to the PoC; the issue is currently unmitigatable due to the precision of all fees that result from the `DENOM` parameter.
I believe @rickkk137 is correct that this and #72 have the same root cause and explain the same problem. The escalation will still be rejected, and #72 + #60 (duplicate) will be duplicated with this report because it goes more in-depth in explaining the issue and shows a higher severity.
#72 is describing a different issue. It doesn't mention the DENOM parameter at all and should not be duped with this one. That issue is talking about setting min values for …
Yes, excuse me, I focused too much on the utilisation part.
Good question, but #72 and #60 identified only one source of precision loss, so the following should apply:

That's the reason I think they should be treated separately. The decision remains: reject the escalation, increase severity to high.
The watson passed util2 to the denom.scale2 function in his PoC, and that made a huge difference; the utilization2 function is a custom function written by the watson.
Yes, these combined sources of precision loss were demonstrated to have a far more problematic impact, as shown in the PoC.
But, as I understand from @rickkk137, his main point is that without the precision loss inside the utilisation, the loss would be small and not sufficient for medium severity. Is that correct, @rickkk137?
Yes, correct.
The precision loss resulting from DENOM is more significant than that from the utilization. The PoC and the scenario described in the issue provide exact details on the loss from each. When combined, not only is the precision loss more severe but it is also more likely to occur.
But as I see in #126, the precision loss from DENOM is not that severe. Is that wrong?
That issue is slightly different, but what was pointed out here is that the most granular annual fee representable is about 1.6%; these are the intervals for fee rates as well (e.g. 2 x 1.6%, 3 x 1.6%, ...). Utilization, on the other hand, experiences precision loss of up to 1 percentage point in the extreme case (e.g. 14.9% -> 14%). So in absolute terms the issue arising from DENOM is more significant. When combined, these issues become far more significant than implied by their nominal values, not only due to the multiplied loss of precision but also the increased likelihood of loss (if precision loss from one source bumps a value just over the boundary of another, as outlined in the PoC).
Yeah, I see. #126 tries to show a scenario where DENOM precision loss would round the fees down to 0, and for that to happen, the fees or collateral have to be very small, which results in a very small loss. But this issue shows the precision loss from DENOM itself and doesn't try to show rounding down to 0. That's the key difference between the two reports. Hence, my decision remains that this will remain solo with high severity, as expressed above. Planning to reject the escalation. The decision will be applied tomorrow at 10 am UTC.
|
Result: |
Escalations have been resolved successfully! Escalation status:
The protocol team fixed this issue in the following PRs/commits: |
0xbranded
High
Fee Precision Loss Disrupts Liquidations and Causes Loss of Funds
Summary
Borrowing and funding fees of both longs and shorts suffer from two distinct sources of precision loss. The precision loss is large enough to occur consistently at a significant level, and can even result in total omission of fee payment for periods of time. This error is especially disruptive given the sensitive nature of funding fee calculations, both in determining liquidations (a core functionality) and in the payments received by LPs and funding recipients (representing a significant loss).
Vulnerability Detail
The first of the aforementioned sources of precision loss relates to the `DENOM` parameter defined and used in `apply` of `math.vy` (mirrored in the sketch below). This function is in turn referenced (to extract the `fee` parameter in particular) in several locations throughout `fees.vy`, namely in determining the funding and borrowing payments made by positions open for a duration of time.

The comments for `DENOM` specify that it's a "magic number which depends on the smallest fee one wants to support and the blocktime." In fact, given its current value of $10^9$, the smallest representable fee per block is $10^{-9}$, or $10^{-7}\%$. Given the average blocktime of 2.0 sec on the BOB chain, there are 15_778_800 blocks in a standard calendar year. Combined with the fee per block, this translates to an annual fee of 1.578%.

However, this is also the interval size for annualized fees under the current system. As a result, any fee falling below the next interval will be rounded down. For example, given an annualized funding rate in the neighborhood of 15%, there is potential for a nearly 10% error in the interest rate if rounding occurs just before the next interval. This error is magnified the smaller the funding rates become: an annual fee of 3.1% would round down to 1.578%, representing an error of nearly 50%, and any annualized fee below 1.578% will not be recorded at all, representing a 100% error.
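To make the granularity concrete, here is a minimal Python sketch of the fee arithmetic described above. It mirrors the floor division that `apply` performs with `DENOM`; the `apply` body is an illustrative reconstruction of the `math.vy` logic rather than the verbatim code, and `BLOCKS_PER_YEAR` is derived from the 2.0 s BOB blocktime figure quoted above:

```python
# Illustrative reconstruction of the fee arithmetic in math.vy (not verbatim).
DENOM = 1_000_000_000           # magic number from math.vy
BLOCKS_PER_YEAR = 15_778_800    # 2.0 s blocktime on BOB

def apply(x: int, numerator: int) -> int:
    """Fee charged on amount x, mirroring Vyper's floor division on uint256."""
    return (x * numerator) // DENOM

# Smallest representable per-block fee (numerator = 1), annualized:
step = BLOCKS_PER_YEAR / DENOM
print(f"annual fee granularity: {step:.3%}")        # ~1.578%

# A target annual fee of 3.1% maps to a per-block numerator of:
target = 0.031
numerator = int(target * DENOM / BLOCKS_PER_YEAR)   # floors 1.96 down to 1
annual = numerator * BLOCKS_PER_YEAR / DENOM
print(f"charged: {annual:.3%}")                     # ~1.578%, a ~49% error
```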
The second source of precision loss combines with the aforementioned error to increase both the severity and frequency of error. It relates to how percentages are handled in `params.vy`, particularly when the long/short utilization is calculated to determine funding & borrow rates (the `utilization` function is mirrored in the sketch below). This function is in turn used to calculate borrowing rates (and funding rates, following a slightly different approach that similarly combines the use of `utilization` and `scale`), in [dynamic_fees](https://github.com/sherlock-audit/2024-08-velar-artha/blob/main/gl-sherlock/contracts/params.vy#L33-L55) of `params.vy`.

Note that `interest` and `reserves` maintain the same precision. Therefore, the output of `utilization` will have just 2 digits of precision, resulting from the division of `reserves` by `100`. This approach can similarly lead to fee rates losing a full percentage point in their absolute value. Since the utilization is used by `dynamic_fees` to calculate the funding / borrow rates, the error is greatly amplified when combined with the formerly described source of precision loss.

Consider a scenario where the long open interest is 199_999 * $10^{18}$ and the reserves are 10_000_000 * $10^{18}$. Under the current `utilization` functionality, the result would be a 1.9999% utilization rounded down to 1%. Further assuming a value of `max_fee = 65` (this represents a 100% annual rate and a 0.19% 8-hour rate), the long borrow rate would round down to 0%. Had the 1.9999% utilization not been rounded down to 1%, the result would have been `r = 1.3`. In turn, the precision loss in `DENOM` would have effectively rounded down to `r = 1`, resulting in a 2.051% borrow rate rounded down to 1.578%.

In other words, the precision loss in `DENOM` alone would have resulted in a 23% error in this case. But when combined with the precision loss in percentage points represented in `utilization`, a 100% error resulted. While the utilization and resulting interest rates will typically not be low enough to produce such drastic errors, this hopefully illustrates the synergistic combined impact of both sources of precision loss. Even at higher, more representative values for these rates (such as `r = 10`), errors in fee calculations exceeding 10% will consistently occur, as the sketch below demonstrates.
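The following Python sketch reproduces this combined effect. The `utilization` body mirrors the Vyper function quoted earlier in the thread; the `scale` helper and the annualization step are illustrative assumptions based on the rates described above, not verbatim `params.vy` code:

```python
# Combined precision loss: utilization flooring, then DENOM flooring.
DENOM = 1_000_000_000
BLOCKS_PER_YEAR = 15_778_800    # 2.0 s blocktime on BOB

def utilization(reserves: int, interest: int) -> int:
    # Mirrors the Vyper function quoted above: floors to whole percent.
    return 0 if (reserves == 0 or interest == 0) else interest // (reserves // 100)

def scale(max_fee: int, util_pct: int) -> int:
    # Illustrative assumption of the rate scaling: r = max_fee * util / 100.
    return (max_fee * util_pct) // 100

interest = 199_999 * 10**18
reserves = 10_000_000 * 10**18
max_fee = 65                              # ~100% annual rate at full utilization

util = utilization(reserves, interest)    # 1, though the true value is 1.9999
r = scale(max_fee, util)                  # 0: the borrow rate rounds to zero

annual = r * BLOCKS_PER_YEAR / DENOM      # 0.0%, a 100% error
r_denom_only = int(max_fee * 1.9999 / 100)                  # 1.3 floored to 1
annual_denom_only = r_denom_only * BLOCKS_PER_YEAR / DENOM  # ~1.578% vs ~2.051%
print(util, r, annual, annual_denom_only)
```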
Impact

All fees in the system will consistently be underpaid by a significant margin, across all pools and positions. Additionally, trust and confidence in the system will be eroded, as fee application will be unpredictable, with sharp discontinuities in rates even given moderate changes in pool utilization. Finally, positions will be subsidized at the expense of LPs, since the underpayment of fees makes liquidations less likely and slower to occur. As a result, LPs and funding recipients will have less incentive to provide liquidity, as they are consistently underpaid while taking on greater counterparty risk.
As an example, consider the scenario where the long open interest is 1_099_999 * $10^{18}$ and the reserves are 10_000_000 * $10^{18}$. Under the current `utilization` functionality, the result would be a 10.9999% utilization rounded down to 10%. Assuming `max_fee = 65` (100% annually, 0.19% 8-hour), the long borrow rate would be `r = 6.5` rounded down to `r = 6`. A 9.5% annual rate results, whereas the accurate result, had neither precision loss occurred, is `r = 7.15`, or 11.3% annually. The resulting error in the borrow rate is 16%.

Assuming a long collateral of 100_000 * $10^{18}$, LPs would earn 9_500 * $10^{18}$ when they should earn 11_300 * $10^{18}$, a shortfall of 1_800 * $10^{18}$ from longs alone. Additional borrow fee shortfalls would occur for shorts, in addition to shortfalls in funding payments received.

Liquidation from borrow rates alone should have taken 106 months based on the expected rate of 11.3% per annum. However, under the 9.5% annual rate it would take 127 months to liquidate a position. This represents a 20% delay in liquidation time from borrow rates alone, not including the further delay caused by potentially underpaid funding rates. The arithmetic is sketched below.
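A Python sketch of the shortfall and liquidation-delay figures above; all inputs are the illustrative values from this report, and the annualization constants are the same assumptions as in the earlier sketches:

```python
# Worked arithmetic for the example above; values are illustrative.
DENOM = 1_000_000_000
BLOCKS_PER_YEAR = 15_778_800

collateral = 100_000 * 10**18

r_floored, r_exact = 6, 7.15                          # per-block fee numerators
rate_floored = r_floored * BLOCKS_PER_YEAR / DENOM    # ~0.095 -> 9.5% annually
rate_exact = r_exact * BLOCKS_PER_YEAR / DENOM        # ~0.113 -> 11.3% annually

paid = collateral * rate_floored    # ~9_500 * 10**18 paid to LPs
owed = collateral * rate_exact      # ~11_300 * 10**18 actually owed
shortfall = owed - paid             # ~1_800 * 10**18 from longs alone

# Months until borrow fees alone consume the collateral:
months_exact = 12 / rate_exact      # ~106 months
months_floored = 12 / rate_floored  # ~127 months, a ~20% delay
print(shortfall / 10**18, months_exact, months_floored)
```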
When PnL is further taken into account, these delays could mark the difference between a position being wiped out by a period of volatility and surviving it. As a result, losing positions could run for far longer than they otherwise would, and could even turn into winners. Not only are LP funds locked for longer as a result, but they are also at greater risk of losing capital to their counterparty. On top of this, LPs are not paid their rightful share of profits, losing out on funds while taking on unfavorably elevated risk.
Thus, not only do consistent, material losses (significant fee underpayment) occur, but a critical, core functionality malfunctions (liquidations are delayed).
Code Snippet
In the included PoC, three distinct tests demonstrate the individual sources of precision loss, as well as their combined effect. Scenarios similar to those discussed above were demonstrated, for example interest = 199_999 * $10^{18}$ with reserves = 10_000_000 * $10^{18}$ and a max fee of 65.
The smart contracts were stripped to isolate the relevant logic, and foundry was used for testing. To run the test, clone the repo, place `Denom.vy` in vyper_contracts, and place `Denom.t.sol`, `Cheat.sol`, and `IDenom.sol` under src/test.

Denom.vy

Denom.t.sol

Cheat.sol

```solidity
interface CheatCodes {
    // This allows us to getRecordedLogs()
    struct Log {
        bytes32[] topics;
        bytes data;
    }
}
```
Tool used
Manual Review
Recommendation
Consider increasing the precision of `DENOM` by at least 3 digits, i.e. `DENOM: constant(uint256) = 1_000_000_000_000` instead of `1_000_000_000`. Consider increasing the precision of percentages by 3 digits as well, i.e. dividing / multiplying by 100_000 instead of 100.

Each added digit of precision decreases the precision loss by an order of magnitude. In other words, with three extra digits of precision, the 1% and 1.5% absolute errors described above would shrink to 0.001% and 0.0015%.
Consult `Denom.vy` for further guidance on the adjustments needed to the various functions to account for these updated values.