Bitfinex API limitations


#1

Bitfinex API rate limits break even the smallest universe

In backtest mode there’s no problem, but as soon as you create a strategy from the universe example and try to run it live, it fails. I spent hours trying to figure out a way around it, and the best I’ve come up with is to restrict the universe to run in groups, 2 tickers per minute. That means it takes about 18 minutes to process the USD pairs, which renders it pretty much useless for indicators on timeframes of less than 30 minutes… cue problem 2

Catalyst/ccxt/Bitfinex will not return data for 1hr, 2hr, 4hr, etc.: https://github.com/enigmampc/catalyst/issues/273

How the hell can we get hold of minute data for all Bitfinex pairs?

  • Use a custom CSV file?… looks like backtest only
  • Set something up on the Data Marketplace… not live, I just tried
  • Proxy all Bitfinex API requests for candle data to a service/script which takes care of spreading the 60+ API requests per minute required across different hosts…? (a rough sketch of this one is below the list)
  • Proxy all Bitfinex API requests for candle data to a service/script which has a server-side implementation of the Bitfinex v2 websockets API for all coins
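For the third idea, something along these lines is roughly what I have in mind (a very rough sketch; the proxy hosts are hypothetical placeholders and the v2 candles endpoint/response layout is just my reading of the Bitfinex docs):

import itertools
import requests

# Hypothetical pool of HTTP proxies used to spread candle requests across
# different hosts; None means go direct. The addresses are placeholders.
PROXY_POOL = itertools.cycle([
    None,
    {'https': 'http://proxy-1.example.com:3128'},
    {'https': 'http://proxy-2.example.com:3128'},
])

def fetch_candles(pair, timeframe='1m', limit=280):
    # Bitfinex v2 public candles endpoint; each row comes back as
    # [mts, open, close, high, low, volume]
    url = 'https://api.bitfinex.com/v2/candles/trade:{}:t{}/hist'.format(timeframe, pair)
    resp = requests.get(url, params={'limit': limit}, proxies=next(PROXY_POOL))
    resp.raise_for_status()
    return resp.json()

# e.g. fetch_candles('BTCUSD') pulls the last 280 one-minute candles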

Any other ideas?

I’ve spent months working on strategies and all I come up against is problems, and I still have nothing running live or paper trading.


#2

Hi @simsurf, when it comes to Bitfinex’s API it is kind of tough, due to the low rate limit. We will try to lower the number of requests on our side (perhaps by enabling a request of all fields at once in get_history) in the next couple of days, but I’m not sure this will completely solve the issue.
From what I know, other exchanges have much higher rate limits, so if it is not critical for you to stay on Bitfinex, switching for now might be worth considering…


#3

That’s pretty much what I’ve done to allow me to process 2 tickers per minute (see code below). In your example demos you make separate calls for each of the ‘high’, ‘low’, etc. data points, although I’m not sure whether that actually results in separate calls to the Bitfinex API or whether you handle it by reusing the data initially fetched from the API.

If the call below is more efficient, maybe you should add it to some of the examples, or to the wiki at least. Ultimately, what should happen is that the first history call fetches all the data it needs to fulfil that request, and then subsequent history calls reuse that data whenever the previous call contains data that can be grouped up to serve the new request.

I’ve hacked the Catalyst code base for now to allow me to use hourly data in Bitfinex live trading. Not ideal, but at least I can run 1hr or 3hr indicators live. I’m hoping the Data Marketplace comes online sharpish so I can hook up a data source with better exchange data.

thedata = data.history(coin,
                       ['open', 'high', 'low', 'close', 'volume'],
                       bar_count=280,
                       frequency='180T')  # 3h on live, or T everywhere else
thedata['close'].values
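The batching around that call looks roughly like this (just a sketch; the pair list and batch size are placeholders rather than my actual strategy):

from catalyst.api import symbol

BATCH_SIZE = 2  # tickers queried per minute bar, to stay under the rate limit

def initialize(context):
    # placeholder universe; the real one is built from all the USD pairs
    context.coins = [symbol(p) for p in ('btc_usd', 'eth_usd', 'ltc_usd')]
    context.batch_index = 0

def handle_data(context, data):
    # rotate through the universe, BATCH_SIZE tickers per call
    start = context.batch_index
    batch = context.coins[start:start + BATCH_SIZE]
    context.batch_index = (start + BATCH_SIZE) % len(context.coins)

    for coin in batch:
        thedata = data.history(coin,
                               ['open', 'high', 'low', 'close', 'volume'],
                               bar_count=280,
                               frequency='180T')
        # ...indicators on thedata go here...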

#4

If I understood you correctly, that is a change that needs to be done… when you call get_history with multiple fields, the current code still sends a request for each of the fields individually, but hopefully we will be able to modify it to send one request as you are proposing, so you will receive a bit more data in every iteration.
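Purely as an illustration of the difference (this is not the actual Catalyst internals, and fetch_candles here is a made-up helper that returns one OHLCV frame per exchange request):

def get_history_per_field(fetch_candles, asset, fields, bar_count, freq):
    # roughly what happens today: one exchange request per requested field,
    # even though each candles response already carries every OHLCV column
    return {f: fetch_candles(asset, bar_count, freq)[f] for f in fields}

def get_history_combined(fetch_candles, asset, fields, bar_count, freq):
    # the proposed behaviour: a single candles request serves all the fields
    candles = fetch_candles(asset, bar_count, freq)
    return {f: candles[f] for f in fields}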


#5

‘open’, ‘high’, ‘low’, ‘close’ and ‘volume’ all come from Bitfinex in a single API call, so I hope you don’t mean that behind the scenes you currently do an API call for each of those fields! Specifying all those fields in a single call seems to work, and maybe you should add code showing people that’s the more efficient way to make the call if you need all the data points (‘open’, ‘high’, ‘low’, ‘close’, ‘volume’).

It’s the frequency that might need additional coding to be more efficient (possibly, as I’m not 100% sure what API calls are made behind the scenes in Catalyst). I think it should work like this: if you call data.history(…frequency=‘15T’…) in the first instance and subsequently call data.history(…frequency=‘240T’…), then regardless of the fields you request, that second history call should just group up the data points from the initial call, as long as the data fits. In other words the first call needs to have enough data: 1M to 1D wouldn’t fit and would need an additional API call, whereas 15T to 60T would be OK, etc…
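Something like the pandas resample below is roughly the grouping I mean (just a sketch; the cached DataFrame and the helper name are my own illustration, not Catalyst code):

def regroup_ohlcv(cached_15t, target_freq='240T'):
    # cached_15t: pandas DataFrame with open/high/low/close/volume columns
    # and a datetime index, as returned by the earlier 15T history call
    return cached_15t.resample(target_freq).agg({
        'open': 'first',   # open of the first 15T bar in each window
        'high': 'max',
        'low': 'min',
        'close': 'last',   # close of the last 15T bar in each window
        'volume': 'sum',
    }).dropna()

# e.g. four_hour = regroup_ohlcv(fifteen_minute_frame, '240T')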