Setting the rate limit in Ruby on Rails


We continue the journey into API security started in the previous article, with a discussion of how to implement a rate limit in Ruby on Rails.

The purpose of the rate limit is to prevent a given IP address from performing too many API requests in a given unit of time. So we are not talking about brute-force attacks, which can be fought with tools such as Fail2ban, but about lawful requests that nevertheless exceed the maximum set by our policy.

When implementing a rate limit system, you must first consider that the check will be performed on every call, and each call will certainly result in a read and, if the call is accepted, also in a write: so you need a database with high performance in both directions of I/O. On the other hand, this is "disposable" data, so persistence is not necessary.

In this case we choose the excellent Redis, where we will store the IP address of the API caller and the number of its requests using the built-in INCR command; furthermore, to tie the counter to the unit of time, we will use the built-in EXPIRE command, minimizing the number of operations carried out by Ruby on Rails.
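As a minimal sketch of that pattern (an illustration only, assuming the redis gem and a Redis server on localhost; the key name and window length are made up), the whole bookkeeping comes down to an INCR on a per-IP key plus an EXPIRE that opens the time window:

# Minimal sketch of the Redis counting pattern (illustrative values).
require "redis"

redis = Redis.new

key = "count:203.0.113.10"                 # one counter per client IP
count = redis.incr(key)                    # INCR creates the key if missing and returns the new value
redis.expire(key, 3600) if count == 1      # start the time window on the first hit

puts "requests in the current window: #{count}"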

The simplest technique to implement the rate limit is to write an ad hoc method in the application controller and attach it with before_filter so that it runs on every call. This technique, however, does not allow you to edit the headers of a valid response to inform the client about the number of requests remaining, so you would have to add a second method attached with after_filter, and the whole thing would become too cumbersome.
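For comparison, that controller-filter variant would look roughly like this (a sketch only, not the approach adopted here; it reuses the REDIS and Settings objects introduced with the middleware below):

# app/controllers/application_controller.rb
# Rough sketch of the before_filter variant described above.
class ApplicationController < ActionController::Base
  before_filter :check_rate_limit

  private

  def check_rate_limit
    key = "count:#{request.remote_ip}"
    count = REDIS.get(key).to_i

    if count >= Settings.throttle_max_requests
      # render inside a before_filter halts the chain with a 429 response
      render json: { message: "Too many requests" }, status: 429
    else
      REDIS.incr(key)
      REDIS.expire(key, Settings.throttle_time_window) if count.zero?
    end
  end
end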

Instead, we provide an implementation that takes its inspiration from this article and is based on an alternative approach: Rack middleware.

# app/middleware/rate_limit.rb

class RateLimit
  def initialize(app)
    @app = app
  end

  def call(env)
    # One Redis counter per client IP.
    client_ip = env["action_dispatch.remote_ip"]
    key = "count:#{client_ip}"
    count = REDIS.get(key)

    # First request in the window: create the counter and start the TTL.
    unless count
      REDIS.set(key, 0)
      REDIS.expire(key, Settings.throttle_time_window)
    end

    if count.to_i >= Settings.throttle_max_requests
      # Limit reached: reject the call with 429 Too Many Requests.
      [
        429,
        rate_limit_headers(count, key).merge("Content-Type" => "application/json"),
        [message]
      ]
    else
      # Accepted call: increment the counter, pass the request on and
      # add the rate-limit headers to the application's response.
      REDIS.incr(key)
      status, headers, body = @app.call(env)
      [
        status,
        headers.merge(rate_limit_headers(count.to_i + 1, key)),
        body
      ]
    end
  end

  private

  def message
    {
      :message => "You have fired too many requests. Please wait for some time."
    }.to_json
  end

  def rate_limit_headers(count, key)
    # The key's TTL tells the client when the current window resets.
    ttl = REDIS.ttl(key)
    time = Time.now.to_i
    time_till_reset = (time + ttl.to_i).to_s
    {
      "X-Rate-Limit-Limit" => Settings.throttle_max_requests.to_s,
      "X-Rate-Limit-Remaining" => (Settings.throttle_max_requests - count.to_i).to_s,
      "X-Rate-Limit-Reset" => time_till_reset
    }
  end
end
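The middleware relies on a REDIS client and on a Settings object exposing throttle_time_window and throttle_max_requests, neither of which is shown here; a minimal initializer along these lines would provide them (connection details and limits are assumptions):

# config/initializers/rate_limit.rb (hypothetical; the original setup likely
# uses a settings gem, but a plain OpenStruct is enough for the middleware).
require "redis"
require "ostruct"

REDIS = Redis.new(host: "127.0.0.1", port: 6379)

Settings = OpenStruct.new(
  throttle_time_window: 3600,   # length of the window, in seconds
  throttle_max_requests: 100    # requests allowed per window
)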

You also need to edit config/application.rb to register the middleware:

# config/application.rb

class Application < Rails::Application
  ...
  config.middleware.use "RateLimit"
end
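With the middleware registered, a quick sanity check is possible without a browser by driving it through Rack::MockRequest (a sketch, reusing the hypothetical REDIS and Settings objects defined above):

# Sketch: exercise the middleware in isolation (e.g. from the Rails console).
require "rack/mock"

app = RateLimit.new(->(env) { [200, { "Content-Type" => "text/plain" }, ["ok"]] })
client = Rack::MockRequest.new(app)

response = client.get("/", "action_dispatch.remote_ip" => "203.0.113.10")
puts response.status                                # 200 until the limit is reached, then 429
puts response.headers["X-Rate-Limit-Remaining"]     # decreases on each accepted request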

Compared to the version proposed in the original article, we changed the environment variable holding the client IP address from "REMOTE_ADDR" to "action_dispatch.remote_ip". The reason is that in my setup Ruby on Rails is served by the Puma web server (on JRuby) behind Nginx: with this configuration, REMOTE_ADDR is populated with the Nginx IP (127.0.0.1, in my case) rather than the actual IP address of the client originating the call.
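A quick way to see the difference is to expose both values from a throwaway controller (purely illustrative; the controller and action are hypothetical):

# app/controllers/debug_controller.rb (hypothetical, for inspection only)
class DebugController < ApplicationController
  def ip
    render json: {
      remote_addr: request.env["REMOTE_ADDR"],                                   # the proxy's address, e.g. 127.0.0.1
      action_dispatch_remote_ip: request.env["action_dispatch.remote_ip"].to_s   # the real client IP
    }
  end
end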
