Using multiple databases in Laravel for all queries

As we continue to grow our hosted Invoice Ninja platform we’ve been looking into both vertical scaling (bigger servers) and horizontal scaling (more servers).

Initially we added more web servers behind a load balancer. This was relatively easy to set up and, as an added bonus, provided higher availability. Over time, however, we’ve started to see our single database server experience increased memory usage, particularly after releasing our mobile apps.

Our plan was to simply divide our users across multiple databases. One database would function as the lookup server, while the other databases would store the actual data.

Searching Google for ‘multiple databases in Laravel’ mainly returns examples that either specify the connection for a particular query:


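For reference, the per-query form typically looks like this (a sketch; the connection name `db2` is an illustrative entry you would define in `config/database.php`):

```php
use Illuminate\Support\Facades\DB;

// Run a single query against a specific connection
// defined in config/database.php ('db2' is illustrative).
$users = DB::connection('db2')->table('users')->get();

// The Eloquent equivalent uses on():
$user = App\User::on('db2')->where('email', $email)->first();
```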
Or defining it in the model by setting:

protected $connection = ...;
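In full, a model pinned to one connection might look like this (a sketch; the model and connection names are illustrative):

```php
use Illuminate\Database\Eloquent\Model;

class Invoice extends Model
{
    // Every query through this model uses the 'db2' connection
    // defined in config/database.php.
    protected $connection = 'db2';
}
```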

However, if you’d like to use a different connection for all queries you can use:

config(['database.default' => ...]);

For our implementation we created a middleware class to find the database server for the provided credentials. The server is stored in the cache so the lookup is only required once every two hours; this is particularly helpful for the API, where there’s no session.
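A simplified sketch of such a middleware (the class name, cache key, and `lookup` connection name are our illustrations, not the production code):

```php
use Closure;
use Illuminate\Support\Facades\Cache;
use Illuminate\Support\Facades\DB;

class SetDatabaseConnection
{
    public function handle($request, Closure $next)
    {
        $email = $request->user()->email;

        // Cache the result so the lookup database is only hit once
        // every two hours per user (120 minutes in Laravel 5.x;
        // newer versions of Cache::remember() expect seconds).
        $server = Cache::remember('db_server:' . $email, 120, function () use ($email) {
            return DB::connection('lookup')
                ->table('lookup_users')
                ->where('email', $email)
                ->value('server');
        });

        // Route every subsequent query in this request
        // to the user's database.
        config(['database.default' => $server]);

        return $next($request);
    }
}
```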

To keep the lookup tables up to date we’re using Eloquent’s creating and deleted model events. Finally, to ensure our queued jobs use the right database we’re using Queue::before(...) to set it.
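The two pieces above can be sketched as follows (assumptions: the `lookup` connection and `lookup_users` table names as described earlier, and a hypothetical `db_server` key carried in the job payload):

```php
use App\User;
use Illuminate\Queue\Events\JobProcessing;
use Illuminate\Support\Facades\DB;
use Illuminate\Support\Facades\Queue;

// Keep the lookup table in sync with user records.
User::creating(function ($user) {
    DB::connection('lookup')->table('lookup_users')->insert([
        'email'  => $user->email,
        'server' => config('database.default'),
    ]);
});

User::deleted(function ($user) {
    DB::connection('lookup')->table('lookup_users')
        ->where('email', $user->email)
        ->delete();
});

// Before each queued job runs, point the default connection at the
// database recorded when the job was dispatched ('db_server' is a
// hypothetical payload key, not a Laravel built-in).
Queue::before(function (JobProcessing $event) {
    $payload = $event->job->payload();

    config(['database.default' => $payload['db_server'] ?? config('database.default')]);
});
```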

7 thoughts on “Using multiple databases in Laravel for all queries”

  1. Why can’t you split it into multiple, independent installations with independent load balancers/web/DB servers? In case of the data centre outage, only some of your clients won’t be able to access their data rather than all of them.

    1. Do you mean create a separate database for each customer? We considered it but it could become a challenge managing all of the databases, for example running migrations.

      We’re setting up master/slave database replication to provide fail-over in case of a server failure. In the past I’ve had bad experiences with replication, here’s hoping it goes better this time 🙂

      1. Not necessarily a DB per client, but let’s say a DB per 20-30 clients or so. Personally I have never had problems with PostgreSQL when it comes to replication. Some of our databases were around 800GB and streaming replication worked perfectly. Automatic master/slave switchover is also a breeze. The last time I tried MySQL replication was 4-5 years ago and it was never as good as Postgres, but I have no idea how robust it is today.

  2. In my humble opinion the users table needs to be in a single database.

    But other kinds of data are more stable, like entities, which could be anything (letters, contacts, products, ..) and can be kept in dedicated databases following models.

    I am implementing an application where all the database field types and properties are included in a JSON file, with data for each entity (letters, contacts, products, ..).

    As such I am able to create everything from JSON files with no databases at all, as well as create JSON from a database.

    I did a matrix of model definitions for this, also in a JSON file.

    If you are interested, let me know.

    1. Is there a reason you think the users table needs to be in a single database?

      We have a lookup_users table which has the email address and server and then the full user details are in the database specific table.

      If the users table was in a separate database from the entities some basic queries would require cross-database joins.
