Running Laravel 7.x/8.x on Heroku
There are quite a few resources available on how to run Laravel on Heroku, but most of them only demonstrate the basics. When moving a fairly large and complex Laravel project to Heroku infrastructure, you are bound to run into some problems. In this article I’ve listed my challenges and how I fixed them.
TL;DR: most of it is pretty straightforward, but Heroku’s ephemeral filesystem can be a major annoyance. It forced me to rebuild quite a lot of code that depended on the presence of files. Also, if you use queue:work, there is some additional setup to do.
Setting up the basics
There are some excellent resources that helped me to get up and running:
- Getting started with Laravel (heroku.com)
- Setting config vars in Heroku (Elusoji Sodeeq)
- How to deploy a Laravel/Vue App to Heroku (note that this one didn’t quite work in my case, I’ve added my notes below).
Deploying a branch to Heroku
The first thing I ran into was that I wanted to make all the changes for supporting Heroku in a separate feature branch, but most guides only describe how to push your master branch to Heroku. If you’re not too familiar with git and multiple remotes, remember that you can use the following:
git push heroku feature/heroku:master
This will push your local ‘feature/heroku’ branch to Heroku.
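Note that this assumes you already have a git remote named ‘heroku’. If you don’t, the Heroku CLI can add it for you (the app name here is a placeholder for your own app):
heroku git:remote -a your-app-name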
Build script
Every time you deploy to Heroku, your app will be built. Since most Laravel apps contain both PHP code (with composer.json managing dependencies) and JavaScript code (with package.json managing dependencies), you will also need the Node.js buildpack:
heroku buildpacks:add heroku/nodejs
Now in your package.json, make sure to include:
"scripts": {
"heroku-prebuild": "npm install -f --no-optional",
"heroku-postbuild": "npm run prod"
},
"engines": {
"node": "14.x"
}
The ‘heroku-prebuild’ script installs all NPM requirements but excludes the optional packages. I ran into problems with the ‘fsevents’ library, which is required on macOS but fails to install on Heroku’s Linux stack. Figuring this out took a good few hours :-). But now, the build log kindly shows that it is skipping fsevents:
remote: >fsevents@1.2.13 install /tmp/build/node_modules/fsevents
remote: >node install.js
remote:
remote: Skipping 'fsevents' build as platform linux is not supported
Loading extra modules
You may need some extra PHP modules to be loaded for your application to work correctly. In my case, part of my code depended on the ability to read SQLite3 databases.
To tell Heroku to load extra PHP modules into Apache, you need to add them to the ‘require’ element of your composer.json:
"require": {
"ext-pdo_sqlite": "*",
}
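Keep in mind that Heroku builds from your composer.lock, so the lock file has to stay in sync with composer.json. Instead of editing the file by hand, you can let Composer add the requirement and update the lock file in one go (a sketch; adjust the version constraint to your needs):
composer require "ext-pdo_sqlite:*"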
Changes to your Laravel App
The basics are set: we know how to push our local development branch to Heroku, and how to build and release our app.
Now we can focus on changes we need to make to our architecture, configuration and even code to make Laravel work on the containerised infrastructure of Heroku.
To MySQL or to PostgreSQL, that’s the question
While there are Heroku add-ons that allow for MySQL (which is probably what most Laravel projects use), the most widely used database on Heroku projects is Postgres. So I opted for that, and it’s quite simple to switch:
- In your heroku dashboard, add the Heroku Postgres add-on
- In the add-on detail dashboard, go to ‘Settings’ and then to ‘Database Credentials’. Use the information there to fill your DB_* environment keys (see the example below). You can do that by hand, or create a .env file and push it to Heroku using Elusoji Sodeeq’s excellent Python script.
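For reference, setting the keys by hand could look something like this (the values are placeholders for the credentials shown in the add-on dashboard):
heroku config:set DB_CONNECTION=pgsql DB_HOST=<host> DB_PORT=<port> DB_DATABASE=<database> DB_USERNAME=<username> DB_PASSWORD=<password>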
For me, the transition to PostgreSQL was flawless, but if you run into issues I’d love to hear about them and add solutions to this article.
Force SSL
For a reason I haven’t figured out exactly, Laravel on Heroku decided to serve all my static resources over ‘http’ instead of ‘https’. The simple fix is to force the HTTPS scheme, which is always a good idea. You can do that with the following code in your AppServiceProvider:
<?php

namespace App\Providers;

use Illuminate\Support\ServiceProvider;

class AppServiceProvider extends ServiceProvider
{
    /**
     * Bootstrap any application services.
     *
     * @return void
     */
    public function boot()
    {
        \URL::forceScheme('https');
    }
}
Now all your resources will always be served over HTTPS.
Filesystem usage
I’ve saved the best for last: the Heroku Ephemeral Filesystem. One of the key characteristics of the Heroku platform is that a Dyno (which is the instance that is executing your Laravel code) has a local filesystem that you cannot rely on to keep your files. Heroku’s definition:
Each dyno gets its own ephemeral filesystem, with a fresh copy of the most recently deployed code. During the dyno’s lifetime its running processes can use the filesystem as a temporary scratchpad, but no files that are written are visible to processes in any other dyno and any files written will be discarded the moment the dyno is stopped or restarted. For example, this occurs any time a dyno is replaced due to application deployment and approximately once a day as part of normal dyno management.
Now this is a major issue if you’re used to just building your app for a VPS context where you can use the filesystem.
No more ‘file’ driver
There are several Laravel key components that by default use the filesystem to store information:
- Session Driver (config/session.php)
- Cache Driver (config/cache.php)
This is an excellent moment to move away from these defaults and use alternatives. For the Session Driver, you can very easily start by storing sessions in your PostgreSQL database:
php artisan session:table
php artisan migrate
Note: do NOT execute these manually on a Dyno; commit the changes to git, deploy, then run ‘heroku run php artisan migrate’. The generated migration must be in your git repository if you ever want to roll it back.
Note that Memcached or Redis are also good options. I don’t have experience with those, but the Laravel session and cache documentation covers both.
Note that for caching, it might be okay to still use the ephemeral filesystem, depending on what you are caching. If every Dyno worker can have its own cache and no sharing is required, you might be okay with the default. However, if your app depends on a shared cache, you should move your cache to the database, Redis or Memcached as well.
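Once the session table migration is in place, switching drivers is mostly a matter of environment variables, since the default config/session.php and config/cache.php read SESSION_DRIVER and CACHE_DRIVER. A minimal sketch, assuming you want the cache in the database as well (the cache table needs its own migration, generated locally and committed just like the session table):
php artisan cache:table
heroku config:set SESSION_DRIVER=database CACHE_DRIVER=database
heroku run php artisan migrate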
Logging
If you don’t change your logging driver, you will end up with logfiles written to local disks that you don’t have access to. The best way to deal with logging is to set the LOG_CHANNEL environment variable to ‘errorlog’:
heroku config:set LOG_CHANNEL=errorlog
Now you can tail the logs directly using:
heroku logs -t
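This works without any code changes because the default config/logging.php shipped with Laravel 7/8 already defines an errorlog channel, roughly like this:
'errorlog' => [
    'driver' => 'errorlog',
    'level' => 'debug',
],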
Use cloud storage
If your code is still using old ‘fopen()’ constructs and not leveraging the Laravel Filesystem abstractions, this is an excellent moment to start doing so. Moving from local file storage to S3, for instance, then becomes a breeze. Just make sure to add the following requirements to your composer.json:
"require": {
"league/flysystem-aws-s3-v3": "^1.0",
"league/flysystem-cached-adapter": "^1.1"
}
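With those packages installed, the ‘s3’ disk that ships in config/filesystems.php can be used as-is; it reads its credentials from environment variables, so you only need to set those as Heroku config vars. For reference, the disk definition looks roughly like this (the bucket and region come from your own AWS setup):
's3' => [
    'driver' => 's3',
    'key' => env('AWS_ACCESS_KEY_ID'),
    'secret' => env('AWS_SECRET_ACCESS_KEY'),
    'region' => env('AWS_DEFAULT_REGION'),
    'bucket' => env('AWS_BUCKET'),
],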
However, there are moments where you might still need the local filesystem. In my case, we were using a custom implementation of the PSR-16 cache interface backed by an SQLite3 file that needed to persist between queue jobs. But you have no guarantee that the file will be available, so we ended up syncing files from S3 to the local filesystem (and back). The following snippet of code might help you if you run into the same problem:
// Requires: use Illuminate\Support\Facades\Storage;

/**
 * Sync files from the local cache dir to S3.
 */
public function syncToS3()
{
    $files = Storage::disk('local-cache')->files($this->subdir);
    foreach ($files as $file) {
        // Replace any existing copy on S3 with the local version.
        if (Storage::disk('s3')->has($file)) {
            Storage::disk('s3')->delete($file);
        }
        Storage::disk('s3')->writeStream(
            $file,
            Storage::disk('local-cache')->readStream($file)
        );
    }
}

/**
 * Sync files from S3 to local, but skip any files that we
 * already have that are newer than the S3 version.
 */
public function syncFromS3()
{
    $files = Storage::disk('s3')->files($this->subdir);
    foreach ($files as $file) {
        if (Storage::disk('local-cache')->has($file)) {
            if (Storage::disk('s3')->lastModified($file) >
                Storage::disk('local-cache')->lastModified($file)) {
                // S3 has a newer version: replace the local copy.
                Storage::disk('local-cache')->delete($file);
                Storage::disk('local-cache')->writeStream(
                    $file,
                    Storage::disk('s3')->readStream($file)
                );
            }
            // Otherwise ignore, because the S3 file is older than the local file.
        } else {
            // The file does not exist locally yet: download it.
            Storage::disk('local-cache')->writeStream(
                $file,
                Storage::disk('s3')->readStream($file)
            );
        }
    }
}
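As a usage sketch (the property and class names are hypothetical), a queue job could pull the cache files down before doing its work and push the changes back up afterwards:
public function handle()
{
    // Make sure this dyno has the latest cache files before we start.
    $this->cache->syncFromS3();

    // ... do the actual work, reading/writing the local SQLite cache ...

    // Persist any changes so the next job (possibly on another dyno) can see them.
    $this->cache->syncToS3();
}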
Note that this is just an example; in most cases it would be better to use a shared service like Redis or a database to persist stateful information between workers.
Laravel Queues
One of the nicer abstractions within the Laravel framework is the concept of queues and workers: you can push resource-intensive work onto a queue and let workers (you can have multiple) pick up these jobs and process them (see the dispatch sketch after this list). For instance:
- When the runtime of a process exceeds a few seconds: you should use an async programming model.
- When you are doing a lot of callouts to external APIs that might time out.
- When a process is very memory-intensive: it’s much cleaner to keep your web PHP processes with a low memory limit and only use large memory limits on worker instances.
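As a minimal sketch (the job class is hypothetical), pushing work onto a queue is a one-liner; the queue:work process described below is what eventually picks it up and runs the job’s handle() method:
// Somewhere in a controller or service: queue the heavy work instead of doing it inline.
GenerateReport::dispatch($report);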
If you are using queues and workers, you might be tempted to use the following Procfile to start up a ‘web’ dyno and a ‘worker’ dyno:
web: vendor/bin/heroku-php-apache2 public/
worker: php artisan queue:restart && php artisan queue:work
This will however only spawn one worker bee on your worker dyno, not using all its resources. If you have a memory-intensive worker that uses 256Mb of memory, but a Dyno with 1Gb, you should start up 4 workers so jobs can be executed in parallel.
Using supervisor
On a Linux-based VPS, you would use Supervisor to manage your queue workers for you. Fortunately, I came across a blog post by Danny NG from a couple of years ago, detailing how to run Laravel 5 with Supervisor on Heroku. I’ve adapted his recommendations to Laravel 7/8 below.
We start by adding the python buildpack as the first buildpack; the order is important because PHP needs to be the last one:
heroku buildpacks:add --index 1 heroku/python
Verify your buildpacks using:
heroku buildpacks
Now there are several files to create.
requirements.txt
This file should contain only one line:
supervisor
supervisor.conf
This is the main configuration file, where the following sections are important:
[supervisord]
logfile = /tmp/supervisord.log
logfile_maxbytes = 50MB
logfile_backups = 10
loglevel = info
pidfile = /tmp/supervisord.pid
nodaemon = true

[inet_http_server]
port = 127.0.0.1:9001

[rpcinterface:supervisor]
supervisor.rpcinterface_factory = supervisor.rpcinterface:make_main_rpcinterface

[supervisorctl]
serverurl = http://127.0.0.1:9001

[include]
files = laravel-worker.conf
laravel-worker.conf
This one contains our worker details:
[program:laravel-worker]
process_name=%(program_name)s_%(process_num)02d
command=php /app/artisan queue:work --memory=64
autostart=true
autorestart=true
numprocs=8
redirect_stderr=true
stdout_logfile=/app/worker.log
Some important notes here:
- You probably have extra arguments for the queue:work command, like the number of tries, timeout, or queue to listen to. I’ve kept it short for readability.
- You can tweak ‘--memory’ and ‘numprocs’ together: multiplied, they should not exceed the memory available in your Dyno. For instance, on a 512Mb dyno you can use 8 workers of 64Mb each, or 4 workers of 128Mb.
Procfile
Now it all comes together: add the following line to your Procfile to start up supervisor in the foreground (not as a daemon) using your configs.
supervisor: supervisord -c supervisor.conf -n
Now commit your files to git, push them to Heroku and tail the logs:
heroku logs --tail
Web memory usage and concurrency
Finally, it’s time to do some performance tweaking to get the most out of our Dynos. For this, we can use some excellent advice from the Heroku Dev Center regarding PHP concurrency.
If you offload most of the resource-intensive work to workers using Laravel Queues, you can probably do with a 64Mb memory limit for normal web requests, depending on the complexity of your Laravel app of course.
So let’s assume we have a Dyno with 512Mb of memory; that would allow us to handle 8 concurrent requests on our Dyno.
Setting the memory limit for PHP processes
There are multiple ways of setting the memory limit on Heroku, either through a ‘.user.ini’ file or through a custom php-fpm config file. Because we are using Queue Workers as well, and those have their own memory settings, it’s a good idea to use the php-fpm custom config file in this case. That will make sure that we won’t influence other PHP processes with a global setting.
The contents for the file fpm_custom.conf should be:
php_value[memory_limit] = 64M
Now you can update your Procfile:
web: vendor/bin/heroku-php-apache2 -F fpm_custom.conf public/
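Putting the pieces from this article together, the complete Procfile now contains both the tuned web process and the supervisor process:
web: vendor/bin/heroku-php-apache2 -F fpm_custom.conf public/
supervisor: supervisord -c supervisor.conf -n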
Setting the WEB_CONCURRENCY
By default, Heroku will calculate the concurrency setting automatically for you, by dividing the available memory by the PHP memory limit. However, there are scenarios where this calculation fails.
So far we have used 3 buildpacks to build our Laravel app, and those together can mess up the concurrency settings. For instance, you might see this in your logs:
app[web.1]: $WEB_CONCURRENCY env var is set, skipping automatic calculation
app[web.1]: Starting php-fpm with 1 workers...
This is a waste of resources, because now you have only 1 PHP process running while there could be 8!
In this case, we would need to set the WEB_CONCURRENCY manually:
heroku config:set WEB_CONCURRENCY=8
Note that once you set it manually, you will need to update this environment variable whenever you scale your Dyno to a different size: moving to a Dyno with 1Gb would allow for a concurrency of 16, but the setting will not adjust itself.