Getting Started

Getting and installing OpenDataBio

OpenDataBio is a web-based application tested on the Debian, Ubuntu and Arch Linux distributions, and it should run on any Linux-based machine. There are no plans for Windows support, but it may be easy to install on a Windows machine using Docker.

OpenDataBio is written in PHP and developed with the Laravel framework. It requires a web server (Apache or nginx), PHP and a SQL database – tested only with MySQL and MariaDB.

You may install OpenDataBio easily using the Docker files included in the distribution. The repository now includes a docker/prod profile and docker-compose.prod.yml so Docker can also be used for production deployments (with server-specific tuning and secrets management).

If you just want to test OpenDataBio locally, follow the Docker Installation.


Next steps

  1. Apache Installation
  2. Nginx Installation
  3. Docker Installation
  4. Upgrade OpenDataBio

Prep for installation

  1. You may want to request a Tropicos.org API key for OpenDataBio to be able to retrieve taxonomic data from the Tropicos.org database. If not provided, mainly the GBIF nomenclatural service will be used;
  2. OpenDataBio sends emails to registered users, either to inform them that a Job has finished, to send data requests to dataset administrators, or for password recovery. You may use a Google email account for this, but you will need to change the account security options to allow OpenDataBio to send emails through it (turn on the Less secure app access option in the Gmail My Account page, and create a cron job to keep this option alive). Therefore, create a dedicated email address for your installation. Check the config/mail.php file for more options on how to send e-mails.
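The mail settings live in the application's .env file. A minimal sketch using Laravel's standard mail keys — all values below are placeholders for your own account, not defaults shipped with OpenDataBio:

```shell
# Hypothetical mail configuration for the .env file (values are placeholders)
MAIL_MAILER=smtp
MAIL_HOST=smtp.gmail.com
MAIL_PORT=587
MAIL_USERNAME=your-dedicated-address@gmail.com
MAIL_PASSWORD=your-app-password
MAIL_ENCRYPTION=tls
MAIL_FROM_ADDRESS=your-dedicated-address@gmail.com
MAIL_FROM_NAME="OpenDataBio"
```

Run `php artisan config:cache` after changing these values so the new settings take effect.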

1 - First time users

Tips for first-time users!

OpenDataBio is designed to be used online. Local installations are meant for testing or development, although a localhost installation could serve a single-user production environment.

User roles

  • If you are installing OpenDataBio, the first login must be done with the default super-admin user admin@example.org and password password1. Change these credentials immediately, or the installation will be open to anyone reading the docs;
  • Self-registration only grants access to datasets whose privacy is set to registered users and allows the user to download open-access data; it does not allow the user to edit or add data;
  • Only full users can contribute data.
  • Only a super-admin can grant the full-user role to registered users. Different OpenDataBio installations may have different policies on how to gain full-user access; contact the administrators of your installation for details.

See also User Model.

Prep your full-user account

  1. Register yourself as a Person and assign it as your user's default person, creating a link between your user account and yourself as a collector.
  2. You need at least one dataset to enter your own data.
  3. When you become a full user, a restricted-access Dataset and Project are automatically created for you (your Workspaces). You may modify these entities to fit your personal needs.
  4. You may create as many Projects and Datasets as needed, so understand how they work and which data they control access to.

Entering data

There are three main ways to import data into OpenDataBio:

  1. One by one through the web interface
  2. Importing from a spreadsheet file (CSV, XLSX or ODS) through the web interface, which uses the OpenDataBio POST API services
  3. Using the OpenDataBio POST API directly, for example through the OpenDataBio R package client

When using the OpenDataBio API services, you must prepare your data or file according to the field options of the POST verb for the specific endpoint you are importing to.
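As a sketch, a spreadsheet prepared for batch import is just a table whose header names match the POST field options of the target endpoint. The column names below are illustrative only — check the API documentation of the endpoint you are importing to:

```shell
# Hypothetical CSV prepared for a batch import; the header names are
# illustrative and must be replaced by the real POST fields of your endpoint.
cat > individuals.csv <<'EOF'
tag,date,latitude,longitude,collector
A-001,2024-05-10,-3.1019,-60.0250,1
A-002,2024-05-11,-3.1021,-60.0247,1
EOF
# inspect the header row before uploading
head -n 1 individuals.csv
```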

Tips for entering data

  1. If entering data for the first time, use the web interface and create at least one record for each model you need. Then play with the privacy settings of your Workspace Dataset and check whether you can access the data when logged in and when not logged in.
  2. Use a Dataset for a self-contained set of data that should be distributed as a group. Datasets are dynamic publications and have an author, date, and title.
  3. Although ODB attempts to minimize redundancy, giving users flexibility comes at a cost, and some definitions, like those of Traits or Persons, may receive duplicated entries. Care must therefore be taken when creating such records. Administrators may create a ‘code of conduct’ for the users of an ODB installation to minimize such redundancy.
  4. Follow an order when importing new data, starting with the libraries of common use. For example, register Locations, Taxons, Persons, Traits and any other shared library before importing Individuals or Measurements.
  5. There is no need to import POINT locations before importing Individuals: ODB creates the location for you when you inform latitude and longitude, and detects which parent location your individual belongs to. However, if you want to validate your points (understand where such a point location will be placed), you may use the Location API with the querytype parameter specified for this.
  6. There are different ways to create PLOT and TRANSECT locations; see Locations if that is your case.
  7. Creating Taxons requires only the specification of a name; ODB will search nomenclature services for you, find the name, metadata and parents, and import all of them if needed. If you are importing published names, just inform this single attribute. If a name is unpublished, you need to inform additional fields, so separate the batch importation of published and unpublished names into two sets.
  8. The notes field of any model accepts either plain text or a JSON-formatted string. The JSON option allows you to store custom structured data in any model having the notes field. You may, for example, store as notes some secondary fields from original sources when importing data, or any additional data not covered by the ODB database structure. Such data will not be validated by ODB, and the standardization of both tags and values depends on you. JSON notes are imported and exported as a JSON string and presented in the interface as a formatted table; URLs in your JSON are presented as links.
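As an illustration of the JSON option for notes — the tag names below are entirely your own convention (ODB does not validate them) — it pays to check that the string is valid JSON before importing it:

```shell
# Hypothetical JSON-formatted notes value; the keys are illustrative only.
notes='{"original_source_id": "XYZ-123", "field_notes": "collected after rain", "reference_url": "https://example.org/record/123"}'
# validate the string before importing it as a notes field
echo "$notes" | python3 -m json.tool
```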

2 - Apache Installation

How to install OpenDataBio

These instructions are for an apache-based installation. For nginx, use Nginx Installation.

Server requirements

  1. Supported PHP version >= 8.2 (8.3 recommended)
  2. Web server: apache for this guide. For nginx, use Nginx Installation.
  3. A SQL database: MySQL and MariaDB have been tested (MySQL 8.0 and MariaDB 10.6+); it may also work with Postgres.
  4. Required PHP extensions: openssl, pdo, pdo_mysql, mbstring, tokenizer, xml, dom, gd, exif, bcmath, zip, curl, redis.
  5. Redis Server is required for queues and cache.
  6. Tectonic is used for LaTeX/PDF label generation.
  7. Pandoc is used to translate LaTeX code in bibliographic references. It is not necessary for installation, but suggested for a better user experience.
  8. Supervisor, which is needed to run background jobs.

Create Dedicated User

The recommended way to install OpenDataBio for production is using a dedicated system user. In these instructions this user is odbserver.

Download OpenDataBio

Log in as your dedicated user and download or clone this software to where you want to install it. Here we assume this is /home/odbserver/opendatabio, so the installation files will reside in this directory. If this is not your path, adjust the paths below wherever they apply.



Prep the Server

First, install the prerequisite software: Apache, MySQL, PHP, Redis, Tectonic, Pandoc and Supervisor. On a Debian system, you need to install some PHP extensions as well and enable them:

sudo apt-get install software-properties-common
sudo add-apt-repository ppa:ondrej/php
sudo add-apt-repository ppa:ondrej/apache2

sudo apt-get install mysql-server redis-server tectonic php8.3 libapache2-mod-php8.3 php8.3-intl \
 php8.3-mysql php8.3-sqlite3 php8.3-gd php8.3-cli pandoc \
 php8.3-mbstring php8.3-xml php8.3-bcmath php8.3-zip php8.3-curl php8.3-redis \
 supervisor

sudo a2enmod php8.3
sudo phpenmod mbstring
sudo phpenmod xml
sudo phpenmod dom
sudo phpenmod gd
sudo a2enmod rewrite
sudo a2ensite opendatabio   #run after creating the site config below
sudo systemctl restart apache2.service



#To check if they are installed:
php -m | grep -E 'mbstring|cli|xml|gd|mysql|redis|bcmath|pcntl|zip'
tectonic --version
redis-server --version

Add the following to your Apache configuration.

  • Change /home/odbserver/opendatabio to your path (the files must be accessible by apache)
  • You may create a new file in the sites-available folder: /etc/apache2/sites-available/opendatabio.conf and place the following code in it.
sudo tee /etc/apache2/sites-available/opendatabio.conf > /dev/null <<'EOF'
<IfModule alias_module>
        Alias /opendatabio      /home/odbserver/opendatabio/public/
        Alias /fonts /home/odbserver/opendatabio/public/fonts
        Alias /images /home/odbserver/opendatabio/public/images
        Alias /build /home/odbserver/opendatabio/public/build
        Alias /vendor/livewire /home/odbserver/opendatabio/public/vendor/livewire
        <Directory "/home/odbserver/opendatabio/public">
                Require all granted
                AllowOverride All
        </Directory>
</IfModule>
EOF

This will cause Apache to redirect all requests for /opendatabio to the correct folder, and also allow the provided .htaccess file to handle the rewrite rules, so that the URLs will be pretty. If you would like the application to load when pointing the browser to the server root, add the following directive as well:

RedirectMatch ^/$ /opendatabio/

Content Security Policy (CSP) for Apache

Configure CSP at the web server layer (not in Laravel files). Apply it first in report-only mode, inspect logs, then switch to enforced mode.

For nginx standalone installs, use Nginx Installation.

Apache: where to put it

  1. Enable the required module:
sudo a2enmod headers
sudo systemctl restart apache2
  2. Edit your active vhost file (example):
sudo nano /etc/apache2/sites-available/opendatabio.conf
  3. Inside the correct <VirtualHost ...> block (HTTP and/or HTTPS), add the directive below. Note the trailing backslashes: Apache requires them to continue a directive over multiple lines.
Header always set Content-Security-Policy-Report-Only "\
  default-src 'self'; \
  base-uri 'self'; \
  form-action 'self'; \
  frame-ancestors 'self'; \
  object-src 'none'; \
  script-src 'self' 'unsafe-eval'; \
  style-src 'self' 'unsafe-inline'; \
  img-src 'self' data: https://server.arcgisonline.com https://*.tile.openstreetmap.org; \
  font-src 'self' data:; \
  connect-src 'self';"
  4. Reload Apache:
sudo apachectl configtest
sudo systemctl reload apache2

Subpath installs (/opendatabio)

If your installation runs under a subpath (for example http://localhost/opendatabio), set in .env:

APP_URL=http://localhost/opendatabio
ASSET_URL=http://localhost/opendatabio

Then refresh generated assets and Livewire files:

php artisan livewire:publish --assets
php artisan optimize:clear
npm run build

Notes

  1. https://server.arcgisonline.com and https://*.tile.openstreetmap.org are needed for map tiles.
  2. unsafe-inline / unsafe-eval are temporary compatibility flags; remove after hardening templates/assets.
  3. Keep Report-Only while tuning policy in production.

Configure your php.ini files. The installer may complain about missing PHP extensions, so remember to activate them in both the CLI ini (/etc/php/8.3/cli/php.ini) and the web ini (/etc/php/8.3/apache2/php.ini, or /etc/php/8.3/fpm/php.ini if you use PHP-FPM).

Update the values for the following variables:

Find files:
php -i | grep 'Configuration File'

Change in them:
	memory_limit should be at least 512M
	post_max_size should be at least 30M
	upload_max_filesize should be at least 30M

Something like:

[PHP]
allow_url_fopen=1
memory_limit = 512M

post_max_size = 100M
upload_max_filesize = 100M

Enable the Apache modules ‘mod_rewrite’ and ‘mod_alias’ and restart your Server:

sudo a2enmod rewrite
sudo a2ensite opendatabio
sudo systemctl restart apache2.service

Mysql Charset and Collation

  1. Add the following to your MySQL/MariaDB configuration file (mariadb.cnf or my.cnf); the charset and collation you choose for your installation must match those in config/database.php:
[mysqld]
character-set-client-handshake = FALSE  #without this, init_connect has no effect
collation-server      = utf8mb4_unicode_ci
init-connect          = "SET NAMES utf8mb4 COLLATE utf8mb4_unicode_ci"
character-set-server  = utf8mb4
log-bin-trust-function-creators = 1
sort_buffer_size = 256M  #large enough for geometry sort operations

#in the [mariadb] or [mysql] section:
max_allowed_packet=100M
innodb_log_file_size=300M  #not used by MySQL
  2. If you are using MariaDB and still get errors of type #1267 Illegal mix of collations, consult the MariaDB documentation on how to fix that.

Configure supervisord

Configure Supervisor, which is required for background jobs. Create a file named opendatabio-worker.conf in the Supervisor configuration folder (/etc/supervisor/conf.d) with the following content:

sudo tee /etc/supervisor/conf.d/opendatabio-worker.conf > /dev/null <<'EOF'
;--------------
[program:opendatabio-worker]
process_name=%(program_name)s_%(process_num)02d
command=php /home/odbserver/opendatabio/artisan queue:work --sleep=3 --tries=1 --timeout=0 --memory=512
autostart=true
autorestart=true
user=odbserver
numprocs=8
redirect_stderr=true
stdout_logfile=/home/odbserver/opendatabio/storage/logs/supervisor.log
;--------------
EOF

Folder permissions

  • The folders storage and bootstrap/cache must be writable by the server user (usually www-data). Set 0755 permissions on these directories.
  • The .env config file requires 0640 permissions.
  • There are different ways to set up permissions for files and folders of a Laravel application; below is the preferred method:
cd /home/odbserver

#give write permissions to odbserver user and the apache user
sudo chown -R odbserver:www-data opendatabio
sudo find ./opendatabio -type f -exec chmod 644 {} \;
sudo find ./opendatabio -type d -exec chmod 755 {} \;  

#in these folders the server stores data and files.
#Make sure their permission is correct
cd /home/odbserver/opendatabio
sudo chgrp -R www-data storage bootstrap/cache
sudo chmod -R ug+rwx storage bootstrap/cache

#make sure media folder has the correct permissions
sudo find ./storage/app/public/media  -type f -exec chmod 664 {} \;
sudo find ./storage/app/public/media  -type d -exec chmod 775 {} \;

#make sure the .env file has 640 permission
sudo chmod 640 ./.env
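To sanity-check the resulting modes, stat can print them in octal. A quick sketch on scratch files — run the same stat call on your real .env and storage paths:

```shell
# Sketch: verifying octal modes with stat (GNU coreutils) on scratch files.
d=$(mktemp -d)
touch "$d/.env" && chmod 640 "$d/.env"
mkdir -p "$d/storage" && chmod 755 "$d/storage"
# prints the octal mode followed by the path, e.g. "640 .../.env"
stat -c '%a %n' "$d/.env" "$d/storage"
```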

Install OpenDataBio

  1. Many Linux distributions (most notably Ubuntu and Debian) have different php.ini files for the command line interface and the Apache plugin. It is recommended to use the configuration file for Apache when running the install script, so it will be able to correctly point out missing extensions or configurations. To do so, find the correct path to the .ini file, and export it before using the php install command.

For example,

export PHPRC=/etc/php/8.3/apache2/php.ini
  2. The installation script will download the Composer dependency manager and all required PHP libraries listed in the composer.json file. However, if your server is behind a proxy, you should install and configure Composer independently. A PROXY configuration is implemented, but it is no longer used and has not been properly tested (if you require adjustments, open an issue on GitLab).

  3. The script will prompt you for configuration options, which are stored in the environment .env file in the application root folder.

You may, optionally, configure this file before running the installer:

  • Create a .env file from the provided example: cp .env.example .env
  • Read the comments in this file and adjust accordingly.
  • Make sure assets_url is correct for your deployment URL/subpath.
  4. Run the installer:
cd /home/odbserver/opendatabio
php install
  5. Build frontend assets after .env is configured (required when assets_url is added or changed):
npm run build
  6. Seed data: the script above will ask whether you want to install seed data for Locations and Taxons. Seed data is version specific; check the seed data repository version notes.

Installation issues

There are countless possible ways to install the application, but they may involve more steps and configurations.

  • If your browser returns 500|SERVER ERROR, check the last error in storage/logs/laravel.log. If you have ERROR: No application encryption key has been specified, run:
php artisan key:generate
php artisan config:cache
  • If you receive the error “failed to open stream: Connection timed out” while running the installer, this indicates a misconfiguration of your IPv6 routing. The easiest fix is to disable IPv6 routing on the server.
  • If you receive errors during the random seeding of the database, you may attempt to remove the database entirely and rebuild it. Of course, do not run this on a production installation.
php artisan migrate:fresh
  • You may also replace the Locations and Taxons tables with seed data after a fresh migration using:
php seedodb

Post-install configs

  • If your import/export jobs are not being processed, make sure Supervisor is running (systemctl start supervisor && systemctl enable supervisor) and check the log files at storage/logs/supervisor.log.
  • You can change several configuration variables for the application. The most important of those are probably set by the installer, and include database configuration and proxy settings, but many more exist in the .env and config/app.php files. In particular, you may want to change the language, timezone and e-mail settings. Run php artisan config:cache after updating the config files.
  • In order to stop search engine crawlers from indexing your database, add the following to your “robots.txt” in your server root folder (in Debian, /var/www/html):
User-agent: *
Disallow: /

Updating an existing Apache installation

Before updating, back up your database, .env, and storage/app/public/media. Before running commands, review config diffs for the target version:

  • Compare .env with .env.example (including assets_url)
  • Check PHP settings (php.ini in CLI and FPM/Apache)
  • Check Supervisor worker settings
  1. Put the application in maintenance mode:
cd /home/odbserver/opendatabio
php artisan down
  2. Update source code to the target version:
git fetch --tags
git checkout <target-tag-or-branch>
  3. Update dependencies and apply database migrations:
composer install --no-dev --optimize-autoloader
php artisan migrate:status
php artisan migrate --force
  4. Rebuild frontend assets after .env updates:
npm run build
  5. Refresh caches and restart queue workers:
php artisan optimize:clear
php artisan config:cache
php artisan queue:restart
echo "" > storage/logs/laravel.log
  6. Bring the application back online:
php artisan up

If the target version includes new environment variables (compare yours with the contents of .env.example), add them to .env before running asset/cache commands.

Storage & Backups

You may change storage configurations in config/filesystems.php, where you may define cloud-based storage. This may be needed if you have many users submitting media files, which requires lots of drive space.

  1. Data downloads are queued as jobs: each export is written to a temporary folder, and the file is deleted when the user deletes the job. This folder is defined as the download disk in the filesystems config file and points to storage/app/public/downloads. The awkward navigation of the UserJobs web interface will push users to delete old jobs, but implementing a cron cleaning job in your installation may still be advisable;
  2. Media files are by default stored in the media disk, which places files in the folder storage/app/public/media;
  3. For a regular configuration, create both directories storage/app/public/downloads and storage/app/public/media with permissions writable by the server user (see the Folder permissions topic above);
  4. Remember to include the media folder in a backup plan.
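Item 3 above can be sketched as follows, run from the application root (afterwards apply the group-ownership scheme from the Folder permissions section):

```shell
# Create the download and media folders with group-writable permissions.
mkdir -p storage/app/public/downloads storage/app/public/media
chmod 775 storage/app/public/downloads storage/app/public/media
# confirm the octal modes
stat -c '%a %n' storage/app/public/downloads storage/app/public/media
```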

3 - Docker Installation

How to install OpenDataBio with Docker

The easiest way to install and run OpenDataBio is using Docker with the configuration files provided, which contain everything required to run OpenDataBio. The setup uses nginx, MySQL, and Supervisor for queues.

Production profile

OpenDataBio now ships a production-oriented Docker profile:

  • docker/prod/nginx.conf
  • docker/prod/php.ini
  • docker/prod/www.conf
  • docker-compose.prod.yml

Run production compose with:

docker compose -f docker-compose.prod.yml build
docker compose -f docker-compose.prod.yml up -d

Key differences from dev:

  1. Uses docker/prod/* nginx/php-fpm configs.
  2. Removes source bind-mounts for app code.
  3. Disables phpMyAdmin by default (dev-only profile).
  4. Publishes nginx on port 80 (adjust if behind reverse proxy).

CSP in nginx (report-only) is included in docker/prod/nginx.conf. Keep report-only first, then enforce after validation.

Docker files

laravel-app/
----docker/*
----.env.docker
----docker-compose.yml
----Dockerfile
----Makefile

These files are adapted from an external Laravel Docker setup, which includes a production setting as well.

Installation



Prerequisites

  1. Docker with Compose plugin (docker compose v2).
  2. Linux/mac: user in the docker group or run with sudo.
  3. Windows: Docker Desktop (WSL2/Hyper-V enabled).

Quick start (Linux/mac, requires make)

cd opendatabio
make docker-init          # copies .env.docker, builds/starts, installs composer, key, migrates, storage:link
make docker-init SEED=1   # same as above + optional seed for Locations/Taxons
  • After configuring .env (or whenever ASSET_URL changes), rebuild assets:
npm ci   #may be needed in production
npm run build
  • App: http://localhost:8081 (user admin@example.org / password1)
  • phpMyAdmin: http://localhost:8082

Windows (PowerShell)

cd opendatabio
powershell -ExecutionPolicy Bypass -File scripts/docker-init.ps1
# optional seed
powershell -ExecutionPolicy Bypass -File scripts/docker-init.ps1 -Seed

Manual commands (if you do not have make installed)

cp .env.docker .env
docker compose up -d
docker compose exec -T -u www-data laravel composer install --optimize-autoloader
docker compose exec -T -u www-data laravel php artisan key:generate --force
docker compose exec -T -u www-data laravel php artisan migrate --force
docker compose exec -T -u www-data laravel php artisan storage:link

Optional seed without make:

docker compose exec -T -u www-data laravel php getseeds
docker exec -i odb_mysql mysql -uroot -psecret odbdocker < storage/Location*.sql
docker exec -i odb_mysql mysql -uroot -psecret odbdocker < storage/Taxon*.sql
rm storage/Location*.sql storage/Taxon*.sql

Data persistence

The docker images may be deleted without losing any data. The mysql tables are stored in a volume; you may change this to a local path bind.

docker volume list

Using

The Makefile contains the following commands to interact with the docker containers and odb.

Commands to build and create the app

  1. make docker-init - copy .env.docker (if missing), build/start containers, install composer, key, migrate, storage:link
  2. make build - build containers
  3. make key-generate - generate the app key and add it to .env
  4. make composer-install - install PHP dependencies
  5. make composer-update - update php dependencies
  6. make composer-dump-autoload - execute composer dump-autoload within container
  7. make migrate - create or update the database
  8. make drop-migrate - delete and recreate the database
  9. make seed-odb - seed the database with locations and taxons

Commands to access the docker containers

  1. make start - start all containers
  2. make stop - stop all containers
  3. make restart - restart all containers
  4. make ssh - enter the main laravel app container
  5. make ssh-mysql - enter the mysql container, so you may log in to the database using mysql -uUSER -pPWD
  6. make mysql - enter the docker mysql console
  7. make ssh-nginx - enter the nginx container
  8. make ssh-supervisord - enter the supervisord container

Maintenance commands

  1. make optimize - clean caches and log files
  2. make info - show app info
  3. make logs - show laravel logs
  4. make logs-mysql - show mysql logs
  5. make logs-nginx - show nginx logs
  6. make logs-supervisord - show supervisor logs

Deleting & rebuilding

If you have issues and changed the docker files, you may need to rebuild:

#delete all images without losing data
make stop  #first stop all containers
docker system prune -a  #answer Yes when prompted
make build
make start

Updating an existing Docker installation

Before updating, back up your database and storage/app/public/media. Before running commands, review config diffs for the target version:

  • Compare .env with .env.example (including assets_url)
  • Check PHP settings from the target profile (docker/prod/php.ini or your custom PHP config)
  • Check Supervisor settings (docker/supervisord.conf or your deployment equivalent)
  1. Update source code to the target version:
cd opendatabio
git fetch --tags
git checkout <target-tag-or-branch>
  2. Rebuild and restart containers:
make stop
make build
make start
  3. Update PHP dependencies and run database migrations:
make composer-install
make migrate
  4. Rebuild frontend assets after .env updates:
npm run build
  5. Refresh Laravel caches and restart queue workers:
make optimize
docker compose exec -T -u www-data laravel php artisan queue:restart

If the new version introduces changes in .env, add the new keys before asset/cache/worker refresh in production.

4 - Nginx Installation

How to install OpenDataBio with nginx

These instructions are for an nginx-based installation. If you prefer Apache, use the Apache installation page.

Server requirements

  1. Supported PHP version >= 8.2 (8.3 recommended).
  2. Web server: nginx.
  3. SQL database: MySQL or MariaDB (tested with MySQL 8.0 and MariaDB 10.6+).
  4. Required PHP extensions: openssl, pdo, pdo_mysql, mbstring, tokenizer, xml, dom, gd, exif, bcmath, zip, curl, redis.
  5. Redis for queues/cache.
  6. Tectonic for label PDF generation.
  7. Pandoc for bibliographic rendering (recommended).
  8. Supervisor for background jobs.

Nginx site config

Create your site config file (example):

sudo nano /etc/nginx/sites-available/opendatabio

Use this base server block (adjust paths/domain):

server {
    listen 80;
    server_name your-domain.example;

    root /home/odbserver/opendatabio/public;
    index index.php index.html;

    charset utf-8;
    client_max_body_size 300M;

    location / {
        try_files $uri $uri/ /index.php?$query_string;
    }

    location ~ \.php$ {
        try_files $uri =404;
        fastcgi_split_path_info ^(.+\.php)(/.+)$;
        fastcgi_pass unix:/var/run/php/php8.3-fpm.sock;
        fastcgi_index index.php;
        include fastcgi_params;
        fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name;
        fastcgi_param PATH_INFO $fastcgi_path_info;
        fastcgi_read_timeout 300;
    }

    location ~ /\. {
        deny all;
    }
}

Enable and reload:

sudo ln -s /etc/nginx/sites-available/opendatabio /etc/nginx/sites-enabled/opendatabio
sudo nginx -t
sudo systemctl reload nginx

Content Security Policy (CSP)

Edit the same nginx site file and add inside the server { ... } block:

add_header Content-Security-Policy-Report-Only "
  default-src 'self';
  base-uri 'self';
  form-action 'self';
  frame-ancestors 'self';
  object-src 'none';
  script-src 'self' 'unsafe-eval';
  style-src 'self' 'unsafe-inline';
  img-src 'self' data: https://server.arcgisonline.com https://*.tile.openstreetmap.org;
  font-src 'self' data:;
  connect-src 'self';
" always;

Then reload:

sudo nginx -t
sudo systemctl reload nginx

Notes:

  1. Start with Report-Only, then move to enforced CSP after validating logs.
  2. https://server.arcgisonline.com and https://*.tile.openstreetmap.org are required for map tiles.

Subpath installs (/opendatabio)

If your installation runs under a subpath (for example http://localhost/opendatabio), set in .env:

APP_URL=http://localhost/opendatabio
ASSET_URL=http://localhost/opendatabio

Then refresh generated assets and Livewire files:

php artisan livewire:publish --assets
php artisan optimize:clear
npm run build

Shared application setup

To avoid repeating the same instructions, use these sections from the Apache installation (they also apply to nginx deployments):

  1. PHP settings (php.ini) in Apache Installation
  2. Configure supervisord in Apache Installation
  3. Folder permissions in Apache Installation
  4. Install OpenDataBio in Apache Installation
  5. Post-install configs in Apache Installation

5 - Customize Installation

How to customize the web interface!

Simple changes that can be implemented in the layout of an OpenDataBio web site

Logo and Background Image

To replace the navigation bar logo and the image of the landing page, just put your image files in /public/custom/, replacing the existing files without changing their names.

Texts and Info

To change the welcome text of the landing page, change the values of the array keys in the following files:

  • /resources/lang/en/customs.php
  • /resources/lang/pt-br/customs.php
  • Do not remove the entry keys. Set a value to null to suppress it from the footer and landing page.

Local Documentation

You can add documentation in *.md format to the repository in files located in the following folders:

  • /resources/docs/en/*
  • /resources/docs/pt/*

This space is reserved for administrators to set documentation and custom directives for users of a specific OpenDataBio installation. For example, this is a place to include a code of conduct for users, information on who to contact to become a full user, specific tutorials, and so on.
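Adding a local page can be sketched as below, run from the application root; the file name and contents are illustrative only:

```shell
# Hypothetical local documentation page (file name and text are examples).
mkdir -p resources/docs/en
cat > resources/docs/en/code-of-conduct.md <<'EOF'
# Code of conduct
Contact the installation administrators to request full-user access.
EOF
ls resources/docs/en/
```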

  1. If you want to change the color of the top navigation bar and the footer, just replace the Bootstrap 5 CSS classes in the corresponding tags and files in the folder /resources/view/layout.
  2. You may add additional HTML to the footer and navbar, change the logo size, etc., as you wish.

6 - Upgrade OpenDataBio

Safe upgrade instructions for OpenDataBio installations

Use this guide to upgrade an existing OpenDataBio installation with minimal downtime.

Before you start

  1. Read the target release notes and confirm any breaking changes.
  2. Back up at least:
    • Database dump
    • .env
    • storage/app/public/media
  3. Compare current config files against target-version templates/settings:
    • .env against .env.example (including assets_url)
    • Supervisor worker config (/etc/supervisor/conf.d/opendatabio-worker.conf or container equivalent)
    • PHP config (php.ini for CLI and FPM/Apache)
  4. Plan a maintenance window for production.

Upgrade (Apache or nginx installation)

  1. Put the application in maintenance mode:
cd /home/odbserver/opendatabio
php artisan down
  2. Update source code:
git fetch --tags
git checkout <target-tag-or-branch>
  3. Install dependencies and run migrations:
composer install --no-dev --optimize-autoloader
php artisan migrate:status
php artisan migrate --force
  4. Rebuild frontend assets after .env changes (required when assets_url changes):
npm run build
  5. Refresh caches, clean logs, and restart services and workers:
php artisan optimize:clear
php artisan config:cache
php artisan queue:restart
sudo systemctl restart supervisor.service
sudo systemctl restart apache2.service   #or nginx and php-fpm, depending on your setup
#also restart mysql or mariadb if their configuration changed
echo "" > storage/logs/laravel.log
echo "" > storage/logs/supervisor.log
  6. Bring the app back online:
php artisan up

Upgrade (Docker installation)

  1. Update source code:
cd opendatabio
git fetch --tags
git checkout <target-tag-or-branch>
  2. Rebuild and restart containers:
make stop
make build
make start
  3. Install dependencies and run migrations:
make composer-install
make migrate
  4. Rebuild frontend assets after .env changes (required when assets_url changes):
npm run build
  5. Refresh caches and restart workers:
make optimize
docker compose exec -T -u www-data laravel php artisan queue:restart

Environment variables

If the target version introduces new environment variables, compare .env with .env.example and add missing keys before asset/cache/worker commands in production.
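One way to spot missing keys is to compare the variable names in the two files. A sketch using throwaway demo files — point the grep commands at your real .env and .env.example:

```shell
# Demo files standing in for your real .env and .env.example
printf 'APP_URL=x\nDB_HOST=y\n' > /tmp/env.current
printf 'APP_URL=x\nASSET_URL=z\nDB_HOST=y\n' > /tmp/env.example
# extract the variable names from each file
grep -oE '^[A-Z_][A-Z0-9_]*' /tmp/env.current | sort -u > /tmp/keys.current
grep -oE '^[A-Z_][A-Z0-9_]*' /tmp/env.example | sort -u > /tmp/keys.example
# keys present in the example but missing from the current file
comm -13 /tmp/keys.current /tmp/keys.example
# → ASSET_URL
```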

Rollback strategy

If something fails after migration:

  1. Keep maintenance mode on.
  2. Restore database backup and .env.
  3. Checkout the previous known-good tag.
  4. Rebuild dependencies/containers and validate logs before php artisan up.