DeveloperDocs.md

To make a digital twin, we recommend forking the entire DTBase repository and developing on that fork. Code changes to the repository will be inevitable as you customise the twin to your needs. You should view DTBase as a starting point from which to develop, rather than as a package ready to be deployed.

This document describes various aspects of the development workflow, most importantly how to run DTBase locally on your machine for development purposes.

Running an Instance of DTBase

DTBase has three main components:

  • PostgreSQL Database
  • Backend API
  • Web frontend

Each part can be run and hosted separately from the others. For instance, it's possible to run the backend and frontend locally but host the database somewhere else. This document offers a number of suggestions for how to run the entire application stack.

Run Entire Application Locally via Docker Compose

The easiest way to run all aspects of DTBase together is to use Docker Compose. Follow these steps to do so:

  1. Install Docker

  2. DTBase makes use of a set of secret values, such as database passwords and encryption keys for logins. These are kept in the .secrets folder in a Bash shell script that sets environment variables with the secret values. For your own deployment you should:

    • Copy the file .secrets/dtenv_template.sh to .secrets/dtenv_docker_deployment.sh (this could be named anything).
    • Populate this file with the following variables:
    #!/bin/bash
    
    # Dev database
    export DT_SQL_USER="postgres"
    export DT_SQL_PASS="<REPLACE_ME>"
    export DT_SQL_DBNAME="dtdb"
    
    # Secrets for the web servers
    export DT_DEFAULT_USER_PASS="<REPLACE_ME>"
    export DT_FRONT_SECRET_KEY="<REPLACE_ME>"
    export DT_JWT_SECRET_KEY="<REPLACE_ME>"
    

    You should, of course, replace all the <REPLACE_ME> values with long strings of random characters, or other good passwords; see the sketch just after this list for one way to generate them. They don't need to be human-memorisable. If any of these values leak, anyone can gain admin access to your DTBase deployment!

    You can find more documentation about what each of the environment variables is for in .secrets/dtenv_template.sh.

  3. Source these environment variables: source .secrets/dtenv_docker_deployment.sh

  4. Run docker compose up -d to build the images and run all the containers.
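
There is no prescribed way to generate the secret values. As a minimal sketch, assuming openssl is installed, the following loop prints an export line with a random 64-character hex value for each secret, which you can paste into .secrets/dtenv_docker_deployment.sh in place of the <REPLACE_ME> placeholders:

    # Print an export line with a random secret for each variable (sketch; assumes openssl)
    for var in DT_SQL_PASS DT_DEFAULT_USER_PASS DT_FRONT_SECRET_KEY DT_JWT_SECRET_KEY; do
        echo "export ${var}=\"$(openssl rand -hex 32)\""
    done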

The backend and frontend are exposed at http://localhost:5000 and http://localhost:8000 respectively. A volume is attached to the containers and will persist even after the containers have been stopped or deleted. The volume contains all the data from the database. If you want to start the application with a fresh database, then delete the volume before calling docker compose again.
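
For instance, assuming the default Compose setup, one way to reset everything is:

    # Stop the containers and delete the associated volumes (this wipes the database!)
    docker compose down --volumes
    # Recreate the containers with a fresh, empty database
    docker compose up -d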

Using Docker Compose is convenient for a quick local deployment. However, whenever the code changes the images need to be rebuilt. This doesn't take too long, but it isn't ideal. It's probably possible to tweak the Dockerfile and Compose files to mount the code so that an image rebuild is not needed, but we haven't done this.

If you don't want to spend time editing the Docker setup, don't want to rebuild images after every code change, want to avoid Docker as much as possible, or just want to run the test suite, then read on for how to run individual components via the command line.

Run Individual Components via the Command Line

Managing Secrets and Environment Variables

The first step is to set up environment variables. DTBase makes use of a set of secret values, such as database passwords and encryption keys for logins. These are kept in the .secrets folder, in a Bash shell script that sets environment variables with the secret values. For your own deployment you should

  1. Copy the file .secrets/dtenv_template.sh to .secrets/dtenv_localdb.sh
  2. Populate this file with the following variables:
    #!/bin/bash
    
    # Test database
    export DT_SQL_TESTUSER="postgres"
    export DT_SQL_TESTPASS="password"
    export DT_SQL_TESTHOST="localhost"
    export DT_SQL_TESTPORT="5432"
    export DT_SQL_TESTDBNAME="test_db"
    
    # Dev database
    export DT_SQL_USER="postgres"
    export DT_SQL_PASS="<REPLACE_ME>"
    export DT_SQL_HOST="localhost"
    export DT_SQL_PORT="5432"
    export DT_SQL_DBNAME="dtdb"
    
    # Secrets for the web servers
    export DT_DEFAULT_USER_PASS="<REPLACE_ME>"
    export DT_FRONT_SECRET_KEY="<REPLACE_ME>"
    export DT_JWT_SECRET_KEY="<REPLACE_ME>"
    
    You should, of course, replace all the <REPLACE_ME> values with long strings of random characters, or other good passwords (the secret-generation sketch earlier in this document works here too). They don't need to be human-memorisable. If any of these values leak, anyone can gain admin access to your DTBase deployment!

You can find more documentation about what each of the environment variables is for in .secrets/dtenv_template.sh.

In any terminal session where you want to e.g. run one of the web servers, you will need to start by sourcing the secrets file with source .secrets/dtenv_localdb.sh.
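
For example, in a fresh terminal session:

    # Load the secret values into the current shell
    source .secrets/dtenv_localdb.sh
    # Spot-check that a variable is now set
    echo "${DT_SQL_DBNAME}"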

Installing DTBase

Running the separate components locally via the command line requires DTBase to be installed as a package. Instructions are as follows:

  1. We recommend that you use a fresh Python environment (via virtualenv, conda, or Poetry) and a Python version between 3.10 and 3.12, inclusive.
  2. Navigate to the root directory of the DTBase repository.
  3. Install the dtbase package and dependencies (including the optional development dependencies) by running
pip install '.[dev]'
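
For example, using virtualenv, the whole sequence might look like this:

    # From the root of the DTBase repository:
    python -m venv venv            # create a fresh virtual environment
    source venv/bin/activate       # activate it (bash/zsh)
    pip install '.[dev]'           # install dtbase plus the development dependencies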

Running a Local Database

The easiest way to run a PostgreSQL server locally is using a prebuilt Docker image.

  1. Run source .secrets/dtenv_localdb.sh.

  2. Install Docker

  3. Run a PostgreSQL server in a Docker container:

    docker run --name dtbase-postgresql -e POSTGRES_PASSWORD=${DT_SQL_PASS} -e POSTGRES_USER=${DT_SQL_USER} -p 5432:5432 -d postgres:16

    The name dtbase-postgresql can be changed to anything you'd like.

If you've done this setup once and then e.g. rebooted your machine, all you should need to do is run docker start dtbase-postgresql to restart the existing Docker container.
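
To check that the server is up and accepting connections, you can, for instance, run a quick query through psql inside the container:

    # The container should appear in the list of running containers
    docker ps
    # List the databases on the server; this should succeed without errors
    docker exec dtbase-postgresql psql -U "${DT_SQL_USER}" -c '\l'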

Running the tests

Once a local database has been set up, the tests can be run with

python -m pytest

The tests spin up their own backend and frontend applications automatically.
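
The usual pytest selection flags work here too; for instance (the keyword expression below is just an illustrative example):

    python -m pytest -v             # run the whole suite verbosely
    python -m pytest -x             # stop at the first failure
    python -m pytest -k "sensor"    # run only tests whose names match a keyword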

Running the Backend Locally

The backend is a FastAPI app that provides API endpoints for reading from and writing to the database. To run it,

  1. Navigate to the directory dtbase/backend and run the command ./run_localdb.sh. This starts the FastAPI app listening on http://localhost:5000. You can test it by sending requests to some of its endpoints using e.g. Postman or the requests library (see the sketch after this list). To see all the API endpoints and what they do, navigate to http://localhost:5000/docs in a web browser.
  2. Optionally, you can use different modes for the backend, by running e.g. DT_CONFIG_MODE=Debug ./run_localdb.sh to run in debug mode. The valid options for DT_CONFIG_MODE can be found in dtbase/backend/config.py.
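
As a quick smoke test from the command line, you can fetch the OpenAPI specification that FastAPI serves by default at /openapi.json. For instance, assuming curl is available:

    # Fetch the OpenAPI spec and pretty-print the first few lines
    curl -s http://localhost:5000/openapi.json | python -m json.tool | head -n 20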

Running the Frontend Locally

The DTBase frontend is a Flask webapp. To run it you need to

  1. Install npm (Node Package Manager)
  2. In a new terminal session, again run source .secrets/dtenv_localdb.sh
  3. Navigate to the directory dtbase/frontend and run the command ./run.sh.
  4. You should now be able to view the frontend on your browser at http://localhost:8000.
  5. You can log in with the username default_user@localhost and the password you set as DT_DEFAULT_USER_PASS above when you created .secrets/dtenv_localdb.sh.
  6. Like for the backend, you can use different modes for the frontend, by running e.g. DT_CONFIG_MODE=Auto-login ./run.sh to be always automatically logged in as the default user. The valid options for DT_CONFIG_MODE for the frontend can be found in dtbase/frontend/config.py.

A tip: when developing the frontend, it can be very handy to run it with FLASK_DEBUG=true DT_CONFIG_MODE=Auto-login ./run.sh. The first environment variable makes Flask restart every time you change the code (the backend already has a similar auto-reload option enabled by default), and the second makes Flask automatically log in as the default user. This way, when you make code changes you can see the effect immediately in your browser, without having to restart or log in.

Running with a Non-local Database

Sometimes you want to run the backend and the frontend locally, but have the database reside elsewhere, e.g. on Azure. To do this,

  1. Copy the file .secrets/dtenv_template.sh to .secrets/dtenv.sh and populate it with values for the various environment variables. These are mostly the same as above when running a local database, except for DT_SQL_HOST, DT_SQL_PORT, DT_SQL_USER, and DT_SQL_PASS, which should be set to match the hostname, port, username, and password of the PostgreSQL server, wherever it is running. If you want to run against a database on Azure deployed by Pulumi, you may want to consult your Pulumi config for e.g. the password.
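
As a sketch, the relevant part of .secrets/dtenv.sh for a remote server might look like this (the hostname and other values below are purely illustrative):

    # Connection details for a remote PostgreSQL server (illustrative values only)
    export DT_SQL_HOST="my-dtbase-db.postgres.database.azure.com"
    export DT_SQL_PORT="5432"
    export DT_SQL_USER="<REPLACE_ME>"
    export DT_SQL_PASS="<REPLACE_ME>"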

To run the backend with this new set of environment variables, go to dtbase/backend and run ./run.sh. The frontend can be run exactly the same way as when your database is local.

Note that you may need to whitelist your IP address on the PostgreSQL server for it to accept your connection. This is necessary when the database is hosted on Azure, for instance.

Running All Components in the Cloud

Please refer to the infrastructure section of the main docs for documentation on how to deploy the entire application stack on Azure. There is no reason you couldn't use other platforms besides Azure, but for historical reasons we don't have instructions for how to do this.

Writing Code

Linting and Formatting

We run a set of linters and formatters on all code using pre-commit. It is installed as a dev dependency when you run pip install '.[dev]'. You also need to make sure you've run npm install --prefix dtbase/frontend/ --include=dev to install the linters and formatters for the frontend code. We recommend running pre-commit install so that pre-commit runs every time you git commit, and only allows the commit if the checks pass. If you need to bypass the checks for some commit, you can do so with git commit --no-verify.
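
For instance, the typical one-off setup and a full manual run look like this:

    pre-commit install              # run the checks automatically on every git commit
    pre-commit run --all-files      # run all linters and formatters across the repository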

Code Changes to a Live Twin

Once you've made changes to the code, or pulled upstream changes that someone else has made, you may wonder how this affects the data of your existing DTBase deployment. If you've used Docker Compose, your database will remain intact as you rebuild the frontend and backend containers or restart any or all of the containers, as long as you don't delete the Docker volume Compose creates. If you're managing the deployment without Docker Compose, as is typical for development purposes, your data is stored in the PostgreSQL container that we called dtbase-postgresql above (you may have chosen a different name). If you do want to delete all your data and start fresh, remove (not just stop, but really delete) that container and start a new one, or, if using the Docker Compose route, delete the volume created by Compose and run docker compose up again to recreate it.

More Information

We recommend reading the docs.md file to gain an understanding of what the various parts of the codebase do and how they work.